REAPING THE BENEFITS OF INTERACTIVE SYNTAX AND SEMANTICS* Kavi Mahesh Georgia Institute of Technology College of Computing Atlanta, GA 30332-0280 USA Internet: [email protected] Abstract Semantic feedback is an important source of informa- tion that a parser could use to deal with local ambigu- ities in syntax. However, it is difficult to devise a sys- tematic communication mechanism for interactive syn- tax and semantics. In this article, I propose a variant of left-corner parsing to define the points at which syntax and semantics should interact, an account of grammat- ical relations and thematic roles to define the content of the communication, and a conflict resolution strategy based on independent preferences from syntax and se- mantics. The resulting interactive model has been im- plemented in a program called COMPERE and shown to account for a wide variety of psycholinguistic data on structural and lexical ambiguities. INTRODUCTION The focus of investigation in language processing research has moved away from the issue of seman- tic feedback to syntactic processing primarily due to the difficulty of getting the communication be- tween syntax and semantics to work in a clean and systematic way. However, it is unquestionable that semantics does in fact provide useful information which when fed back to syntax could help elimi- nate many an alternative syntactic structure. In this article, I address three issues in the commu- nication mechanism between syntax and semantics and provide a complete and promising solution to the problem of interactive syntactic and semantic processing. Since natural languages are replete with ambi- guities at all levels, it appears intuitively that a processor with incremental interaction between the levels of syntax and semantics which makes the best and immediate use of both syntactic and semantic information to eliminate many alternatives would win over either a syntax-first or a semantics-first mechanism. In order to devise such an interactive mechanism, one has to address three important is- sues in the communication: (a) When to communi- cate: at what points should syntax and semantics interact, (b) What to communicate: what and how *The author would like to thank his advisor Dr. Kurt Eiselt and his colleague Justin Peterson for their support and valuable comments on this work. much information should they exchange, and (c) How to agree: how to resolve any conflicting pref- erences between syntax and semantics. In this article, I propose (a) a particular variant of left-corner parsing that I call Head-Signaled Left Corner Parsing (HSLC) to define the points where syntax and semantics should interact, (b) an ac- count of grammatical relations based on thematic roles as a medium for communication, and (c) a simple strategy based on syntactic and semantic preferences for resolving conflicts in the communi- cation. These solutions were motivated from an analysis of a large body of psycholinguistic data and account for a greater variety of experimen- tal observations on how humans deal with struc- tural and lexical ambiguities than previous models (Eiselt et al, 1993). While it also appears that the proposed interaction with semantics could make improvements to the efficiency of the parser in deal- ing with real texts, such a conclusion can only be drawn after an empirical evaluation. 
WHEN TO COMMUNICATE Syntax and semantics should interact only at those times when one can provide some information to the other to help reduce the number of choices be- ing considered. Only when the parser has analyzed a unit that carries some part of the meaning of the sentence (such as a content word) can semantics provide useful feedback perhaps using selectional preferences for fillers of thematic roles. We need to design a parsing strategy that communicates with semantics precisely at such points. While pure bottom-up parsing turns out to be too circumspect for this purpose, pure top-down parsing is too eager since it makes its commitments too early for seman- tics to have a say. A combination strategy called Left Corner (LC) parsing is a good middle ground making expectations for required constituents from the leftmost unit of a phrase but waiting to see the left corner before committing to a bigger syntactic unit (E.g., Abney and Johnson, 1991). In LC pars- ing, the leftmost child (the left corner) of a phrase is analyzed bottom-up, the phrase is projected up- ward from the leftmost child, and other children of the phrase are projected top-down from the phrase. 310 While LC parsing defines when to project top- down, it does not tell us when to make attachments. That is, it does not tell when to attempt to at- tach the phrase projected from its left corner to higher-level syntactic units. Should it be done im- mediately after the phrase has been formed from its left corner, or after the phrase is complete with all its children (both required and optional adjuncts), or at some intermediate point? Since ambigui- ties arise in making attachments and since seman- tics could help resolve such ambiguities, the points at which semantics can help, determine when the parser should attempt to make such attachments. LC parsing defines a range of parsing strategies in the spectrum of parsing algorithms along the "eagerness" dimension (Abney and Johnson, 1991). The two ends of this dimension are pure bottom- up (most circumspect) and pure top-down (most eager) parsers. Different LC parsers result from the choice of arc enumeration strategies employed in enumerating the nodes in a parse tree. In Arc Eager LC (AELC) Parsing, a node in the parse tree is linked to its parent without waiting to see all its children. Arc Standard LC (ASLC) Parsing, on the other hand, waits for all the children before making attachments. While this distinction vanishes for pure bottom-up or top-down parsing, it makes a big difference for LC Parsing. In this work, I propose an intermediate point in the LC Parsing spectrum between ASLC and AELC strategies and argue that the proposed point, that I call Head-Signaled LC Parsing (HSLC), turns out to be the optimal strategy for in- teraction with semantics. In this strategy, a node is linked to its parent as soon as all the required children of the node are analyzed, without waiting for other optional children to the right. The re- quired units are predefined syntactically for each phrase; they are not necessarily the same as the 'head' of the phrase. (E.g., N is the required unit for NP, V for VP, and NP for PP.) HSLC makes the parser wait for required units before interacting with semantics but does not wait for optional ad- juncts (such as PP adjuncts to NPs or VPs). 
The parsing spectrum now appears thus: (Bottom-Up --~ Head-Driven -~ ASLC -~ HSLC -~ AELC --~ Top-Down) Algorithm HSLC: Given a grammar and an empty set as the initial forest of parse trees, For each word, Add a new node T~ to the current forest of trees {Ti} for each category for the word in the lexicon mark T~ as a complete subtree Repeat until there are no more complete trees that can be attached to other trees, Propose attachments for a complete subtree Tj to a T~ that is expecting Tj, or to a T~ as an optional constituent, or to a new Tk to be created if Tj can be the left corner (leftmost child) of Tk Select an attachment (see below) and attach If a new Tk was created, add it to the forest, and make expectations for required units of Tk If a T~ in the forest has seen all its required units, Mark the T~ as a complete subtree. Consider a PP attachment ambiguity and the tree traversal labelings produced by different LC parsers shown in Figure 1. It can be seen from Fig- ure la that AELC attempts to attach the PP to the VP or NP even before the noun in the PP has been seen. At this time, semantics cannot provide useful feedback since it has no information on the role filler for a thematic role to evaluate it against known selectional preferences for that role filler. Thus AELC is too eager for interactive semantics. ASLC, on the other hand, does not attempt to at- tach the VP to the S until the very end (Fig lb). Thus even the thematic role of the subject NP re- mains unresolved until the very end. ASLC is too circumspect for interactive semantics. HSLC on the other hand, attempts to make attachments at the right time for interaction with semantics (Fig lc). 6 / (a) AELC 22D~ T 26"~ (b) ASLC 6 1%R ~ 2~, 22DE" T 24 N (©) HSLC Figure 1: LC Parsers at an Attachment Ambiguity WHAT TO COMMUNICATE The content of the communication between syntax and semantics is a set of grammatical relations and thematic roles. Syntax talks about the grammati- cal relations between the parts of a sentence such 311 as Subject, Direct-object, Indirect-object, preposi- tional modifier, and so on. Semantics talks about the thematic relations between parts of the sen- tence such as event, agent, theme, experiencer, beneficiary, co-agent, and so on. These two closed classes of relations are translated to one another by introducing what I call "intermediate roles" to take into account other kinds of linguistic in- formation such as active/passive voice, VP- vs. NP-modification, and so on. Examples of inter- mediate roles are: active-subject, passive-subject, VP-With-modifier, subject-With-modifier, and so on. While space limitations do not permit a more detailed description here, the motivation for intermediate roles as declarative representations for syntax-semantics communication has been de- scribed in (Mahesh and Eiselt, to appear). The grammatical relations proposed by syntax are translated to the corresponding thematic rela- tions using the intermediate roles. Semantics eval- uates the proposed role bindings using any selec- tional preferences for role fillers associated with the meanings of the words involved. It communicates back to syntax a set of either an Yes, a No, or a Don't-Care for each proposed syntactic attach- ment. A Yes answer is the result of satisfying one more selectional preferences for the role binding; a No for failing to meet a selectional constraint; and a Don't-Care when there are no known preferences for the particular role assignment. 
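To make this translation-and-feedback step concrete, the following is a minimal sketch of how a proposed attachment could be mapped from a grammatical relation through an intermediate role to a thematic role and then answered with a Yes, No, or Don't-Care. It is an illustration only, not the COMPERE implementation; the role tables, the toy selectional preference, and all function and variable names are invented for this sketch.

# Hedged sketch of the syntax-semantics feedback step described above.
# The mappings and the toy selectional preference are invented examples.
INTERMEDIATE_ROLE = {
    # (grammatical relation, construction) -> intermediate role
    ("subject", "active"): "active-subject",
    ("subject", "passive"): "passive-subject",
    ("pp-modifier", "vp"): "VP-with-modifier",
}
THEMATIC_ROLE = {
    "active-subject": "agent",
    "passive-subject": "theme",
    "VP-with-modifier": "instrument",
}
# Toy selectional preferences: verb -> {thematic role: test on the proposed filler}
PREFERENCES = {
    "eat": {"agent": lambda filler: filler.get("animate", False)},
}

def semantic_feedback(verb, relation, construction, filler):
    """Return 'Yes', 'No', or "Don't-Care" for one proposed attachment."""
    intermediate = INTERMEDIATE_ROLE.get((relation, construction))
    role = THEMATIC_ROLE.get(intermediate)
    tests = PREFERENCES.get(verb, {})
    if role is None or role not in tests:
        return "Don't-Care"   # no known preference for this role binding
    return "Yes" if tests[role](filler) else "No"

# e.g. semantic_feedback("eat", "subject", "active", {"animate": True}) -> "Yes"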
HOW TO AGREE Since syntax and semantics have independent pref- erences for multiple ways of composing the different parts of a sentence, an arbitrating process (that I call the Unified Process) manages the communica- tion and resolves any conflicts. This unified process helps select the alternative that is best given the preferences of both syntax and semantics. In ad- dition, since the decisions so made are never guar- anteed to be correct, the unified process is not de- terministic and has the capability of retaining uns- elected alternatives and recovering from any errors detected at later times. The details of such an er- ror recovery mechanism are not presented here but can be found in (Eiselt et al, 1993) for example. Syntax has several levels of preferences for the attachments it proposes based on the following cri- teria: Attachment (of a required unit) to an expect- ing unit has the highest preference. Attachment as an optional constituent to an existing (completed) unit has the next highest preference. Attachment to a node to be newly created (to start a new phrase) has the least amount of preference. These preferences are used to rank syntactic alternatives. The algorithm for the unified process: Given: A set of feasible attachments {AI} where each Ai is a fist of the two syntactic nodes being attached, the level of syntactic preference, and one of (Yes, No, Don't-Care) as the semantic feedback, If the most preferred syntactic alternative has an Yes or Don't-Care, select it else if no other syntactic alternative has a Yes, then select the most preferred syntactic alternative that has a Don't-Care else delay the decision and pursue multiple interpretations in parallel until further information changes the balance. DISCUSSION The model of interactive syntactic and semantic processing proposed accounts for a wide range psy- cholinguistic phenomena related to the handling of lexical and structural ambiguities by human parsers. Its theory of communication and the arbi- tration mechanism can explain data that modular theories of syntax and semantics can explain as well as data that interactive theories can (Eiselt et al, 1993). For instance, it can explain why sentence (1) below is a garden-path but sentence (2) is not. (1) The officers taught at the academy were very demanding. (2) The courses taught at the academy were very demanding. HSLC is different from both head-driven pars- ing and head-corner parsing. It can be shown that the sequence of attachments proposed by HSLC is more optimal for interactive semantics than those produced by either of the above strategies. HSLC is a hybrid of left-corner and head-driven parsing strategies and exploits the advantages of both. In conclusion, I have sketched briefly a solution to the three problems of synchronization, content, and conflict resolution in interactive syntax and se- mantics. This solution has been shown to have dis- tinct advantages in explaining psychological data on human language processing. The model is also a promising strategy for improving the efficiency of syntactic analysis. However, the latter claim is yet to be evaluated empirically. REFERENCES Steven P. Abney and Mark Johnson. 1991. Memory Requirements and Local Ambiguities of Parsing Strategies. J. Psycholinguistic Research, 20(3):233-250. Kurt P. Eiselt, Kavi Mahesh, and Jennifer K. Hol- brook. 1993. Having Your Cake and Eating It Too: Autonomy and Interaction in a Model of Sentence Processing. Proc. 
Eleventh National Conference on Artificial Intelligence (AAAI-93), pp. 380-385.
Kavi Mahesh and Kurt P. Eiselt. Uniform Representations for Syntax-Semantics Arbitration. To appear in Proc. Sixteenth Annual Conference of the Cognitive Science Society, Atlanta, GA, Aug 1994.
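Read against the unified-process selection rule given in the HOW TO AGREE section above, the following is a minimal sketch of how ranked syntactic preferences and semantic feedback might be combined to select an attachment or to delay the decision. It is an illustration under assumed data structures, not the COMPERE implementation.

# Hedged sketch of the unified-process selection rule.  Each feasible
# attachment is (nodes, syntactic preference level, semantic feedback), where
# level 0 = attachment to an expecting unit, 1 = optional constituent of a
# completed unit, 2 = attachment to a newly created node.

def select_attachment(feasible):
    # Returns the chosen attachment, or None to signal that the decision is
    # delayed and multiple interpretations are pursued in parallel.
    ranked = sorted(feasible, key=lambda a: a[1])
    best = ranked[0]
    if best[2] in ("Yes", "Don't-Care"):
        return best
    if not any(a[2] == "Yes" for a in ranked[1:]):
        for alt in ranked[1:]:
            if alt[2] == "Don't-Care":
                return alt
    return None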
GRADED UNIFICATION: A FRAMEWORK FOR INTERACTIVE PROCESSING Albert Kim * Department of Computer and Information Sciences University of Pennsylvania Philadelphia, Pennsylvania, USA email: alkim©unagi, cis. upenn, edu Abstract An extension to classical unification, called graded unifica- tion is presented. It is capable of combining contradictory information. An interactive processing paradigm and parser based on this new operator are also presented. Introduction Improved understanding of the nature of knowledge used in human language processing suggests the fea- sibility of interactive models in computational linguis- tics (CL). Recent psycholinguistic work such as (Stowe, 1989; Trueswell et al., 1994) has documented rapid em- ployment of semantic information to guide human syn- tactic processing. In addition, corpus-based stochas- tic modelling of lexical patterns (see Weischedel et al., 1993) may provide information about word sense fre- quency of the kind advocated since (Ford et al., 1982). Incremental employment of such knowledge to resolve syntactic ambiguity is a natural step towards improved cognitive accuracy and efficiency in CL models. This exercise will, however, pose difficulties for the classical ('hard') constraint-based paradigm. As illus- trated by the Trueswell et al. (1994) results, this view of constraints is too rigid to handle the kinds of effects at hand. These experiments used pairs of locally am- biguous reduced relative clauses such as: 1) the man recognized by the spy took off down the street 2) the van recognized by the spy took off down the street The verb recognized is ambiguously either a past par- ticipial form or a past tense form. Eye tracking showed that subjects resolved the ambiguity rapidly (before reading the by-phrase) in 2) but not in 1) 1. The con- clusion they draw is that subjects use knowledge about thematic roles to guide syntactic decisions. Since van, which is inanimate, makes a good Theme but a poor Agent for recognized, the past participial analysis in 2) is reinforced and the main clause (past tense) sup- pressed. Being animate, man performs either thematic role well, allowing the main clause reading to remain *I thank Christy Doran, Jason Eisner, Jeff Reynar, and John Trueswell for valuable comments. I am grateful to Ewan Klein and the Centre for Cognitive Science, Edin- burgh, where most of this work was conducted, and also ac- knowledge the support of DARPA grant N00014-90-J-1863. 1In fact, ambiguity effects were often completely elimi- nated in examples like 2), with reading times matching those for the unambiguous case: 3) the man/van that was recognized by the spy ... plausible until the disambiguating by-phrase is encoun- tered. At this point, readers of 1) displayed confusion. Semantic constraints do appear to be at work here. However, the effects observed by Trueswell et al. are graded. Verb-complement combinations occupy a con- tinuous spectrum of "thematic fit", which influences reading times. This likely stems from the variance of verbs with respect to the thematic roles they allow (e.g., Agent, Instrument, Patient, etc.) and the syntactic po- sitions of these. The upshot of such observations is that classical uni- fication (see Shieber, 1986), which has served well as the combinatory mechanism in classical constraint-based parsers, is too brittle to withstand this onslaught of uncertainty. This paper presents an extension to classical unifi- cation, called graded unification. 
Graded unification combines two feature structures, and returns a strength which reflects the compatibility of the information en- coded by the two structures. Thus, two structures which could not unify via classical unification may unify via graded unification, and all combinatory decisions made during processing are endowed with a level of goodness. The operator is similar in spirit to the op- erators of fuzzy logic (see Kapcprzyk, 1992), which at- tempts to provide a calculus for reasoning in uncertain domains. Another related approach is the "Unification Space" model of Kempen & Vosse (1989), which unifies through a process of simulated annealing, and also uses a notion of unification strength. A parser has been implemented which combines con- stituents via graded unification and whose decisions are influenced by unification strengths. The result is a paradigm of incremental processing, which maintains a feature-based system of knowledge representation. System Description Though the employment of graded unification engen- ders a new processing style, the system's architecture parallels that of a conventional unification-based parser. Feature Structures: Prioritized Features The feature structures which encode the grammar in this system are conventional feature structures aug- mented by the association of priorities with each atomic-valued feature. Prioritizing features allows them to vary in terms of influence over the strength of unification. The priority of an atomic-valued feature fi in a feature structure X will be denoted by Pri(fi, X). The effect of feature prioritization is clarified in the fol- lowing sections. 313 Graded Unification Given two feature structures, the graded unification mechanism (Ua) computes two results, a unifying struc- ture and a unification strength. Structural Unification Graded unification builds structure exactly as classical unification except in the case of atomic unification, where it deviates crucially. Atoms in this framework are weighted disjunctive val- ues. The weight associated with a disjunct is viewed as the confidence with which the processor believes that disjunct to be the 'correct' value. Figures l(a) and l(b) depict atoms (where l(a) is "truly atomic" because it contains only one disjunct). (a) (b) (¢) Figure h Examples of Atoms Atomic unification creates a mixture of its two ar- gument atoms as follows. When two atoms are unified, the set union of their disjuncts is collected in the result. For each disjunct in the result, the associated weight be- comes the average of the weights associated with that disjunct in the two argument atoms. Figure l(c) shows an example unification of two atoms. The result is an atom which is 'believed' to be SG (singular), but could possibly be PL (plural). Unification Strength The unification strength (de- noted t3aStrength) is a weighted average of atomic uni- fication strengths, defined in terms of two sums, the actual compatibility and the perfect compatibility. If A and B are non-atomic feature structures to be unified, then the following holds: I laStrength(A, B) = ActualCornpatibility(A,B) Per ] ectC ornpatibility( A,B ) " The actual compatibility is the sum: Pri(fi,A)+Pri(li,B) , UGStrength(via,ViB) ~. if fi shared by A and B • Pvi(fi, A) if fi occurs only in A Pri(fi, B) if fi occurs only in B where i indexes all atomic-valued features in A or B, and v;a and ViB are the values of fi in A and B respec- tively. 
The perfect compatibility is computed by a formula identical to this except that UaStrength is set to 1. If A and B are atomic, then IIGStrenglh(A, B) is the total weight of disjuncts shared by A and B: tJcStrength(A,B) = ~-~i Min(wiA, WiB) where i in- dexes all disjuncts di shared by A and B, and wia and wiB are the weights of di in A and B respectively. By taking atomic unification strengths into account, the actual compatibility provides a raw measure of the extent to which two feature structures agree. By ig- noring unification strengths (assuming a value of 1.o), the perfect compatibility is an idealization of the actual compatibility; it is what the actual compatibility would be if the two structures were able to unify via classical unification. Thus, unification strength is always a value between 0 and 1. The Parser: Activated Chart Edges The parser is a modified unification-based chart parser. Chart edges are assigned activation levels, which repre- sent the 'goodness' of (or confidence in) their associated analyses. Each new edge is activated according to the strength of the unification which licenses its creation and the activations of its constituent edges. Constraining Graded Unification Without some strict limit on its operation, graded unification will over- generate wildly. Two mechanisms exist to constrain graded unification. First, if a particular unification completes with strength below a specified unification threshold, it fails. Second, if a new edge is constructed with activation below a specified activation threshold, it is not allowed to enter the chart, and is suspended. Parsing Strategy The chart is initialized to contain one inactive edge for each lexical entry of each word in the input. Lexical edges are currently assigned an initial activation of 1.o. The chart can then be expanded in two ways: 1. An active edge may be extended by unifying its first unseen constituent with the LrlS of an inactive edge. 2. A new active edge may be created by unifying the LHS of a rule with the first unseen constituent of some active edge in the chart (top down rule invocation). E~EI IA ~ s o/c~>~ ,r~e.2 I I G ~ [ c" -- o ,o Figure 2: Extension of an Active Edge by an Inactive Edge Figure 2 depicts the extension of the active EDGE1 with the inactive EDGE2. The characters represent feature structures, and the ovular nodes on the right end of each edge represent activation level. The parser tries to unify C', the mother node of EDGE2, with C, the first needed constituent of EDGE1. If this unification succeeds, the parser builds the extended edge, EDGE3 (where C Ua C' produces C"). The activation of the new edge is a function of the strength of the unification and the current activations of EDGE1 and EDGE2: activ3 = wl • tJcSTRENGTH(C, C') + w~ • activl 9- w 3 . activ2 (The weights wi sum to 1.) EDGE3 enters the chart only if its activation exceeds the activation threshold. Rule invocation is depicted in figure 3. The first needed constituent in EDGE1 is uni- fied with the LHS of aULE1. EDGE2 is created to begin searching for C. The new edge's activation is again a function of unification strength and other activations: activ 3 --- wl • UGSTRENGTH(C, C') 9- w2 • activl + w 3 . activ2 314 E~E~ I A -- B o / C ~ RULEI [_IGOr-------------'/ [ C ' -- D E ~ EDGE2 ~ ' J ~ " ~ o D E Figure 3: Top Down Rule Invocation The activation levels of grammar rule edges, like those for lexical edges, are currently pegged to 1.o. 
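To tie the pieces above together, here is a minimal sketch of graded unification over flat (non-nested) feature structures together with the edge-activation update. It is an assumption-laden illustration rather than the implemented system: the data layout, function names, and weight values are invented, and averaging the two features' priorities for shared features is one reading of the strength formula.

# Hedged sketch of graded unification for flat feature structures.
# An atom is a dict {disjunct: weight}; a feature structure is a dict
# {feature: (priority, atom)}.  Nesting and structure sharing are ignored.

def unify_atoms(a, b):
    # Union of disjuncts; each weight becomes the average of the two weights
    # (a missing disjunct is treated as weight 0.0, an assumption).
    return {d: (a.get(d, 0.0) + b.get(d, 0.0)) / 2.0 for d in set(a) | set(b)}

def atomic_strength(a, b):
    # Total weight of shared disjuncts, taking the minimum weight of each.
    return sum(min(a[d], b[d]) for d in set(a) & set(b))

def graded_unify(x, y):
    # Returns (unified structure, unification strength) for flat structures.
    actual = perfect = 0.0
    result = {}
    for f in set(x) | set(y):
        if f in x and f in y:
            (pa, va), (pb, vb) = x[f], y[f]
            w = (pa + pb) / 2.0               # shared feature: averaged priority
            actual += w * atomic_strength(va, vb)
            perfect += w
            result[f] = (w, unify_atoms(va, vb))
        else:
            p, v = x[f] if f in x else y[f]   # feature present on one side only
            actual += p
            perfect += p
            result[f] = (p, dict(v))
    return result, (actual / perfect if perfect else 1.0)

def activation(strength, act1, act2, w1=0.4, w2=0.3, w3=0.3):
    # activ3 = w1 * strength + w2 * activ1 + w3 * activ2, weights summing to 1;
    # the particular weight values here are placeholders.
    return w1 * strength + w2 * act1 + w3 * act2

# e.g. graded_unify({"num": (1.0, {"SG": 0.8, "PL": 0.2})},
#                   {"num": (1.0, {"SG": 1.0})}) yields strength 0.8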
A Framework for Interactive Processing The system described above provides a flexible frame- work for the interactive use of non-syntactic knowledge. Animacy and Thematic Roles Knowledge about animacy and its important function in the filling of thematic roles can be modelled as a binary feature, ANIMATE. A (active voice) verb can strongly 'want' an animate Agent by specifying that its subject be [ANIMATE Jr] and assigning a high priority to the feature ANIMATE. Thus, any parse combining this verb with an inanimate subject will suffer in terms of unification strength. A noun can be strongly animate by having a high weight associated with the positive value of ANIMATE. Animacy has been encoded in a toy grammar. However, principled settings for the priority of this feature are left to future work. Statistical Information from Corpora Corpus-based part-of-speech (POS) statistics can also be naturally incorporated into the current model. It is proposed here that a Viterbi decoder could be used to generate the likelihoods of the n best POS tags for a given word in the input string. Lexical chart edges would then be initially activated to levels pro- portional to the predicted likelihoods of their associ- ated tags. Since these activations will be propagated to larger edges, parses involving predicted word senses would consequently be given a head start in a race of ac- tivations. Attractively, this strategy allows a fuller use of statistical information than one which uses the in- formation simply to deterministically choose the n best tags, which are then treated as equally likely. Interaction of Diverse Information A crucial feature of this framework is its potential for modelling the interaction between sources of informa- tion like the two above when they disagree. Sentences 1} and 2) again provide illustration. In such sentences, knowledge about word sense frequency supports the wrong analysis, and semantic constraints must be em- ployed to achieve the correct (human) performance. Intuitively, the raw frequency (without considering context) of the past tense form of recognized is higher than that of the past participial. POS taggers, despite considering local context, consistently mis-tag the verb in reduced relatives. The absence of a disambiguating relativizer (e.g., that) is one obvious source of difficulty here. But even the ostensibly disambiguating prepo- sition by, is itself ambiguous, since it might introduce a manner or locative phrase consistent with the main clause analysis. 2 Modelling human performance in such contexts requires allowing thematic information to compete against and defeat word frequency information. The current model allows such competition, as follows. POS information may incorrectly predict the main clause analysis, boosting the lexical edge associated with the past tense, and thereby boosting the main clause parse. However, the unification combining the past tense form of recognized with an inanimate subject (van) will be weak, due to the constraints encoded in the verb's lexi- cal entry. Since the activations of constituent edges de- pend on the strengths of the unifications used to build them, the main clause parse Will lose activation. The parse combining the past participial with an inanimate subject (Theme) will suffer no losses, allowing it to over- take the incorrect parse. Conclusions and Future Work Assigning feature priorities and activation thresholds in this model will certainly be a considerable task. 
It is hoped that principled and automated methods can be found for assigning values to these variables. One promising idea is to glean information about patterns of subcategorization and thematic roles from annotated corpora. Annotation of such information has been suggested as a future direction for the Treebank project (Marcus et al., 1993). It should be noted that learning such information will require more training data (hence larger corpora) than learning to tag part of speech. In addition, psycholinguistic studies such as the large norming study 3 of MacDonald and Pearlmutter (described in Trueswell et al., 1994) may prove useful in encoding thematic information in small lexicons.
References
Ford, M., J. Bresnan, & R. Kaplan (1982). A Competence Based Theory of Syntactic Closure. In Bresnan, J. (Ed.), The Mental Representation of Grammatical Relations (pp. 727-796). MIT Press, Cambridge, MA.
Kempen, G. and T. Vosse (1989). Incremental Syntactic Tree Formation in Human Sentence Processing: a Cognitive Architecture Based on Activation Decay and Simulated Annealing. Connection Science, 1(3), 273-290.
Kapcprzyk, J. (1992). Fuzzy Sets and Fuzzy Logic. In Shapiro, S. (Ed.), The Encyclopedia of Artificial Intelligence. John Wiley & Sons, New York.
Marcus, M., B. Santorini, and M. Marcinkiewicz (1993). Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19(2), 1993.
Shieber, S. (1986). An Introduction to Unification-Based Approaches to Grammar. CSLI Lecture Notes, Chicago University Press, Chicago.
Stowe, L. (1989). Thematic Structures and Sentence Comprehension. In Carlson, G. and M. Tanenhaus (Eds.), Linguistic Structure in Language Processing. Kluwer Academic Publishers.
Trueswell, J., M. Tanenhaus, & S. Garnsey (1994). Semantic Influences on Parsing: Use of Thematic Role Information in Syntactic Ambiguity Resolution. Journal of Memory and Language, 33, In Press.
Weischedel, R., R. Schwartz, J. Palmucci, M. Meteer, and L. Ramshaw (1993). Coping with Ambiguity and Unknown Words through Probabilistic Models. Computational Linguistics, 19(2), 359-382.
2 In fact, the utility of by is neutralized in the case of POS tagging, since prepositions are uniformly tagged (e.g., using the tag IN in the Penn Treebank; see Marcus et al., 1993).
3 These studies attempt to establish thematic patterns by asking large numbers of subjects to answer questions like "How typical is it for a van to be recognized by someone?" with a rating between 1 and 7.
AN INTEGRATED HEURISTIC SCHEME FOR PARTIAL PARSE EVALUATION Alon Lavie School of Computer Science Carnegie Mellon University 5000 Forbes Ave., Pittsburgh, PA 15213 email : [email protected] Abstract GLR* is a recently developed robust version of the Generalized LR Parser [Tomita, 1986], that can parse almost any input sentence by ignoring unrecognizable parts of the sentence. On a given input sentence, the parser returns a collection of parses that correspond to maximal, or close to maximal, parsable subsets of the original input. This paper describes recent work on de- veloping an integrated heuristic scheme for selecting the parse that is deemed "best" from such a collection. We describe the heuristic measures used and their combi- nation scheme. Preliminary results from experiments conducted on parsing speech recognized spontaneous speech are also reported. The GLR* Parser The GLR Parsing Algorithm The Generalized LR Parser, developed by Tomita [Tomita, 1986], extended the original Lit parsing al- gorithm to the case of non-LR languages, where the parsing tables contain entries with multiple parsing ac- tions. Tomita's algorithm uses a Graph Structured Stack (GSS) in order to efficiently pursue in parallel the different parsing options that arise as a result of the multiple entries in the parsing tables. A second data structure uses pointers to keep track of all possible parse trees throughout the parsing of the input, while sharing common subtrees of these different parses. A process of local ambiguity packing allows the parser to pack sub- parses that are rooted in the same non-terminal into a single structure that represents them all. The GLR parser is the syntactic engine of the Univer- sal Parser Architecture developed at CMU [Tomita et al., 1988]. The architecture supports grammatical spec- ification in an LFG framework; that consists of context- free grammar rules augmented with feature bundles that are associated with the non-terminals of the rules. Feature structure computation is, for the most part, specified and implemented via unification operations. This allows the grammar to constrain the applicability of context-free rules. The result of parsing an input sen- tence consists of both a parse tree and the computed feature structure associated with the non-terminal at the root of the tree. The GLR* Parser GLR* is a recently developed robust version of the Gen- eralized LR Parser, that allows the skipping of unrecog- nizable parts of the input sentence [Lavie and Tomita, 1993]. It is designed to enhance the parsability of do- mains such as spontaneous speech, where the input is likely to contain deviations from the grammar, due to either extra-grammaticalities or limited grammar cov- erage. In cases where the complete input sentence is not covered by the grammar, the parser attempts to find a maximal subset of the input that is parsable. In many cases, such a parse can serve as a good approximation to the true parse of the sentence. The parser accommodates the skipping of words of the input string by allowing shift operations to be per- formed from inactive state nodes in the Graph Struc- tured Stack (GSS). Shifting an input symbol from an inactive state is equivalent to skipping the words of the input that were encountered after the parser reached the inactive state and prior to the current word that is being shifted. Since the parser is LR(0), previous reduce operations remain valid even when words fur- ther along in the input are skipped. 
Information about skipped words is maintained in the symbol nodes that represent parse sub-trees. To guarantee runtime feasibility, the GLR* parser is coupled with a "beam" search heuristic, that dynami- cally restricts the skipping capability of the parser, so as to focus on parses of maximal and close to maximal sub- strings of the input. The efficiency of the parser is also increased by an enhanced process of local ambiguity packing and pruning. Locally ambiguous symbol nodes are compared in terms of the words skipped within them. In cases where one phrase has more skipped words than the other, the phrase with more skipped words is discarded in favor of the more complete parsed phrase. This operation significantly reduces the number of parses being pursued by the parser. 316 The Parse Evaluation Heuristics At the end of the process of parsing a sentence, the GLR* parser returns with a set of possible parses, each corresponding to some grammatical subset of words of the input sentence. Due to the beam search heuristic and the ambiguity packing scheme, this set of parses is limited to maximal or close to maximal grammatical subsets. The principle goal is then to find the maximal parsable subset of the input string (and its parse). How- ever, in many cases there are several distinct maximal parses, each consisting of a different subset of words of the original sentence. Furthermore, our experience has shown that in many cases, ignoring an additional one or two input words may result in a parse that is syn- tactically and/or semantically more coherent. We have thus developed an evaluation heuristic that combines several different measures, in order to select the parse that is deemed overall "best". Our heuristic uses a set of features by which each of the parse candidates can be evaluated and compared. We use features of both the candidate parse and the ignored parts of the original input sentence. The fea- tures are designed to be general and, for the most part, grammar and domain independent. For each parse, the heuristic computes a penalty score for each of the fea- tures. The penalties of the different features are then combined into a single score using a linear combination. The weights used in this scheme are adjustable, and can be optimized for a particular domain and/or grammar. The parser then selects the parse ranked best (i.e. the parse of lowest overall score). 1 The Parse Evaluation Features So far, we have experimented with the following set of evaluation features: 1. The number and position of skipped words 2. The number of substituted words 3. The fragmentation of the parse analysis 4. The statistical score of the disambiguated parse tree The penalty scheme for skipped words is designed to prefer parses that correspond to fewer skipped words. It assigns a penalty in the range of (0.95 - 1.05) for each word of the original sentence that was skipped. The scheme is such that words that are skipped later in the sentence receive the slightly higher penalty. This preference was designed to handle the phenomena of false starts, which is common in spontaneous speech. The GLR* parser has a capability for handling com- mon word substitutions when the parser's input string is the output of a speech recognition system. When the input contains a pre-determined commonly substi- tuted word, the parser attempts to continue with both 1The system can display the n best parses found, where the parameter n is controlled by the user at runtime. 
By default, we set n to one, and the parse with the lowest score is displayed. the original input word and a specified "correct" word. The number of substituted words is used as an evaluation feature, so as to prefer an analysis with fewer substituted words. The grammars we have been working with allow a single input sentence to be analyzed as several grammatical "sentences" or fragments. Our experiments have indicated that, in most cases, a less fragmented analysis is more desirable. We therefore use the sum of the number of fragments in the analysis as an additional feature. We have recently augmented the parser with a statistical disambiguation module. We use a framework similar to the one proposed by Briscoe and Carroll [Briscoe and Carroll, 1993], in which the shift and reduce actions of the LR parsing tables are directly augmented with probabilities. Training of the probabilities is performed on a set of disambiguated parses. The probabilities of the parse actions induce statistical scores on alternative parse trees, which are used for disambiguation. However, additionally, we use the statistical score of the disambiguated parse as an additional evaluation feature across parses. The statistical score value is first converted into a confidence measure, such that more "common" parse trees receive a lower penalty score. This is done using the following formula: penalty = 0.1 * (-log10(pscore)). The penalty scores of the features are then combined by a linear combination. The weights assigned to the features determine the way they interact. In our experiments so far, we have fine-tuned these weights manually, so as to try and optimize the results on a training set of data. However, we plan on investigating the possibility of using some known optimization techniques for this task.
The Parse Quality Heuristic
The utility of a parser such as GLR* obviously depends on the semantic coherency of the parse results that it returns. Since the parser is designed to succeed in parsing almost any input, parsing success by itself can no longer provide a likely guarantee of such coherency. Although we believe this task would ultimately be better handled by a domain-dependent semantic analyzer that would follow the parser, we have attempted to partially handle this problem using a simple filtering scheme. The filtering scheme's task is to classify the parse chosen as best by the parser into one of two categories: "good" or "bad". Our heuristic takes into account both the actual value of the parse's combined penalty score and a measure relative to the length of the input sentence. Similar to the penalty score scheme, the precise thresholds are currently fine-tuned to try and optimize the classification results on a training set of data.

            Unparsable        Parsable          Good/Close Parses   Bad Parses
            number  percent   number  percent   number  percent     number  percent
GLR         58      48.3%     62      51.7%     60      50.0%       2       1.7%
GLR* (1)    5       4.2%      115     95.8%     84      70.0%       31      25.8%
GLR* (2)    5       4.2%      115     95.8%     90      75.0%       25      20.8%

Table 1: Performance Results of the GLR* Parser ((1) = simple heuristic, (2) = full heuristics)

Parsing of Spontaneous Speech Using GLR*
We have recently conducted some new experiments to test the utility of the GLR* parser and our parse evaluation heuristics when parsing speech-recognized spontaneous speech in the ATIS domain. We modified an existing partial-coverage syntactic grammar into a grammar for the ATIS domain, using a development set of some 300 sentences.
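For concreteness, the following is a small sketch of how the four penalties described above could be combined into a single score. The per-feature weights are the ones quoted in the text, but the candidate-parse representation, the exact position-dependent interpolation for skipped words, and the function names are assumptions made for illustration.

import math

# Hedged sketch of the GLR* parse-evaluation score described above.
# A candidate parse is represented as a dict with the skipped word positions,
# the substitution and fragment counts, and the statistical score of the
# disambiguated parse tree (all invented field names).

def skip_penalty(skipped_positions, sentence_length):
    # Each skipped word costs between 0.95 and 1.05, with later words costing
    # slightly more (the linear interpolation is an assumption).
    if sentence_length <= 1:
        return float(len(skipped_positions))
    return sum(0.95 + 0.10 * pos / (sentence_length - 1)
               for pos in skipped_positions)

def parse_score(cand, sentence_length):
    penalty = skip_penalty(cand["skipped"], sentence_length)
    penalty += 0.9 * cand["substitutions"]        # prefer fewer substituted words
    penalty += 1.1 * cand["fragments"]            # prefer less fragmented analyses
    penalty += 0.1 * -math.log10(cand["pscore"])  # statistical confidence penalty
    return penalty

def best_parse(candidates, sentence_length):
    # The parse with the lowest combined score is selected.
    return min(candidates, key=lambda c: parse_score(c, sentence_length))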
The resulting grammar has 458 rules, which translate into a parsing table of almost 700 states. A list of common appearing substitutions was con- structed from the development set. The correct parses of 250 grammatical sentences were used to train the parse table statistics that are used for disambiguation and parse evaluation. After some experimentation, the evaluation feature weights were set in the following way. As previously described, the penalty for a skipped word ranges between 0.95 and 1.05, depending on the word's position in the sentence. The penalty for a substituted word was set to 0.9, so that substituting a word would be preferable to skipping the word. The fragmentation feature was given a weight of 1.1, to prefer skipping a word if it reduces the fragmentation count by at least one. The three penalties are then summed, together with the converted statistical score of the parse. We then used a set of 120 new sentences as a test set. Our goal was three-fold. First, we wanted to compare the parsing capability of the GLR* parser with that of the original GLR parser. Second, we wished to test the effectiveness of our evaluation heuristics in select- ing the best parse. Third, we wanted to evaluate the ability of the parse quality heuristic to correctly classify GLR* parses as "good" or "bad". We ran the parser three times on the test set. The first run was with skipping disabled. This is equivalent to running the original GLR parser. The second run was conducted with skipping enabled and full heuristics. The third run was conducted with skipping enabled, and with a simple heuristic that prefers parses based only on the number of words skipped. In all three runs, the sin- gle selected parse result for each sentence was manually evaluated to determine if the parser returned with a "correct" parse. The results of the experiment can be seen in Table 1. The results indicate that using the GLR* parser results in a significant improvement in performance. When using the full heuristics, the percentage of sentences, for which the parser returned a parse that matched or almost matched the "correct" parse increased from 50% to 75%. As a result of its skipping capabilities, GLR* succeeds to parse 58 sentences (48%) that were not parsable by the original GLR parser. Fully 96% of the test sentences (all but 5) are parsable by GLR*. However, a significant portion of these sentences (23 out of the 58) return with bad parses, due to the skipping of essential words of the input. We looked at the effec- tiveness of our parse quality heuristic in identifying such bad parses. The heuristic is successful in labeling 21 of the 25 bad parses as "bad". 67 of the 90 good/close parses are labeled as "good" by the heuristic. Thus, although somewhat overly harsh, the heuristic is quite effective in identifying bad parses. Our results indicate that our full integrated heuris- tic scheme for selecting the best parse out-performs the simple heuristic, that considers only the number of words skipped. With the simple heuristic, good/close parses were returned in 24 out of the 53 sentences that involved some degree of skipping. With our integrated heuristic scheme, good/close parses were returned in 30 sentences (6 additional sentences). Further analy- sis showed that only 2 sentences had parses that were better than those selected by our integrated parse eval- uation heuristic. References [Briscoe and Carroll, 1993] T. Briscoe and J. Carroll. 
Generalized Probabilistic LR Parsing of Natural Language (Corpora) with Unification-Based Grammars. Computational Linguistics, 19(1):25-59, 1993.
[Lavie and Tomita, 1993] A. Lavie and M. Tomita. GLR* - An Efficient Noise-skipping Parsing Algorithm for Context-free Grammars. In Proceedings of the Third International Workshop on Parsing Technologies, pages 123-134, 1993.
[Tomita et al., 1988] M. Tomita, T. Mitamura, H. Musha, and M. Kee. The Generalized LR Parser/Compiler, Version 8.1: User's Guide. Technical Report CMU-CMT-88-MEMO, 1988.
[Tomita, 1986] M. Tomita. Efficient Parsing for Natural Language. Kluwer Academic Publishers, Hingham, MA, 1986.
TEMPORAL RELATIONS: REFERENCE OR DISCOURSE COHERENCE? Andrew Kehler Harvard University Aiken Computation Laboratory 33 Oxford Street Cambridge, MA 02138 [email protected] Abstract The temporal relations that hold between events de- scribed by successive utterances are often left implicit or underspecified. We address the role of two phenom- ena with respect to the recovery of these relations: (1) the referential properties of tense, and (2) the role of temporal constraints imposed by coherence relations. We account for several facets of the identification of temporal relations through an integration of these. Introduction Tense interpretation has received much attention in lin- guistics (Partee, 1984; Hinrichs, 1986; Nerbonne, 1986, inter alia) and natural language processing (Webber, 1988; Kameyama et al., 1993; Lascarides and Asher, 1993, inter alia). Several researchers (Partee, 1984; Hinrichs, 1986; Nerbonne, 1986; Webber, 1988) have sought to explain the temporal relations induced by tense by treating it as anaphoric, drawing on Reichen- bach's separation between event, speech, and reference times (Reichenbach, 1947). Specifically, to account for the forward progression of time induced by successive simple past tenses in a narrative, they treat the simple past as referring to a time evoked by a previous past tense. For instance, in Hinrichs's (1986) proposal, ac- complishments and achievements x introduce a new ref- erence point that is temporally ordered after the time of the event itself, "ensuring that two consecutive ac- complishments or achievements in a discourse are al- ways ordered in a temporal sequence." On the other hand, Lascarides and Asher (1993) take the view that temporal relations are resolved purely as a by-product of reasoning about coherence relations holding between utterances, and in doing so, argue that treating sim- ple and complex tenses as anaphoric is unnecessary. This approach parallels the treatment of pronoun res- olution espoused by Hobbs (1979), in which pronouns are modeled as free variables that are bound as a by- product of coherence resolution. The Temporal Cen- tering framework (Kameyama et al., 1993) integrates lWe will limit the scope of this paper by restricting the discussion to accomplishments and achievements. aspects of both approaches, but patterns with the first in treating tense as anaphoric. We argue that aspects of both analyses are necessary to account for the recovery of temporal relations. To demonstrate our approach we will address the following examples; passages (la-b) are taken from Lascarides and Asher (1993): (1) a. Max slipped. He spilt a bucket of water. b. Max slipped. He had spilt a bucket of water. c. Max slipped because he spilt a bucket of water. d. Max slipped because he had spilt a bucket of water. Passage (la) is understood as a narrative, indicating that the spilling was subsequent to the slipping. Pas- sages (lb-d) are instead understood as the second clause explaining the first, indicating that the reverse temporal ordering holds. We address two related questions; the first arises from treating the simple past as anaphoric. Specifically, if a treatment such as Hinrichs's is used to explain the forward progression of time in example (la), then it must be explained why sentence (lc) is as felicitous as sentence (ld). That is, one would predict a clash of temporal relations for sentence (lc), since the simple pasts induce the forward progression of time but the conjunction indicates the reverse temporal ordering. 
The second question arises from assuming that all tem- poral relations are recovered solely from reasoning with coherence relations. Specifically, because the use of the simple past in passage (lc) is as felicitous as the past perfect in passage (ld) under the explanation interpre- tation (in these cases indicated explicitly by because), then it must be explained why passage (la) is not un- derstood as an explanation as is passage (lb), where in each case the relationship needs to be inferred. We present our analysis in the next section, and account for these facts in Section 3. The Account We postulate rules characterizing the referential nature of tense and the role of discourse relations in further constraining the temporal relations between clauses. The rules governing tense are: 319 1. Main verb tenses are indefinitely referential, cre- ating a new temporal entity under constraints imposed by its type (i.e., past, present, or fu- ture) in relation to a discourse reference time 2 tR. For instance, a main verb past tense introduces a new temporal entity t under the constraint prior- to(t, tR). For simple tenses tR is the speech time, and therefore simple tenses are not anaphoric. 2. Tensed auxiliaries in complex tenses are anaphor- ic, identifying tR as a previously existing tempo- ral entity. The indefinite main verb tense is then ordered with respect to this tR. The tenses used may not completely specify the implicit temporal relations between the described events. We claim that these relations may be further refined by constraints imposed by the coherence relation operative between clauses. We describe three coherence relations relevant to the examples in this paper and give temporal constraints for them. 3 Narration: The Narration relation is characterized by a series of events displaying forward movement of time, such as in passage (la). As did Lascarides and Asher (1993), we capture this ordering as a constraint imposed by the Narration coherence re- lation itself.- 4 (2) If Narration(A, B) then ta < tB Parallel: The Parallel relation relates utterances that share a common topic. This relation does not impose constraints on the temporal relations be- tween the events beyond those provided by the tenses themselves. For instance, if passage (la) was uttered in response to the question What bad things happened to Maz today? (inducing a Paral- lel relation instead of Narration), a temporal or- dering among the sentences is no longer implied. Explanation: The Explanation relation denotes a cause-effect relationship with reversed clause or- dering, as in sentences (lb-d). Therefore, the sec- ond event is constrained to preceding the first: (3) If Ezplanation(A,B) then tB < tA To summarize the analysis, we claim that tense oper- ates as indefinite reference with respect to a possibly anaphorically-resolved discourse reference time. The temporal relations specified may be further refined as 2This term is borrowed from Kameyama et al. (1993). 3We assume here that the two clauses in question are related directly by a coherence relation. This may not be the case; for instance the use of a past perfect may signal the start of an embedded discourse segment, as in Web- ber's flower shop example (Webber, 1988; Kameyama et al., 1993). How this account is to be extended to address coher- ence at the discourse segment level is the subject of future work. 4The Cause-Effect relation also has this ordering constraint. 
a by-product of establishing the coherence relationship extant between clauses, Narration being but one such relation. We now repeated (4) a. b. c. d. Examples analyze the examples presented in Section 1, below, using this approach: Max slipped. He spilt a bucket of water. Max slipped. He had spilt a bucket of water. Max slipped because he spilt a bucket of water. Max slipped because he had spilt a bucket of water. The implicit ordering on the times indefinitely evoked by the simple pasts in passage (4a) results solely from understanding it as a Narration. In passage (4b), the auxiliary had refers to the event time of the slipping, and thus the past tense on spill creates a temporal en- tity constrained to precede that time. This necessitates a coherence relation that is consistent with this tem- poral order, in this case, Explanation. In passage (4c), the times evoked by the simple pasts are further or- dered by the Explanation relation indicated by because, resulting in the backward progression of time. In pas- sage (4d), both the tense and the coherence relation order the times in backward progression. Restating the first problem noted in Section 1, if treating the simple past as anaphoric is used to account for the forward progression of time in passage (4a), then one would expect the existence of the Explanation re- lation in passage (4c) to cause a temporal clash, where in fact passage (4c) is perfectly felicitous. No clash of temporal relations is predicted by our account, because the use of the simple pasts do not in themselves imply a specific ordering between them. The Narration rela- tion orders the times in forward progression in passage (4a) and the Explanation relation orders them in back- ward progression in passage (4c). The Parallel relation would specify no ordering (see the potential context for passage (4a) given in Section 2). Restating the second problem noted in Section 1, if temporal relations can be recovered solely from reason- ing with coherence relations, and the use of the simple past in passage (4c) is as felicitous as the past perfect in passage (4d) under the Explanation interpretation, then one asks why passage (4a) is not understood as an Explanation as is passage (4b), where in each case the relationship needs to be inferred. We hypothesize that hearers assume that speakers are engaging in Narration in absence of a specific cue to the contrary. The use of the past perfect (as in passage (4b)) is one such cue since it implies reversed temporal ordering; the use of an explicit conjunction indicating a coherence relation other than Narration (as in passages (4c-d)) is another such cue. While passage (4a) could be understood as an Explanation on semantic grounds, the hearer assumes Narration since no other relation is cued. 320 We see several advantages of this approach over that of Lascarides and Asher (1993, henceforth L&A). First, L&A note the incoherence of example (5) (5) ? Max poured a cup of coffee. He had entered the room. in arguing that the past perfect should not be treated as anaphoric: (6) Theories that analyse the distinction between the simple past and pluperfect purely in terms of dif- ferent relations between reference times and event times, rather than in terms of event-connections, fail to explain why [(4b)] is acceptable but [(5)] is awkward. (Lascarides and Asher, 1993, pg. 
470) Example (5) indeed shows that coherence relations need to be utilized to account for temporal relations, but it does not bear on the issue of whether the past per- fect is anaphoric. The incoherence of example (5) is predicted by both their and our accounts by virtue of the fact that there is no coherence relation that corre- sponds to Narration with reverse temporal ordering. ~ In addressing this example, L&A specify a special rule (the Connections When Changing Tense (CCT) Law) that stipulates that a sentence containing the simple past followed by a sentence containing the past perfect can be related only by a subset of the otherwise possi- ble coherence relations. However, this subset contains just those relations that are predicted to be possible by accounts treating the past perfect as anaphoric; they are the ones that do not constrain the temporal order of the events against displaying backward progression of time. Therefore, we see no advantages to adopting their rule; furthermore, they do not comment on what other laws have to be stipulated to account for the facts concerning other possible tense combinations. Second, to explain why the Explanation relation can be inferred for passage (4b) but not for passage (4a), L&A stipulate that their causal Slipping Law (stating that spilling can cause slipping) requires that the CCT Law be satisfied. This constraint is imposed only to require that the second clause contain the past per- fect instead of the simple past. However, this does not explain why the use of the simple past is perfectly co- herent when the Explanation relationship is indicated overtly as it is in sentence (4c), nor do they adequately explain why CCT must be satisfied for this causal law and not for those supporting similar examples for which they successfully infer an unsignaled Explanation rela- tion (see discussion of example (2), pg. 463). Third, the L&A account does not explain why the past perfect cannot stand alone nor discourses gener- ally be opened with it; consider stating sentence (7) in isolation: (7) Max had spilt a bucket of water. 5For instance, in the same way that Explanation corre- sponds to Cause-Effect with reverse temporal ordering. Intuitively, such usage is infelicitous because of a depen- dency on a contextually salient time which has not been previously introduced. This is not captured by the L&A account because sentences containing the past perfect are treated as sententially equivalent to those contain- ing the simple past. On the other hand, sentences in the simple past are perfectly felicitous in standing alone or opening a discourse, introducing an asymmetry in ac- counts treating the simple past as anaphoric to a pre- viously evoked time. All Of these facts are explained by the account given here. Conclusion We have given an account of temporal relations whereby (1) tense is resolved indefinitely with respect to a possi- bly anaphorieally-resolved discourse reference time, and (2) the resultant temporal relations may be further re- fined by constraints that coherence relations impose. This work is being expanded to address issues pertain- ing to discourse structure and inter-segment coherence. Acknowledgments This work was supported in part by National Science Foundation Grant IRI-9009018, National Science Foun- dation Grant IRI-9350192, and a grant from the Xerox Corporation. I would like to thank Stuart Shieber and Barbara Grosz for valuable discussions and comments on earlier drafts. References (Hinrichs, 1986) Erhard Hinrichs. 
Temporal anaphora in discourses of english. Linguistics and Philosophy, 9:63-82, 1986. (Hobbs, 1979) Jerry Hobbs. Coherence and corefer- ence. Cognitive Science, 3:67-90, 1979. (Kameyama et al., 1993) Megumi Kameyama, Rebec- ca Passoneau, and Massimo Poesio. Temporal center- ing. In Proceedings of the 31st Conference of the As- sociation for Computational Linguistics (ACL-93), pages 70-77, Columbus, Ohio, June 1993. (Lascarides and Asher, 1993) Alex Lascarides and Nicolas Asher. Temporal interpretation, discourse relations, and common sense entailment. Linguistics and Philosophy, 16(5):437-493, 1993. (Nerbonne, 1986) John Nerbonne. Reference time and time in narration. Linguistics and Philosophy, 9:83- 95, 1986. (Partee, 1984) Barbara Partee. Nominal and tempo- ral anaphora. Linguistics and Philosophy, 7:243-286, 1984. (Reichenbach, 1947) Hans Reichenbach. Elements of Symbolic Logic. Macmillan, New York, 1947. (Webber, 1988)Bonnie Lynn Webber. Tense as discourse anaphor. Computational Linguistics, 14(2):61-73, 1988. 321 | 1994 | 46 |
SIMULATING CHILDREN'S NULL SUBJECTS: AN EARLY LANGUAGE GENERATION MODEL Carole T. Boster Department of Linguistics, Box U-145 University of Connecticut Storrs, CT 06269-1145, USA [email protected] Abstract This paper reports work in progress on a sentence generation model which attempts to emulate certain language output patterns of children between the ages of one and one-half and three years. In particular, the model addresses the issue of why missing or phonetically "null" subjects appear as often as they do in the speech of young English- speaking children. It will also be used to examine why other patterns of output appear in the speech of children learning languages such as Italian and Chinese. Initial findings are that an output generator successfully approximates the null-subject output patterns found in English-speaking children by using a 'processing overload' metric alone; however, reference to several parameters related to discourse orientation and agreement morphology is necessary in order to account for the differing patterns of null arguments appearing cross- linguistically. Based on these findings, it is argued that the 'null-subject phenomenon" is due to the combined effects of limited processing capacity and early, accurate parameter setting. 1 ~ PROBLEM It is well known among researchers in language acquisition that young children just beginning to speak English frequently omit subjects, in linguistic contexts where subjects are considered mandatory in the adult language. Other major structural components such as verbs and direct objects are also omitted occasionally; however, the frequency at which children omit mandatory object NPs tends to be much lower than the rate at which they omit subjects. For example, P. Bloom's (1990) analysis of early speech transcripts of Adam, Eve and Sarah (Brown, 1973) from the CHILDES database (MacWhinney and Snow, 1985), indicates that these children omitted subjects from obligatory contexts 55% of the time on average, whereas obligatory objects were dropped at rates averaging only 9%. But by around age 2 1/2, or when the mean length of utterance (MLU) exceeds approximately 2.0 morphemes, the percentage of null subjects drops off to a level about equal to the level of null objects. The reason for the so-called null-subject phenomenon in early child English has been widely debated in the literature. Different theories, though they vary greatly in detail, generally fall into two broad categories: processing accounts and parameter-setting accounts. The general claim of those who favor a processing account is that the phenomenon (in English) is caused by severe limitations in the child's sentence-processing or memory capacity. It is known that young children's utterances are much shorter on average than adults', that their sentence length increases steadily with age, and that other components of a sentence are also routinely omitted, which could be evidence of processing limitations. Yet some who argue for a strictly grammatical explanation (including Hyams (1986), Hyams and Wex]er (1993)) claim that the differential patterns of null subjects over null objects cannot be accounted for by any existing processing account, and instead take this as evidence that the 'unmarked' setting for the relevant parameter(s) related to null subjects is (+pro-drop); various accounts are offered for how children learning languages that do not permit null subjects ultimately make the switch to the correct parameter Value. 
Others, including Valian (1991) and Rizzi (1994) have noted differences in the frequency of early null subjects depending on their position in a sentence; they tend to be omitted in matrix but not embedded clauses, and in sentence-initial position but not after a moved wh-element. This observation has been used to argue for a different grammatical explanation of the null-subject stage. Both Lillo-Martin (1991) and Rizzi (1994), for example, argue that the initial value of the parameters is set to (- pro-drop); Lillo-Martin claims that the matrix subject is outside the domain where the pro-drop parameters are applied initially, while Rizzi claims that the matrix CP is considered optional at an early stage in acquisition. Further evidence which may support either this approach or a 'combined' processing and parameters account includes the higher percentages and different patterns of pro- drop and topic-drop found in the speech of children learning Italian, a pro-drop language (Valian, 1991) and Chinese, which allows 'topic-drop' (Wang et. al., 322 1992), as compared to English-speaking children of the same age and MLU. Processing constraints should remain the same for children around the globe, so it is not clear that processing alone can account for the different distributions of nulls exhibited by 2-year olds learning English, Italian, and Chinese. However, the crosslinguistic differences also argue against the claim that all children start out with the relevant parameter(s) initially set to (+pro-drop). 2 THE MODEL FELICITY, a sentence generation model that emulates early child language output, has been designed in order to determine whether the 'null- subject' phenomenon in early child language can best be accounted for by an incorrect initial setting of certain parameters, by processing limitations, or by an interaction between parameter setting and processing. FELICITY assumes a modular approach, following Garrett (1975), in which the intended message goes through three processing modules to yield three levels of output: semantic, then syntactic, then phonetic. The model incorporates several standard assumptions of Principles-and-Parameters theory including X' structure-building capacity (Chomsky, 1981), head- complement ordering parameters, and several parameters currently thought to be relevant to the null-subject phenomenon. Following the Continuity Hypothesis (Pinker, 1984), the model has the potential capacity for producing a full clausal structure from the beginning; the structure-building mechanism is presumed to be innate. It is also assumed, following the VP-internal Subject Hypothesis (Koopman and Spertiche (1988) and others) that the subject is initially generated within the VP. An algorithm controlling processing capacity, similar in principle to that proposed by Gibson (1991) to account for processing overload effects in adult sentence processing, will limit structure-building and dictate maximum "holding' capacity before a sentence is output. The lexicon will initially include all words used productively in transcripts of an English-speaking child at age 1;7; lexical entries will include information about category, pronunciation, obligatory and optional complements, and selectional restrictions on those complements. All parameters will be binary. They can be assigned either value initially and can be reset; reference to any given parameter can also be switched on or off. The processing capacity of the model can also be adjusted, and the lexicon can be updated. 
The model will be able to produce a sentence with a specific meaning or intent (as children presumably do), if it is given certain data about the intended proposition; this data will comprise a semantic representation containing a verb, its theta- grid (i.e. agent, experiencer, goal and/or theme), information about time frame or tense, person and number, mood, negation, and whether or not arguments have been identified previously in the discourse. When making direct comparisons of the model's performance with children's actual utterances, the data that is input to the model will be coded on the basis of inferences about what the child 'intended' to say based not only on actual transcribed output but also from the situation, prior discourse, and possibly caregiver's report (cf. L. Bloom (1970) on 'rich interpretation' of children's utterances). Syntactic processing proceeds as follows: Begin structure-building at the level of the matrix CP, but via a recursive phrase-building process. Phrase- building begins by merging a complement phrase with its X ° head (after the complement phrase has been built) to form an intermediate or X' level of structure. This unit is then combined with its specifier to form a 'maximal' phrase or XP. Lexical items are inserted as soon as the appropriate X ° heads (or XPs, for pro-forms) become available. Each time a structural unit is built, and each time a lexical entry is inserted, the processing load is incremented; when the maximum load is exceeded, the model abandons processing and outputs the words currently in the buffer. $ INITIAL APPLICATION FELICITY's output will be compared to actual output from a longitudinal sample of several English-speaking children's early utterances, using transcripts available on the CHILDES database. The initial lexicon will be constructed based on the productive vocabulary of a given child from her first transcript. The 'processing limit' will be set at a given maximum, such that the model's MLU approximates that of the child in the transcript; the algorithm will be fine-tuned to determine how much relative weight or processing 'cost' should be assigned to (a) lexical lookup to get subcategorization information for the verb; (b) building of a structural unit; and (c) retrieval of phonological information. The sentence-generation procedures will be run under two conditions, once with parameter-checking enabled and then with parameter-checking disabled. Additional runs will try to emulate the child's output patterns during subsequent transcripts, after augmenting the model's lexicon with new words found in the child's vocabulary and adjusting the processing limit upward so that the output matches the child's new MLU. Statistical comparisons will be made between the model's and the children's performance (at 323 comparable MLU levels) including percentages of null subjects and null objects in the output, percentages of overt nominalsubjects (full NPs) vs. overt pronominal subjects, percentages of other sentence components omitted, and amount of variability in utterance lengths. 4 PRELIMINARY FINDINGS Initial trials indicate that, once the processing- complexity algorithm is tuned appropriately, FELICITY can approximate the null~subject output patterns found in English-speaking children with no reference to parameter values. Indeed, because the model builds complements before specifiers, it produces a much higher incidence of null subjects than null objects using a proceseing-overload metric alone. 
Furthermore, it yields a higher incidence of nulls in matrix sentences than in embedded clauses, and within a clause it only omits subjects in initial position, not after a moved wh-element or topic. However, it appears that the model will also need to reference parameter values if it is to account for the patterns observed in the speech of children learning languages which d_oo allow null arguments; processing constraints alone will not explain the different croselin~mistic distributions of nulls. 5 FUTURE APPLICATIONS Once FELICITY's processing metric is fine-tuned for English, it can be used to emulate argument omission patterns shown in other languages like Italian and Chinese, to test various parametric theories. If the relevant parameters involved are as given in Lillo-Martin (1991), for example, FELICITY should be able to emulate the relatively high level of null-subject usage by Italian-speaking children reported in Valian (1991) by simply switching certain subparameters related to Null Pronoun Licensing (NPL) and Null Pronoun Identification (NTI) to positive for an Italian child at age 2, while keeping processing constraints at the same levels that were established for English-speaking children. The model should also be able to emulate the higher percentages of null subjects and null objects found in the output of Chinese-speaking children in experiments reported in Wang et. al. (1992) by simply switching the Discourse Oriented (DO) parameter to positive, while leaving the NPL and NPI parameters set at the default (negative) values. FELICITY can also be used to address theories pertaining to other aspects of language acquisition that appear slightly later in development, such as the appearance of subject-auxiliary inversion in yes/no and wh-questions, and the emergence of Tense and Agreement features. Future enhancements to the model are planned with these applications in mind. ACKNOWLEDGMENTS This material is based upon work supported under a National Science Foundation Graduate Research Fellowship. Thanks go to my committee members Diane Lillo-Martin, Stephen Crain, Ted Gibson and Howard Lasnik, and to two anonymous reviewers for helpful comments on an earlier draft. REFERENCES Bloom, L. (1970). Language development: Form and function in emerging grammars. Cambridge, Mass.: MIT Press, Bloom, P. (1990). Subjectless sentences in child language. Linguistic Inauiry, ~ 491-504. Brown, R. (1973). Afirst language: The early stages. Cambridge, Mass.: Harvard University Press. Chomsky, N. (1981). Lectures on government and binding. Dordrecht: Foris. Garrett, M. F. (1975). The analysis of sentence production. In G. Bower (Ed.), P .sychology of learning and motivation (Vol. 9). New York: Academic Press. Gibson, E. A. F. (1991). A computational theory of human linguistic processing: Memory limitations and processing breakdown [Doctoral dissertation]. Pittsburgh: Carnegie Mellon University. Hyams, N. M. (1986). Language acquisition and the theory of parameters. Dordrecht: D. Reidel Publishing Company. Hyams, N., & Wex]er, K. (1993). On the grammatical basis of null subjects in child language. Linguistic InQuiry, 24, 421-459. Koopman, H., & Sportiche, D. (1988). Subjects [Ms.]. Los Angeles: UCLA. Lillo-Martin, D. C. (1991). Universal Grammar and American Sign Language: Setting the Null Argument Parameters. Dordrecht: Kluwer Academic Publishers. MacWhinney, B., & Snow, C. (1985). The Child Language Data Exchange System. Journal of Child Language, 12, 271-296. Pinker, S. (1984). 
Language learnability and language development. Cambridge, Mass.: Harvard University Press. Rizzi, L. (1994). Early null subjects and root null subjects. In T. Hoekstra & B. D. Schwartz • (Eds.), Language acquisition studies in generative grammar (pp. 151-176). Amsterdam/Philadelphia: John Benjamins. Valian, V. (1991). Syntactic subjects in the early speech of American and Italian children. Cognition, ~ 21-81. Wang, Q., Lillo-Martin, D., Best, C. T., & Levitt, A. (1992). Null subject versus null object: Some evidence from the acquisition of Chinese and English. Language Acquisition, ~ 221-254. 324 | 1994 | 47 |
DUAL-CODING THEORY AND CONNECTIONIST LEXICAL SELECTION Ye-Yi Wang* Computational Linguistics Program Carnegie Mellon University Pittsburgh, PA 15232 Internet: [email protected] Abstract We introduce the bilingual dual-coding theory as a model for bilingual mental representation. Based on this model, lexical selection neural networks are imple- mented for a connectionist transfer project in machine translation. Introduction Psycholinguistic knowledge would be greatly helpful, as we believe, in constructing an artificial language processing system. As for machine translation, we should take advantage of our understandings of (1) how the languages are represented in human mind; (2) how the representation is mapped from one language to another; (3) how the representation and mapping are acquired by human. The bilingual dual-coding theory (Paivio, 1986) partially answers the above questions. It depicts the verbal representations for two different languages as two separate but connected logogen systems, charac- terizes the translation process as the activation along the connections between the logogen systems, and at- tributes the acquisition of the representation to some unspecified statistical processes. We have explored an information theoretical neu- ral network (Gorin and Levinson, 1989) that can ac- quire the verbal associations in the dual-coding theory. It provides a learnable lexical selection sub-system for a conneetionist transfer project in machine translation. Dual-Coding Theory There is a well-known debate in psycholinguistics concerning the bilingual mental representation: inde- pendence position assumes that bilingual memory is represented by two functionally independent storage and retrieval systems, whereas interdependence po- sition hypothesizes that all information of languages exists in a common memory store. Studies on cross- language transfer and cross-language priming have *This work was partly supported by ARPA and ATR In- terpreting Telephony Research Laboratorie. provided evidence for both hypotheses (de Groot and Nas, 1991; Lambert, 1958). Dual-coding theory explains the coexistence of in- dependent and interdependent phenomena with sepa- rate but connected structures. The general dual-coding theory hypothesizes that human represents language with dual systems -- the verbal system and the im- agery system. The elements of the verbal system are logogens for words in a language. The elements of the imagery system, called "imagens", are connected to the logogens in the verbal systems via referential connections. Logogens in a verbal system are also in- terconnected with associative connections. The bilin- gual dual-coding theory proposes an architecture in which a common imagery system is connected to two verbal systems, and the two verbal systems are inter- connected to each other via associative connections [Figure 1]. Unlike the within-language associations, which are rich and diverse, these between-language associations involve primarily translation equivalent terms that are experienced together frequently. The interconnections among the three systems explain the interdependent functional behavior. On the other hand, the different characteristics of within-language and between-language associations account for the inde- pendent functional behavior. Based on the above structural assumption, dual-" coding theory proposes a parallel set of processing assumptions. 
Activation of connections between referentially related imagens and logogens is called referential processing. Naming objects and imaging to words are prototypical examples. Activation of associative connections between logogens is called associative processing. Lexical translation is an example of associative processing between two languages.

Connectionist Lexical Selection

Lexical Selection

Lexical selection is the task of choosing target language words that accurately reflect the meaning of the corresponding source language words. It plays an important role in machine translation (Pustejovsky and Nirenburg, 1987). 325

[Figure 1: Bilingual Dual-Coding Representation. The L1 and L2 verbal systems (association networks V1 and V2) are connected to each other and, through V1-I and V2-I connections, to a shared imagery system.]

A common lexical selection practice involves an intermediate representation. It disambiguates the source language words to entities in the intermediate representation, then maps from the entities to the target lexical entries. This intermediate representation may be Lexical Concept Structure (Dorr, 1989) or interlingua (Nirenberg, 1987). This engineering approach requires great effort in designing the representation and the mapping rules.

Currently, there are some efforts in statistical lexical selection. A target language word Wt can be selected with the posterior probability Pr(Wt | Ws) given the source language word Ws. Several target language lexical entries may be selected for a single source language word. Then the correct selections can be identified by the language model of the target language (Brown, 1990). This approach is learnable. However, the accuracy is low. One reason is that it does not use any structural information of a language.

In the next subsections, we propose information-theoretical networks based on the bilingual dual-coding theory for lexical selection.

Information-Theoretical Networks

The information-theoretical network is a neural network formalism that is capable of doing associations between two layers of representations. The associations can be obtained statistically according to the network's experiences.

An information-theoretical network has two layers. Each unit of a layer represents an element in the input or output of a training pattern, which might be a logogen or a word. Units in different layers are connected. The weight of the connection between unit i in one layer and unit j in the other layer is assigned the mutual information between the elements represented by the two units

(1) wij = I(vi, vj) = log( Pr(vj | vi) / Pr(vj) ) ¹

Each layer also contains a bias unit, which is always activated. The weight of the connection between the bias unit in one layer and unit j in the other layer is

(2) w0j = log Pr(vj)

Both the information-theoretical network and the back-propagation network compute the posterior probabilities for an association task (Gorin and Levinson, 1989; Robinson, 1992). However, only the information-theoretical network is isomorphic to the directly interconnected verbal systems in the dual-coding theory. Besides, an information-theoretical network has the following advantages: (1) it learns fast. The network can learn in a single pass without gradient descent. (2) it is adaptive. It can incrementally adapt to new experiences simply by adding new data to the training samples and modifying the associations according to the changed statistics. These make the network more psychologically plausible.
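To make the single-pass, count-based character of this formalism concrete, the sketch below shows one way such a pair of associative layers could be estimated from training patterns and used for selection. It is an illustrative reconstruction rather than the authors' implementation: the class and method names, the add-0.5 smoothing, and the treatment of each training pattern as a set of simultaneously active units are assumptions made for the example.

```python
from collections import Counter, defaultdict
from math import log

class InfoTheoreticNet:
    """Two-layer associative network: connection weights hold
    mutual-information estimates, bias connections hold log priors."""

    def __init__(self):
        self.pair_counts = defaultdict(Counter)  # input unit -> counts of co-active output units
        self.out_counts = Counter()              # how often each output unit was active
        self.n_patterns = 0

    def train(self, input_units, output_units):
        # One pass over the data is enough: training just accumulates counts.
        self.n_patterns += 1
        for o in output_units:
            self.out_counts[o] += 1
            for i in input_units:
                self.pair_counts[i][o] += 1

    def _prior(self, o):
        return self.out_counts[o] / self.n_patterns

    def _weight(self, i, o):
        # Approximates I(v_i, v_j) = log( Pr(v_j | v_i) / Pr(v_j) ), crudely smoothed.
        n_i = sum(self.pair_counts[i].values())
        if n_i == 0:
            return 0.0
        p_cond = (self.pair_counts[i][o] + 0.5) / (n_i + 0.5)
        return log(p_cond / self._prior(o))

    def select(self, input_units):
        # Output activation = bias (log prior) + summed MI weights from the
        # active input units; the most active output unit is selected.
        scores = {o: log(self._prior(o)) +
                     sum(self._weight(i, o) for i in input_units)
                  for o in self.out_counts}
        return max(scores, key=scores.get)
```

Repeatedly calling train() on pairs of active source-side units and the corresponding target-side unit, and then calling select() on a new set of source-side units, returns the target unit with the largest summed activation; adapting to new data only requires adding its counts, which is what makes the scheme incremental.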
Lexical Selection as an Associative Process We tried to map source language f-structures to target language f-structure in a connectionist transfer project (Wang, 1994). Functionally, there were two sub-tasks: 1. finding the target sub-structures, their phrasal cat- egories and their corresponding source structures; 2. finding the head of a target structure. The second sub- task is a problem of lexical selection. It was first im- plemented with a back-propagation network. We replaced the back-propagation networks for lexical selection with information-theoretical networks simulating the associative process in the dual-coding theory. The networks have two layers of units. Each source (target) language lexical item is represented by a unit in the input (output) layer. One network is con- structed for each phrasal category (NP, VP, AP, etc.). The networks works in the following way: for a target-language f-structure to be generated, the transfer system knows its phrasal category and its correspond- ing source-language f-structure from the networks that perform the sub-task 1. It then activates the lexical se- lection network for that phrasal category with the input units that correspond to the heads of the source lan- guage f-structure and its sub-structures. Through the connections between the two layers, the output units are activated, and the lexical item that corresponds to the most active output unit is selected as the head of the target f-structure. The following example illus- trates how the system selects the head anmelden for 1Where vi means the event that unit i is activated. 326 the German XCOMP sub-structure when it does the transfer from [sentence [subj i] would [xcomp [subj ]] like [xeomp [subj I] register [pp-adjfor the conference]]]] to [sentence [subj Ich] werde [xcomp [subj Ich] [adj gerne] anmelden [pp-aajfuer der Konferenz]]] 2. Since the structure networks find that there is a VP sub-structure of XCOMP in the target structure whose corresponding input structure is [xcomp [subj to register [pp-adjfor the conference]]], it activates the VP lexical selection network's input units for I, register and conference. By propagating the activation via the associative connections, the unit for anmelden is the most active output. Therefore, anmelden is chosen as the head of the xcomp sub-structure. Preliminary Result The domain of our work was the Conference Registra- tion Telephony Conversations. The lexicon for the task contained about 500 English and 500 German words. There were 300 English/German f-structurepairs avail- able from other research tasks (Osterholtz, 1992). A separate set of 154 sentential f-structures was used to test the generalization performance of the system. The testing data was collected for an independent task (Jain, 1991). From the 300 sentential f-structure pairs, every German VP sub-structure is extracted and labeled with its English counterpart. The English counterpart's head and its immediate sub-structures' heads serve as the input in a sample of VP association, and the German f-structure's head become the output of the association. For the above example, the association (]input I, regis- ter, conference] [output anmelden]) is a sample drawn from the f-structures for the VP network. The training samples for all the other networks are created in the same way. The accuracy of our system with information- theoretical network lexical selection is lower than the one with back-propagation networks (around 84% ver- sus around 92%) for the training data. 
However, the generalization performance on the unseen inputs is bet- ter (around 70% versus around 62%). The information- theoretical networks do not over-learn as the back- propagation networks. This is partially due to the reduced number of free parameters in the information- theoretical networks. Summary The lexical selection approach discussed here has two advantages. First, it is learnable. Little human effort on knowledge engineering is required. Secondly, it is psycholinguisticaUy well-founded in that the approach 2The f-structures are simplified here for the sake of conciseness. adopts a local activation processing model instead of relies upon symbol passing, as symbolic systems usu- ally do. References P. F. Brown and et al. A statistical approach to machine translation. ComputationalLinguistics, 16(2):73- 85, 1990. A. M. de Groot and G. L. Nas. Lexical representation of cognates and noncognates in compound bilin- gums. Journal of Memory and Language, 30(1), 1991. B. J. Dorr. Conceptual basis of the lexicon in ma- chine translation. Technical Report A.I. Memo No. 1166, Artificial Intelligence Laboratory, MIT, August, 1989. A. L. Gorin and S. E. Levinson. Adaptive acquisition of language. Technical report, Speech Research De- partment, AT&T Bell Laboratories, Murray Hill, 1989. A. N. Jain. Parsec: A connectionist learning archi- tecture for parsing spoken language. Technical Report CMU-CS-91-208, Carnegie Mellon Uni- versity, 1991. W. E. Lambert, J. Havelka and C. Crosby. The influ- ence of language acquisition contexts on bilingual- ism. Journal of Abnormal and Social Psychology, 56, 1958. S. Nirenberg, V. Raskin and A. B. Tucker. The struc- ture of interlingua in translator. In S. Niren- burg, editor, Machine Translation: Theoretical andMethodologicallssues. Cambridge University Press, Cambridge, England, 1987. L. Osterholtz and et al. Janus: a multi-lingual speech to speech translation system. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, volume 1, pages 209-212. IEEE, 1992. A. Paivio. Mental Representations ~ A Dual Coding Approach. Oxford University Press, New York, 1986. J. Pustejovsky and S. Nirenburg. Lexical selection in the process of language generation. In Proceed- ings of the 25th Annual Conference of the Associ- ation for Computational Linguistics, pages 201- 206, Standford University, Standford, CA, 1987. A. Robinson. Practical network design and implemen- tation. In Cambridge Neural Network Summer School, 1992. Y. Wang and A. Waibel. Connectionist transfer in ma- chine translation. Inprepare, 1994. 327 | 1994 | 48 |
Integration Of Visual Inter-word Constraints And Linguistic Knowledge In Degraded Text Recognition

Tao Hong
Center of Excellence for Document Analysis and Recognition
Department of Computer Science
State University of New York at Buffalo, Buffalo, NY 14260
taohong@cs.buffalo.edu

Abstract

Degraded text recognition is a difficult task. Given a noisy text image, a word recognizer can be applied to generate several candidates for each word image. High-level knowledge sources can then be used to select a decision from the candidate set for each word image. In this paper, we propose that visual inter-word constraints can be used to facilitate candidate selection. Visual inter-word constraints provide a way to link word images inside the text page, and to interpret them systematically.

Introduction

The objective of visual text recognition is to transform an arbitrary image of text into its symbolic equivalent correctly. Recent technical advances in the area of document recognition have made automatic text recognition a viable alternative to manual key entry. Given a high quality text page, a commercial document recognition system can recognize the words on the page at a high correct rate. However, given a degraded text page, such as a multiple-generation photocopy or facsimile, performance usually drops abruptly ([1]).

Given a degraded text image, word images can be extracted after layout analysis. A word image from a degraded text page may have touching characters, broken characters, distorted or blurred characters, which may make the word image difficult to recognize accurately. After character recognition and correction based on dictionary look-up, a word recognizer will provide one or more word candidates for each word image. Figure 1 lists the word candidate sets for the sentence, "Please fill in the application form." Each word candidate has a confidence score, but the score may not be reliable because of noise in the image. The correct word candidate is usually in the candidate set, but may not be the candidate with the highest confidence score. Instead of simply picking up the word candidate with the highest recognition score, which may make the correct rate quite low, we need to find a method which can select a candidate for each word image so that the correct rate can be as high as possible.

Figure 1: Candidate Sets for the Sentence: "Please fill in the application form !"
1 Please:      Please 0.90, Fleece 0.05, Pierce 0.02, Fierce 0.02, Pieces 0.01
2 fill:        fin 0.33, fill 0.30, flu 0.21, flit 0.10, till 0.06
3 in:          in 0.30, In 0.28, io 0.25, ill 0.13, Io 0.04
4 the:         tire 0.80, toe 0.10, lire 0.05, the 0.03, Ike 0.02
5 application: application 0.90, applicators 0.05, acquisition 0.03, duplication 0.01, implication 0.01
6 form:        farm 0.35, form 0.30, forth 0.20, foam 0.11, force 0.04
7 !:           !

Contextual information and high-level knowledge can be used to select a decision word for each word image in its context. Currently, there are two approaches, the statistical approach and the structural approach, towards the problem of candidate selection. In the statistical approach, language models, such as a Hidden Markov Model and word collocation, can be utilized for candidate selection ([2, 4, 5]). In the structural approach, lattice parsing techniques have been developed for candidate selection ([3, 7]).

The contextual constraints considered in a statistical language model, such as word collocation, are local constraints.
For a word image, a candidate will be selected according to the candidate information from its neighboring word images in a fixed window size. The window size is usually set as one or two. In the lattice parsing method, a grammar is used to select a candidate for each word image inside a sentence so that the sequence of those selected candidates forms a grammatical and meaningful sentence. For example, consider the sentence "Please fill in the application form". We assume all words except the word "form" have been recognized correctly and the candidate set for the word "form" is { farm, form, forth, foam, force } (see the second sentence in Figure 2). The candidate "form" can be selected easily because the collocation between "application" and "form" is strong and the resulting sentence is grammatical.

The contextual information inside a small window or inside a sentence sometimes may not be enough to select a candidate correctly. For example, consider the sentence "This form is almost the same as that one" (see the first sentence in Figure 2). Word image 16 has five candidates: { farm, form, forth, foam, force }. After lattice parsing, the candidate "forth" will be removed because it does not fit the context. But it is difficult to select a candidate from "farm", "form", "foam" and "force" because each of them makes the sentence grammatical and meaningful. In such a case, more contextual constraints are needed to distinguish the remaining candidates and to select the correct one.

328

Figure 2: Word candidates of two example sentences (word images 2 and 16 are similar)
Sentence 1 (word images 1-10): "This [2] is almost the same as that one"; candidates for image 2: farm, form, forth, foam, force
Sentence 2 (word images 11-17): "Please fill in the application [16] !"; candidates for image 16: farm, form, forth, foam, force

[Figure 3: Part of a text page with three sentences (a scan of degraded text in which several inter-word relations are circled).]

Let's further assume that the sentences in Figure 2 are from the same text. By image matching, we know word images 2 and 16 are visually similar. If two word images are almost the same, they must be the same word. Therefore, the same candidate must be selected for word image 2 and word image 16. After "form" is chosen for image 16, it can also be chosen as the decision for image 2.

Table 1: Possible Inter-word Relations between W1 and W2
type | at symbolic level              | at image level
1    | W1 = W2                        | W1 ~ W2
2    | W2 = X • W1 • Y                | W1 ~ subimage_of(W2)
3    | prefix_of(W1) = prefix_of(W2)  | left_part_of(W1) ~ left_part_of(W2)
4    | suffix_of(W1) = suffix_of(W2)  | right_part_of(W1) ~ right_part_of(W2)
5    | suffix_of(W1) = prefix_of(W2)  | right_part_of(W1) ~ left_part_of(W2)
Note 1: "~" means approximately. Note 2: "•" means concatenation.

Visual Inter-Word Relations

A visual inter-word relation can be defined between two word images if they share the same pattern at the image level. There are five types of visual inter-word relations, listed in the right part of Table 1.

Figure 3 is a part of a scanned text image in which a small number of word relations are circled to demonstrate the abundance of inter-word relations defined above even in such a small fragment of a real text page. Word images 2 and 8 are almost the same. Word image 9 matches the left part of word image 1 quite well.
Word image 5 matches a part of image 6, and so on.

Visual inter-word relations can be computed by applying simple image matching techniques. They can be calculated in clean text images, as well as in highly degraded text images, because the word images, due to their relatively large size, are tolerant to noise ([6]).

Visual inter-word relations can be used as constraints in the process of word image interpretation, especially for candidate selection. It is not surprising that word relations at the image level are highly consistent with word relations at the symbolic level (see Table 1). If two words hold a relation at the symbolic level and they are written in the same font and size, their word images should keep the same relation at the image level. And also, if two word images hold a relation at the image level, the true identities of the word images should have the same relation at the symbolic level. In Figure 3, word images 2 and 8 must be recognized as the same word because they can match each other; the identity of word image 5 must be a sub-string of the identity of word image 6 because word image 5 can match with a part of image 6; and so on.

Visual inter-word relations provide us a way to link word images inside a text page, and to interpret them systematically. The research discussed in this paper integrates visual inter-word constraints with a statistical language model and a lattice parser to improve the performance of candidate selection.

329

Current Status of Work

A word-collocation-based relaxation algorithm and a probabilistic lattice chart parser have been designed for word candidate selection in degraded text recognition ([3, 4]). The relaxation algorithm runs iteratively. In each iteration, the confidence score of each candidate is adjusted based on its current confidence and its collocation scores with the currently most preferred candidates for its neighboring word images. Relaxation ends when all candidates reach a stable state. For each word image, those candidates with a low confidence score will be removed from the candidate sets. Then, the probabilistic lattice chart parser will be applied to the reduced candidate sets to select the candidates that appear in the most preferred parse trees built by the parser. There can be different strategies for using visual inter-word constraints inside the relaxation algorithm and the lattice parser. One of the strategies we are exploiting is to re-evaluate the top candidates for the related word images after each iteration of relaxation or after lattice parsing. If they hold the same relation at the symbolic level, the confidence scores of the candidates will be increased. Otherwise, the images with a low confidence score will follow the decision of the images with a high confidence score.

Five articles from the Brown Corpus were chosen randomly as testing samples. They are A06, G02, J42, N01 and R07, each with about 2,000 words. Given a word image, our word recognizer generates its top 10 candidates from a dictionary with 70,000 different entries. In preliminary experiments, we exploit only the type-1 relation listed in Table 1. After clustering word images by image matching, similar images will be in the same cluster. Any two images from the same cluster hold the type-1 relation. Word collocation data were trained from the Penn Treebank and the Brown Corpus except for the five testing samples. Table 2 shows results of candidate selection with and without using visual inter-word constraints.
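Before turning to those results, the type-1 re-evaluation strategy described above can be sketched in code. The fragment below is only a rough illustration under stated assumptions: the data structures, the single multiplicative boost, and the choice of the most confident cluster member as the reference image are not taken from the actual implementation.

```python
def reevaluate_clusters(candidates, clusters, boost=1.2):
    """candidates: image_id -> list of (word, confidence), best first.
    clusters: lists of image_ids judged visually identical (type-1 relation).
    Visually identical images should receive the same identity, so candidates
    that agree with the most confident member of a cluster are rewarded and
    the other members follow its decision."""
    for cluster in clusters:
        # Pick the member whose current top candidate is most confident.
        leader = max(cluster, key=lambda img: candidates[img][0][1])
        leader_word = candidates[leader][0][0]
        for img in cluster:
            rescored = []
            for word, conf in candidates[img]:
                if word == leader_word:
                    conf *= boost          # reward agreement with the leader
                rescored.append((word, conf))
            rescored.sort(key=lambda wc: wc[1], reverse=True)
            candidates[img] = rescored
    return candidates
```

A step of this kind can be interleaved with relaxation iterations or applied after lattice parsing, which is how the visual constraints and the linguistic knowledge sources would cooperate.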
The top 1 correct rate for candidate lists generated by a word recognizer is as low as 57.1%. Without using visual inter-word constraints, the correct rate of candidate selection by relaxation and lattice parsing is 83.1%. After using visual inter-word constraints, the correct rate becomes 88.2%.

Table 2: Comparison Of Candidate Selection Results
Article | Number Of Words | Word Recognition Result | Candidate Selection Using No Constraints | Candidate Selection Using Constraints
A06     | 2213            | 53.8%                   | 83.1%                                    | 88.5%
G02     | 2267            | 67.7%                   | 83.8%                                    | 87.8%
J42     | 2269            | 54.5%                   | 83.6%                                    | 89.5%
N01     | 2313            | 57.3%                   | 82.7%                                    | 87.1%
R07     | 2340            | 52.2%                   | 82.6%                                    | 88.1%
Total   | 11402           | 57.1%                   | 83.1%                                    | 88.2%

Conclusions and Future Directions

Integration of natural language processing and image processing is a new area of interest in document analysis. Word candidate selection is a problem we are faced with in degraded text recognition, as well as in handwriting recognition. Statistical language models and lattice parsers have been designed for the problem. Visual inter-word constraints in a text page can be used with linguistic knowledge sources to facilitate candidate selection. Preliminary experimental results show that the performance of candidate selection is improved significantly although only one inter-word relation was used. The next step is to fully integrate visual inter-word constraints and linguistic knowledge sources in the relaxation algorithm and the lattice parser.

Acknowledgments

I would like to thank Jonathan J. Hull for his support and his helpful comments on drafts of this paper.

References

[1] Henry S. Baird, "Document Image Defect Models and Their Uses," in Proceedings of the Second International Conference on Document Analysis and Recognition ICDAR-93, Tsukuba, Japan, October 20-22, 1993, pp. 62-67.
[2] Kenneth Ward Church and Patrick Hanks, "Word Association Norms, Mutual Information, and Lexicography," Computational Linguistics, Vol. 16, No. 1, pp. 22-29, 1990.
[3] Tao Hong and Jonathan J. Hull, "Text Recognition Enhancement with a Probabilistic Lattice Chart Parser," in Proceedings of the Second International Conference on Document Analysis and Recognition ICDAR-93, Tsukuba, Japan, October 20-22, 1993.
[4] Tao Hong and Jonathan J. Hull, "Degraded Text Recognition Using Word Collocation," in Proceedings of the IS&T/SPIE Symposium on Document Recognition, San Jose, CA, February 6-10, 1994.
[5] Jonathan J. Hull, "A Hidden Markov Model for Language Syntax in Text Recognition," in Proceedings of the 11th IAPR International Conference on Pattern Recognition, The Hague, The Netherlands, pp. 124-127, 1992.
[6] Siamak Khoubyari and Jonathan J. Hull, "Keyword Location in Noisy Document Image," in Proceedings of the Second Annual Symposium on Document Analysis and Information Retrieval, Las Vegas, Nevada, pp. 217-231, April 26-28, 1993.
[7] Masaru Tomita, "An Efficient Word Lattice Parsing Algorithm for Continuous Speech Recognition," in Proceedings of the International Conference on Acoustics, Speech and Signal Processing, 1986.

330 | 1994 | 49 |
From Strings to Trees to Strings to Trees (Abstract) Aravind K. Joshi Dept. of Computer and Information Science University of Pennsylvania, Philadelphia PA 19104 Sentences are not just strings of words (or are they ?), they have some (hierarchical) structure. This much is accepted by all grammar formalisms. But how much structure is needed? The more the sentences are like strings the less the need for structure. A certain amount of structure is necessary simply be- cause a clause may embed another clause, or one clause may attach to another clause or parts of it. Leav- ing this need of structure aside, the question then is how much structure should a (minimal) clause have? Grammar formalisms can differ significantly on this is- sue. Minimal clauses can be just strings, or words linked by dependencies (dependency trees), or with rich phrase structure trees, or with flat (one level) phrase structure trees (almost strings) and so on. How much hierarchical structure is needed for a minimal clause is still an open question, that is being debated heat- edly. How are clauses put together? Are these oper- ations more like string manipulations (concatenation, insertion, or wrapping, for example) or are they more like tree transformations (generalized transformations of the early transformational grammars, for example)? Curiously, the early transformational grammars, al- though clearly using tree transformations, actually for- mulated the transformations as pseudo string-like op- erations! More recent non-transformational grammars differ significantly with respect to their use of string rewriting or tree rewriting operations. Grammar formalisms differ with respect to their stringiness or treeness. Also during their evolution, they have gone back and forth between string-like and tree-like representations, often combining them in dif- ferent ways. These swings are a reflection of the com- plex interplay between aspects of language structure such as constituency, dependency, dominance, locality of predicates and their arguments, adjacency, order, and discontinuity. We will discuss these issues in an in- formal manner, in the context of a range of formalisms. 33 | 1994 | 5 |
AN AUTOMATIC METHOD OF FINDING TOPIC BOUNDARIES Jeffrey C. Reynar* Department of Computer and Information Science University of Pennsylvania Philadelphia, Pennsylvania, USA j [email protected] Abstract This article outlines a new method of locating discourse boundaries based on lexical cohesion and a graphical technique called dotplotting. The application of dot- plotting to discourse segmentation can be performed ei- ther manually, by examining a graph, or automatically, using an optimization algorithm. The results of two ex- periments involving automatically locating boundaries between a series of concatenated documents are pre- sented. Areas of application and future directions for this work are also outlined. Introduction In general, texts are "about" some topic. That is, the sentences which compose a document contribute infor- mation related to the topic in a coherent fashion. In all but the shortest texts, the topic will be expounded upon through the discussion of multiple subtopics. Whether the organization of the text is hierarchical in nature, as described in (Grosz and Sidner, 1986), or linear, as examined in (Skorochod'ko, 1972), boundaries between subtopics will generally exist. In some cases, these boundaries will be explicit and will correspond to paragraphs, or in longer texts, sec- tions or chapters. They can also be implicit. Newspa- per articles often contain paragraph demarcations, but less frequently contain section markings, even though lengthy articles often address the main topic by dis- cussing subtopics in separate paragraphs or regions of the article. Topic boundaries are useful for several different tasks. Hearst and Plaunt (1993) demonstrated their usefulness for information retrieval by showing that segmenting documents and indexing the resulting subdocuments improves accuracy on an information retrieval task. Youmans (1991) showed that his text segmentation al- gorithm could be used to manually find scene bound- aries in works of literature. Morris and Hirst (1991) at- *The author would like to thank Christy Doran, Ja- son Eisner, A1 Kim, Mark Liberman, Mitch Marcus, Mike Schultz and David Yarowsky for their helpful comments and acknowledge the support of DARPA grant No. N0014-85- K0018 and ARO grant No. DAAL 03~89-C0031 PRI. tempted to confirm the theories of discourse structure outlined in (Grosz and Sidner, 1986) using information from a thesaurus. In addition, Kozima (1993) specu- lated that segmenting text along topic boundaries may be useful for anaphora resolution and text summariza- tion. This paper is about an automatic method of finding discourse boundaries based on the repetition of lexi- cal items. Halliday and Hasan (1976) and others have claimed that the repetition of lexical items, and in par- ticular content-carrying lexical items, provides coher- ence to a text. This observation has been used implic- itly in several of the techniques described above, but the method presented here depends exclusively on it. Methodology Church (1993) describes a graphical method, called doL- plotting, for aligning bilingual corpora. This method has been adapted here for finding discourse boundaries. The dotplot used for discovering topic boundaries is cre- ated by enumerating the lexical items in an article and plotting points which correspond to word repetitions. 
For example, if a particular word appears at word positions x and y in a text, then the four points corresponding to the Cartesian product of the set containing these two positions with itself would be plotted. That is, (x, x), (x, y), (y, x) and (y, y) would be plotted on the dotplot.

Prior to creating the dotplot, several filters are applied to the text. First, since closed-class words carry little semantic weight, they are removed by filtering based on part of speech information. Next, the remaining words are lemmatized using the morphological analysis software described in (Karp et al., 1992). Finally, the lemmas are filtered to remove a small number of common words which are regarded as open-class by the part of speech tag set, but which contribute little to the meaning of the text. For example, forms of the verbs BE and HAVE are open class words, but are ubiquitous in all types of text. Once these steps have been taken, the dotplot is created in the manner described above. A sample dotplot of four concatenated Wall Street Journal articles is shown in figure 1. The real boundaries between documents are located at word positions 1085, 2206 and 2863.

331

[Figure 1: The dotplot of four concatenated Wall Street Journal articles.]
[Figure 2: The outside density plot of the same four articles.]

The word position in the file increases as values increase along both axes of the dotplot. As a result, the diagonal with slope equal to one is present since each word in the text is identical to itself. The gaps in this line correspond to points where words have been removed by one of the filters. Since the repetition of lexical items occurs more frequently within regions of a text which are about the same topic or group of topics, the visually apparent squares along the main diagonal of the plot correspond to regions of the text. Regions are delimited by squares because of the symmetry present in the dotplot.

Although boundaries may be identified visually using the dotplot, the plot itself is unnecessary for the discovery of boundaries. The reason the regions along the diagonal are striking to the eye is that they are denser. This fact leads naturally to an algorithm based on maximizing the density of the regions within squares along the diagonal, which in turn corresponds to minimizing the density of the regions not contained within these squares. Once the densities of areas outside these regions have been computed, the algorithm begins by selecting the boundary which results in the lowest outside density. Additional boundaries are added until either the outside density increases or a particular number of boundaries have been added.
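A compact sketch of this boundary-selection procedure is given below. It is an illustrative reconstruction rather than the original program: it assumes the text has already been reduced to its filtered, lemmatized content words, that candidate boundaries are the token offsets of sentence or paragraph ends, and that outside density is computed as the number of word repetitions crossing a boundary divided by the dotplot area lying outside the diagonal squares.

```python
from collections import Counter

def outside_density(tokens, boundaries):
    """Density of dots outside the squares along the diagonal for the
    regions delimited by `boundaries` (token offsets, ascending)."""
    cuts = [0] + sorted(boundaries) + [len(tokens)]
    regions = [Counter(tokens[a:b]) for a, b in zip(cuts, cuts[1:])]
    sizes = [b - a for a, b in zip(cuts, cuts[1:])]
    dots = area = 0
    seen, seen_size = Counter(), 0
    for region, size in zip(regions, sizes):
        # dot product of this region's word counts with all earlier regions
        dots += sum(seen[w] * c for w, c in region.items())
        area += seen_size * size
        seen += region
        seen_size += size
    return dots / area if area else float("inf")

def find_boundaries(tokens, candidates, max_boundaries):
    """Greedily add the candidate boundary that minimizes outside density."""
    chosen, best = [], float("inf")
    while len(chosen) < max_boundaries:
        scored = [(outside_density(tokens, chosen + [c]), c)
                  for c in candidates if c not in chosen]
        if not scored:
            break
        density, cut = min(scored)
        if density > best:        # stop once adding a boundary hurts
            break
        best = density
        chosen.append(cut)
    return sorted(chosen)
```

The pairwise comparison over all regions is what gives the method its global character: each proposed segmentation is scored against every word repetition in the text, not only those near the boundary under consideration.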
Potential boundaries are selected from a list of either sentence boundaries or paragraph boundaries, depending on the experiment.

More formally, let n be the length of the concatenation of articles; let m be the number of unique tokens (after lemmatization and removal of words on the stop list); let B be a list of boundaries, initialized to contain only the boundary corresponding to the beginning of the series of articles, 0. Maintain B in ascending order. Let i be a potential boundary; let P = B ∪ {i}, also sorted in ascending order; let Vx,y be a vector containing the word counts associated with word positions x through y in the concatenation. Now, find the i such that the quantity below, the outside density, is minimized. Repeat this minimization, inserting i into B, until the desired number of boundaries have been located.

( sum for j = 2 to |P| of V(0, P[j-1]) · V(P[j-1]+1, P[j]) ) / ( sum for j = 2 to |P| of P[j-1] × (P[j] − P[j-1]) )

Here the numerator counts the dots that fall outside the squares along the diagonal and the denominator is the area of that outside region, with the final region taken to extend to position n.

The dot product in the equation reveals the similarity between this method and Hearst and Plaunt's (1993) work, which was done in a vector-space framework. The crucial difference lies in the global nature of this equation. Their algorithm placed boundaries by comparing neighboring regions only, while this technique compares each region with all other regions.

A graph depicting the density of the regions not enclosed in squares along the diagonal is shown in figure 2. The y-coordinate on this graph represents the density when a boundary is placed at the corresponding location on the x-axis. These data are derived from the dotplot shown in figure 1. Actual boundaries correspond to the most extreme minima--those at positions 1085, 2206 and 2863.

Results

Since determining where topic boundaries belong is a subjective task (Passoneau and Litman, 1993), the preliminary experiments conducted using this algorithm involved discovering boundaries between concatenated articles. All of the articles were from the Wall Street Journal and were tagged in conjunction with the Penn Treebank project, which is described in (Marcus et al., 1993). The motivation behind this experiment is that newspaper articles are about sufficiently different topics that discerning the boundaries between them should serve as a baseline measure of the algorithm's effectiveness.

332

Table 1: Results of two experiments.
                          Expt. 1   Expt. 2
# of exact matches          271       106
# of close matches          196        55
# of extra boundaries      1085        38
# of missed boundaries       43       355
Precision                 0.175     0.549
Precision counting close  0.300     0.803
Recall                    0.531     0.208
Recall counting close     0.916     0.304

The results of two experiments in which between two and eight randomly selected Wall Street Journal articles were concatenated are shown in table 1. Both experiments were performed on the same data set, which consisted of 150 concatenations of articles containing a total of 660 articles averaging 24.5 sentences in length. The average sentence length was 24.5 words. The difference between the two experiments was that in the first experiment, boundaries were placed only at the ends of sentences, while in the second experiment, they were only placed at paragraph boundaries. Tuning the stopping criteria parameters in either method allows improvements in precision to be traded for declines in recall and vice versa. The first experiment demonstrates that high recall rates can be achieved and the second shows that high precision can also be achieved.

In these tests, a minimum separation between boundaries was imposed to prevent documents from being repeatedly subdivided around the location of one actual boundary.
For the purposes of evaluation, an exact match is one in which the algorithm placed a boundary at the same position as one existed in the collection of articles. A missed boundary is one for which the algo- rithm found no corresponding boundary. If a boundary was not an exact match, but was within three sentences of the correct location, the result was considered a close match. Precision and recall scores were computed both including and excluding the number of close matches. The precision and recall scores including close matches reflect the admission of only one close match per ac- tual boundary. It should be noted that some of the extra boundaries found may correspond to actual shifts in topic and may not be superfluous. Future Work The current implementation of the algorithm relies on part of speech information to detect closed class words and to find sentence boundaries. However, a larger common word list and a sentence boundary recognition algorithm could be employed to obviate the need for tags. Then the method could be easily applied to large amounts of text. Also, since the task of segmenting concatenated documents is quite artificial, the approach should be applied to finding topic boundaries. To this end, the algorithm's output should be compared to the segmentations produced by human judges and the sec- tion divisions authors insert into some forms of writing, such as technical writing. Additionally, the segment in- formation produced by the algorithm should be used in an information retrieval task as was done in (Hearst and Plaunt, 1993). Lastly, since this paper only exam- ined flat segmentations, work needs to be done to see whether useful hierarchical segmentations can be pro- duced. References Church, Kenneth Ward. Char_align: A Program for Aligning Parallel Texts at the Character Level. Pro- ceedings of the 31st Annual Meeting of the Associa- tion for Computational Linguistics, 1993. Grosz, Barbara J. and Candace L. Sidner. Attention, Intentions and the Structure of Discourse. Computa- tional Linguistics, Volume 12, Number 3, 1986. Halliday, Michael and Ruqaiya Hasan. Cohesion in En- glish. New York: Longman Group, 1976. Hearst, Marti A. and Christian Plaunt. Subtopic Struc- turing for Full-Length Document Access. Proceed- ings of the Special Interest Group on Information Re- trieval, 1993. Karp, Daniel, Yves Schabes, Martin Zaidel and Dania Egedi. A Freely Available Wide Coverage Morpho- logical Analyzer for English. Proceedings of the 15th International Conference on Computational Linguis- tics, 1992. Kozima, Hideki. Text Segmentation Based on Similar- ity Between Words. Proceedings of the 31st Annual Meeting of the Association for Computational Lin- guistics, 1993. Marcus, Mitchell P., Beatrice Santorini and Mary Ann Markiewicz. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Lin- guistics, Volume 19, Number 2, 1993. Morris, Jane and Graeme Hirst. Lexical Cohesion Com- puted by Thesaural Relations as an Indicator of the Structure of Text. Computational Linguistics, Vol- ume 17, Number 1, 1991. Passoneau, Rebecca J. and Diane J. Litman. Intention- Based Segmentation: Human Reliability and Corre- lation with Linguistic Cues. Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, 1993. Skorochod'ko, E.F. Adaptive Method of Automatic Ab- stracting and Indexing. Information Processing, Vol- ume 71, 1972. Youmans, Gilbert. A New Tool for Discourse Analy- sis: The Vocabulary-Management Profile. 
Language, Volume 67, Number 4, 1991. | 1994 | 50 |
AUTOMATIC ALIGNMENT IN PARALLEL CORPORA Harris Papageorgiou, Lambros Cranias, Stelios Piperidis I Institute for Language and Speech Processing 22, Margari Street, 115 25 Athens, Greece [email protected] ABSTRACT This paper addresses the alignment issue in the framework of exploitation of large bi- multilingual corpora for translation purposes. A generic alignment scheme is proposed that can meet varying requirements of different applications. Depending on the level at which alignment is sought, appropriate surface linguistic information is invoked coupled with information about possible unit delimiters. Each text unit (sentence, clause or phrase) is represented by the sum of its content tags. The results are then fed into a dynamic programming framework that computes the optimum alignment of units. The proposed scheme has been tested at sentence level on parallel corpora of the CELEX database. The success rate exceeded 99%. The next steps of the work concern the testing of the scheme's efficiency at lower levels endowed with necessary bilingual information about potential delimiters. INTRODUCTION Parallel linguistically meaningful text units are indispensable in a number of NLP and lexicographic applications and recently in the so called Example-Based Machine Translation (EBMT). As regards EBMT, a large amount of bi- multilingual translation examples is stored in a database and input expressions are rendered in the target language by retrieving from the database that example which is most similar to the input. A task of crucial importance in this framework, is the establishment of correspondences between units of multilingual texts at sentence, phrase or even word level. The adopted criteria for ascertaining the adequacy of alignment methods are stated as follows : 1This research was supported by the LRE I TRANSLEARN project of the European Union • an alignment scheme must cope with the embedded extra-linguistic data (tables, anchor points, SGML markers, etc) and their possible inconsistencies. • it should be able to process a large amount of texts in linear time and in a computationally effective way. • in terms of performance a considerable success rate (above 99% at sentence level) must be encountered in order to construct a database with truthfully correspondent units. It is desirable that the alignment method is language- independent. s the proposed method must be extensible to accommodate future improvements. In addition, any training or error correction mechanism should be reliable, fast and should not require vast amounts of data when switching from a pair of languages to another or dealing with different text type corpora. Several approaches have been proposed tackling the problem at various levels. [Catizone 89] proposed linking regions of text according to the regularity of word co-occurrences across texts. [Brown 91] described a method based on the number of words that sentences contain. Moreover, certain anchor points and paragraph markers are also considered. The method has been applied to the Hansard Corpus achieving an accuracy between 96%-97%. [Gale 91] [Church 93] proposed a method that relies on a simple statistical model of character lengths. The model is based on the observation that longer sentences in one language tend to be translated into longer sequences in the other language while shorter ones tend to be translated into shorter ones. 
A probabilistic score is assigned to each pair of proposed sentence pairs, based on the ratio of lengths of the two sentences and the variance of this ratio. 334 Although the apparent efficacy of the Gale- Church algorithm is undeniable and validated on different pairs of languages, it faces problems when handling complex alignments. The 2-1 alignments had five times the error rate of 1-1. The 2-2 category disclosed a 33% error rate, while the 1-0 or 0-1 alignments were totally missed. To overcome the inherited weaknesses of the Gale-Church method, [Simard 92] proposed using cognates, which are pairs of tokens of different languages which share "obvious" phonological or orthographic and semantic properties, since these are likely to be used as mutual translations. In this paper, an alignment scheme is proposed in order to deal with the complexity of varying requirements envisaged by different applications in a systematic way. For example, in EBMT, the requirements are strict in terms of information integrity but relaxed in terms of delay and response time. Our approach is based on several observations. First of all, we assume that establishment of correspondences between units can be applied at sentence, clause, and phrase level. Alignment at any of these levels has to invoke a different set of textual and linguistic information (acting as unit delimiters). In this paper, alignment is tackled at sentence level. THE ALIGNMENT ALGORITHM_ Content words, unlike functional ones, might be interpreted as the bearers that convey information by denoting the entities and their relationships in the world. The notion of spreading the semantic load supports the idea that every content word should be represented as the union of all the parts of speech we can assign to it [Basili 92]. The postulated assumption is that a connection between two units of text is established if, and only if, the semantic load in one unit approximates the semantic load of the other. Based on the fact that the principal requirement in any translation exercise is meaning preservation across the languages of the translation pair, we define the semantic load of a sentence as the patterns of tags of its content words. Content words are taken to be verbs, nouns, adjectives and adverbs. The complexity of transfer in translation imposes the consideration of the number of content tags which appear in a tag pattern. By considering the total number of content tags the morphological derivation procedures observed across languages, e.g. the transfer of a verb into a verb+deverbal noun pattern, are taken into account. Morphological ambiguity problems pertaining to content words are treated by constructing ambiguity classes (acs) leading to a generalised set of content tags. It is essential here to clarify that in this approach no disambiguation module is prerequisite. The time breakdown for morphological tagging, without a disambiguator device, is according to [Cutting 92] in the order of 1000 ~tseconds per token. Thus, tens of megabytes of text may then be tagged per hour and high coverage can be obtained without prohibitive effort. Having identified the semantic load of a sentence, Multiple Linear Regression is used to build a quantitative model relating the content tags of the source language (SL) sentence to the response, which is assumed to be the sum of the counts of the corresponding content tags in the target language (TL) sentence. 
The regression model is fit to a set of sample data which has been manually aligned at sentence level. Since we intuitively believe that a simple summation over the SL content tag counts would be a rather good estimator of the response, we decide that the use of a linear model would be a cost-effective solution. The linear dependency of y (the sum of the counts of the content tags in the TL sentence) upon the xi (the counts of each content tag category and of each ambiguity class over the SL sentence) can be stated as:

    y = b0 + b1*x1 + b2*x2 + b3*x3 + ... + bn*xn + ε    (1)

where the unknown parameters {bi} are the regression coefficients, and ε is the error of estimation, assumed to be normally distributed with zero mean and variance σ². In order to deal with different taggers and alternative tagsets, other configurations of (1), merging acs appropriately, are also recommended. For example, if an acs accounts for unknown words, we can use the fact that most unknown words are nouns or proper nouns and merge this category with nouns. We can also merge acs that are represented with only a few distinct words in the training corpus. Moreover, the use of relatively few acs (associated with content words) reduces the number of parameters to be estimated, affecting the size of the sample and the time required for training. The method of least squares is used to estimate the regression coefficients in (1). Having estimated the bi and σ², the probabilistic score assigned to the comparison of two sentences across languages is just the area under the N(0, σ²) p.d.f. specified by the estimation error. This probabilistic score is utilised in a Dynamic Programming (DP) framework similar to the one described in [Gale 91]. The DP algorithm is applied to aligned paragraphs and produces the optimum alignment of sentences within the paragraphs.

EVALUATION
The application on which we are developing and testing the method is implemented on the Greek-English language pair of sentences of the CELEX corpus (the computerised documentation system on European Community Law). Training was performed on 40 Articles of the CELEX corpus accounting for 30000 words. We have tested this algorithm on a randomly selected corpus of the same text type of about 3200 sentences. Due to the sparseness of acs (associated only with content words) in our training data, we reconstruct (1) by using four variables. For inflective languages like Greek, morphological information associated with word forms plays a crucial role in assigning a single category. Moreover, by counting instances of acs in the training corpus, we observed that words that, for example, can be a noun or a verb, are (due to the lack of the second singular person in the corpus) exclusively nouns. Hence:

    y = b0 + b1*x1 + b2*x2 + b3*x3 + b4*x4 + ε    (2)

where x1 represents verbs; x2 stands for nouns, unknown words, vernou (verb or noun) and nouadj (noun or adjective); x3 for adjectives and veradj (verb or adjective); and x4 for adverbs and advadj (adverb or adjective). σ² was estimated at 3.25 on our training sample, while the regression coefficients were:

    b0 = 0.2848, b1 = 1.1075, b2 = 0.9474, b3 = 0.8584, b4 = 0.7579

An accuracy that approximated a 100% success rate was recorded. Results are shown in Table 1. It is remarkable that there is no need for any lexical constraints or certain anchor points to improve the performance. Additionally, the same model and parameters can be used in order to cope with intra-sentence alignment.
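As an illustration of how the fitted model is used, the Python sketch below scores a candidate sentence pair with the coefficients and error variance reported above. It is not the authors' code: the tag-count dictionary keys are invented, the counts are assumed to be already merged into the four variable groups of equation (2), and reading the score as the two-tailed tail area of N(0, σ²) beyond the observed estimation error is an interpretation of the description above.

    from scipy.stats import norm

    B = [0.2848, 1.1075, 0.9474, 0.8584, 0.7579]   # b0 .. b4 from the training sample
    SIGMA = 3.25 ** 0.5                            # square root of the estimated error variance

    def predicted_load(sl_counts):
        # Predicted sum of TL content-tag counts from the four SL variables x1..x4.
        x = [sl_counts[k] for k in ("x1_verbs", "x2_nouns", "x3_adjs", "x4_advs")]
        return B[0] + sum(b * xi for b, xi in zip(B[1:], x))

    def pair_score(sl_counts, tl_content_tag_total):
        # Probability of an estimation error at least this large under N(0, sigma^2).
        error = tl_content_tag_total - predicted_load(sl_counts)
        return 2.0 * norm.sf(abs(error), loc=0.0, scale=SIGMA)

    # Example: an SL sentence with 2 verbs, 5 nouns, 1 adjective and 1 adverb,
    # compared against a TL sentence whose content tags sum to 9.
    score = pair_score({"x1_verbs": 2, "x2_nouns": 5, "x3_adjs": 1, "x4_advs": 1}, 9)

The resulting scores can then fill the cost matrix of the dynamic-programming search within each aligned paragraph.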
In order to align all the CELEX texts, we intend to prepare the material (text handling, POS tagging in different language pairs and different tag sets, etc.) so that we will be able to evaluate the method on a more reliable basis. We also hope to test the method's efficiency at phrase level endowed with necessary bilingual information about phrase delimiters. It will be shown there that reusability of previous information facilitates tuning and resolving of inconsistencies between various delimiters.

    category        N      correct matches
    1-0 or 0-1        4         5
    1-1            3178      3178
    2-1 or 1-2       36        35
    2-2               0         0

    Table 1: Matches in sentence pairs of the CELEX corpus

REFERENCES
[Basili 92] Basili R., Pazienza M., Velardi P. "Computational lexicons: The neat examples and the odd exemplars". Proc. of the Third Conference on Applied NLP, 1992.
[Brown 91] Brown P., Lai J. and Mercer R. "Aligning sentences in parallel corpora". Proc. of ACL, 1991.
[Catizone 89] Catizone R., Russell G., Warwick S. "Deriving translation data from bilingual texts". Proc. of the First Lexical Acquisition Workshop, Detroit, 1989.
[Church 93] Church K. "Char_align: A program for aligning parallel texts at character level". Proc. of ACL, 1993.
[Cutting 92] Cutting D., Kupiec J., Pedersen J., Sibun P. "A practical part-of-speech tagger". Proc. of ACL, 1992.
[Gale 91] Gale W., Church K. "A program for aligning sentences in bilingual corpora". Proc. of ACL, 1991.
[Simard 92] Simard M., Foster G., Isabelle P. "Using cognates to align sentences in bilingual corpora". Proc. of TMI, 1992.
| 1994 | 51 |
CONCEPTUAL ASSOCIATION FOR COMPOUND NOUN ANALYSIS Microsoft Institute 65 Epping Road North Ryde NSW 2113 (t-markl @ microsoft.corn) Mark Lauer AUSTRALIA Department of Computing Macquarie University NSW 2109 (mark @ macadam, mpce. mq.edu .au) Abstract This paper describes research toward the automatic interpretation of compound nouns using corpus statistics. An initial study aimed at syntactic disambiguation is presented. The approach presented bases associations upon thesaurus categories. Association data is gathered from unambiguous cases extracted from a corpus and is then applied to the analysis of ambiguous compound nouns. While the work presented is still in progress, a first attempt to syntactically analyse a test set of 244 examples shows 75% correctness. Future work is aimed at improving this accuracy and extending the technique to assign semantic role information, thus producing a complete interpretation. INTRODUCTION Compound Nouns: Compound nouns (CNs) are a commonly occurring construction in language consisting of a sequence of nouns, acting as a noun; pottery coffee mug, for example. For a detailed linguistic theory of compound noun syntax and semantics, see Levi (1978). Compound nouns are analysed syntactically by means of the rule N --¢ N N applied recursively. Compounds of more than two nouns are ambiguous in syntactic structure. A necessary part of producing an interpretation of a CN is an analysis of the attachments within the compound. Syntactic parsers cannot choose an appropriate analysis, because attachments are not syntactically governed. The current work presents a system for automatically deriving a syntactic analysis of arbitrary CNs in English using corpus statistics. Task description: The initial task can be formulated as choosing the most probable binary bracketing for a given noun sequence, known to form a compound noun, without knowledge of the context. E.G.: (pottery (coffee mug)); ((coffee mug) holder) Corpus Statistics: The need for wide ranging lexical-semantic knowledge to support NLP, commonly referred to as the ACQUISITION PROBLEM, has generated a great deal of research investigating automatic means of acquiring such knowledge. Much work has employed carefully constructed parsing systems to extract knowledge from machine readable dictionaries (e.g., Vanderwende, 1993). Other approaches have used rather simpler, statistical analyses of large corpora, as is done in this work. Hindle and Rooth (1993) used a rough parser to extract lexical preferences for prepositional phrase (PP) attachment. The system counted occurrences of unambiguously attached PPs and used these to define LEXICAL ASSOCIATION between prepositions and the nouns and verbs they modified. This association data was then used to choose an appropriate attachment for ambiguous cases. The counting of unambiguous cases in order to make inferences about ambiguous ones is adopted in the current work. An explicit assumption is made that lexical preferences are relatively independent of the presence of syntactic ambiguity. Subsequently, Hindle and Rooth's work has been extended by Resnik and Hearst (1993). Resnik and Hearst attempted to include information about typical prepositional objects in their association data. They introduced the notion of CONCEPTUAL ASSOCIATION in which associations are measured between groups of words considered to represent concepts, in contrast to single words. 
Such class-based approaches are used because they allow each observation to be generalized, thus reducing the amount of data required. In the current work, a freely available version of Roget's thesaurus is used to provide the grouping of words into concepts, which then form the basis of conceptual association. The research presented here can thus be seen as investigating the application of several key ideas in Hindle and Rooth (1993) and in Resnik and Hearst (1993) to the solution of an analogous problem, that of compound noun analysis. However, both these works were aimed solely at syntactic disambiguation. The goal of semantic interpretation remains to be investigated.

METHOD
Extraction Process: The corpus used to collect information about compound nouns consists of some 7.8 million words from Grolier's multimedia on-line encyclopedia. The University of Pennsylvania morphological analyser provides a database of more than 315,000 inflected forms and their parts of speech. The Grolier's text was searched for consecutive words listed in the database as always being nouns and separated only by white space. This prevented comma-separated lists and other non-compound noun sequences from being included. However, it did eliminate many CNs from consideration because many nouns are occasionally used as verbs and are thus ambiguous for part of speech. This resulted in 35,974 noun sequences of which all but 655 were pairs. The first 1000 of the sequences were examined manually to check that they were not incidentally adjacent nouns (as in direct and indirect objects, say). Only 2% did not form CNs, thus establishing a reasonable utility for the extraction method. The pairs were then used as a training set, on the assumption that a two word noun compound is unambiguously bracketed.[1]

Thesaurus Categories: The 1911 version of Roget's Thesaurus contains 1043 categories, with an average of 34 single word nouns in each. These categories were used to define concepts in the sense of Resnik and Hearst (1993). Each noun in the training set was tagged with a list of the categories in which it appeared.[2] All sequences containing nouns not listed in Roget's were discarded from the training set.

Gathering Associations: The remaining 24,285 pairs of category lists were then processed to find a conceptual association (CA) between every ordered pair of thesaurus categories (t1, t2) using the formula below. CA(t1, t2) is the mutual information between the categories, weighted for ambiguity. It measures the degree to which the modifying category predicts the modified category and vice versa. When categories predict one another, we expect them to be attached in the syntactic analysis.

    Let AMBIG(w) = the number of thesaurus categories w appears in (the ambiguity of w).
    Let COUNT(w1, w2) = the number of instances of w1 modifying w2 in the training set.
    Let FREQ(t1, t2) = sum over w1 in t1 and w2 in t2 of  COUNT(w1, w2) / (AMBIG(w1) * AMBIG(w2))
    Let CA(t1, t2) = FREQ(t1, t2) / (sum over all i of FREQ(t1, i)  *  sum over all i of FREQ(i, t2))

where i ranges over all possible thesaurus categories. Note that this measure is asymmetric. CA(t1, t2) measures the tendency for t1 to modify t2 in a compound noun, which is distinct from CA(t2, t1).

Automatic Compound Noun Analysis: The following procedure can be used to syntactically analyse ambiguous CNs.

[1] This introduces some additional noise, since extraction cannot guarantee to produce complete noun compounds.
[2] Some simple morphological rules were used at this point to reduce plural nouns to singular forms.
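Before turning to the analysis procedure itself, here is a small Python sketch of the association-gathering step just defined. The names are invented for exposition: roget is assumed to map each noun to the list of thesaurus categories it appears in, and training_pairs to hold the (modifier, head) noun pairs extracted from the corpus.

    from collections import defaultdict

    def gather_freq(training_pairs, roget):
        # Accumulate FREQ(t1, t2), weighting each pair occurrence by
        # 1 / (AMBIG(w1) * AMBIG(w2)).
        freq = defaultdict(float)
        for w1, w2 in training_pairs:
            cats1, cats2 = roget[w1], roget[w2]
            weight = 1.0 / (len(cats1) * len(cats2))
            for t1 in cats1:
                for t2 in cats2:
                    freq[(t1, t2)] += weight
        return freq

    def conceptual_association(freq, t1, t2):
        # CA(t1, t2) = FREQ(t1, t2) / (sum_i FREQ(t1, i) * sum_i FREQ(i, t2)).
        modifier_total = sum(v for (a, _), v in freq.items() if a == t1)
        head_total = sum(v for (_, b), v in freq.items() if b == t2)
        denominator = modifier_total * head_total
        return freq[(t1, t2)] / denominator if denominator else 0.0

Because FREQ divides each observation by the ambiguity of its words, a noun that appears in many thesaurus categories contributes only fractionally to each category pair.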
Suppose the compound consists of three nouns: w1 w2 w3. A left-branching analysis, [[w1 w2] w3], indicates that w1 modifies w2, while a right-branching analysis, [w1 [w2 w3]], indicates that w1 modifies something denoted primarily by w3. A modifier should be associated with words it modifies. So, when CA(pottery, mug) >> CA(pottery, coffee), we prefer (pottery (coffee mug)). First though, we must choose concepts for the words. For each wi (i = 2 or 3), choose categories Si (with w1 in Si) and Ti (with wi in Ti) so that CA(Si, Ti) is greatest. These categories represent the most significant possible word meanings for each possible attachment. Then choose wi so that CA(Si, Ti) is maximum and bracket w1 as a sibling of wi. We have then chosen the attachment having the most significant association in terms of mutual information between thesaurus categories. In compounds longer than three nouns, this procedure can be generalised by selecting, from all possible bracketings, that for which the product of greatest conceptual associations is maximized.

RESULTS
Test Set and Evaluation: Of the noun sequences extracted from Grolier's, 655 were more than two nouns in length and were thus ambiguous. Of these, 308 consisted only of nouns in Roget's and these formed the test set. All of them were triples. Using the full context of each sequence in the test set, the author analysed each of these, assigning one of four possible outcomes. Some sequences were not CNs (as observed above for the extraction process) and were labeled Error. Other sequences exhibited what Hindle and Rooth (1993) call SEMANTIC INDETERMINACY, where the meanings associated with two attachments cannot be distinguished in the context. For example, college economics texts. These were labeled Indeterminate. The remainder were labeled Left or Right depending on whether the actual analysis is left- or right-branching.

    TABLE 1 - Test set analysis distribution:
    Labels          L      R      I      E    Total
    Count         163     81     35     29      308
    Percentage    53%    26%    11%     9%     100%
    Proportion of different labels in the test set.

Table 1 shows the distribution of labels in the test set. Hereafter only those triples that received a bracketing (Left or Right) will be considered. The attachment procedure was then used to automatically assign an analysis to each sequence in the test set. The resulting correctness is shown in Table 2. The overall correctness is 75% on 244 examples. The results show more success with left branching attachments, so it may be possible to get better overall accuracy by introducing a bias.

    TABLE 2 - Results of test:
                     Output Left   Output Right
    Actual Left          131             32
    Actual Right          30             51
    The proportions of correct and incorrect analyses.

DISCUSSION
Related Work: There are two notable systems that are related to the current work. The SENS system described in Vanderwende (1993) extracted semantic features from machine readable dictionaries by means of structural patterns applied to definitions. These features were then matched by heuristics which assigned likelihood estimates to each possible semantic relationship. The work only addressed the interpretation of pairs of nouns and did not mention the problem of syntactic ambiguity. A very simple technique aimed at bracketing ambiguous compound nouns is reported in Pustejovsky et al. (1993). While attempting to extract taxonomic relationships, their system heuristically bracketed CNs by searching elsewhere in the corpus for subcomponents of the compound.
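The attachment procedure evaluated above can be sketched as follows, reusing the conceptual_association() function and roget mapping assumed in the previous sketch. The tie-breaking preference for the left-branching analysis is an added choice suggested by the bias noted with Table 2, not part of the original description.

    def best_ca(freq, roget, w1, wi):
        # Greatest CA over all category pairs (S, T) with w1 in S and wi in T.
        return max(conceptual_association(freq, s, t)
                   for s in roget[w1] for t in roget[wi])

    def bracket(freq, roget, w1, w2, w3):
        # Return a left- or right-branching analysis of the compound "w1 w2 w3".
        left_score = best_ca(freq, roget, w1, w2)    # w1 modifies w2
        right_score = best_ca(freq, roget, w1, w3)   # w1 modifies the w3-headed NP
        if left_score >= right_score:
            return ((w1, w2), w3)   # e.g. ((coffee, mug), holder)
        return (w1, (w2, w3))       # e.g. (pottery, (coffee, mug))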
Such matching fails to take account of the natural frequency of the words and is likely to require a much larger corpus for accurate results. Unfortunately, they provide no evaluation of the performance afforded by their approach. Future Plans: A more sophisticated noun sequence extraction method should improve the results, providing more and cleaner training data. Also, many sequences had to be discarded because they contained nouns not in the 1911 Roget's. A more comprehensive and consistent thesaurus needs to be used. An investigation of different association schemes is also planned. There are various statistical measures other than mutual information, which have been shown to be more effective in some studies. Association measures can also be devised that allow evidence from several categories to be combined. Compound noun analyses often depend on contextual factors. Any analysis based solely on the static semantics of the nouns in the compound cannot account for these effects. To establish an achievable performance target for context free analysis, an experiment is planned using human subjects, who will be given ambiguous noun compounds and asked to choose attachments for them. Finally, syntactic bracketing is only the first step in interpreting compound nouns. Once an attachment is established, a semantic role needs to be selected as is done in SENS. Given the promising results achieved for syntactic preferences, it seems likely that semantic preferences can also be extracted from corpora. This is the main area of ongoing research within the project. CONCLUSION The current work uses thesaurus category associations gathered from an on-line encyclopedia to make analyses of compound nouns. An initial study of the syntactic disambiguation of 244 compound nouns has shown promising results, with an accuracy of 75%. Several enhancements are planned along with an experiment on human subjects to establish a performance target for systems based on static semantic analyses. The extension to semantic interpretation of compounds is the next step and represents promising unexplored territory for corpus statistics. ACKNOWLEDGMENTS Thanks are due to Robert Dale, Vance Gledhill, Karen Jensen, Mike Johnson and the anonymous reviewers for valuable advice, This work has been supported by an Australian Postgraduate Award and the Microsoft Institute, Sydney. REFERENCES t-nnd~ Don and Mats Rooth (1993) " S ~ Ambiguity and Lexical Relations" Computat/ona/ L/ngu/st/cs Vol. 19(1), Special Issue on Using ~ Corpora I, pp 103-20 Levi, Judith (1978) "Ihe Syntax and Semantics of Complex Nominals" Academic Press, New Y~k. Pustejovsky, James, Sabine B~eI" and ~ Anick (1993) "l.exical Semantic Techniques for Corpus Analysis" Computat/ona/L/ng~ Vol. 19(2), Special Issue on Using Large Coqx~ N, pp 331-58 Resnik, Philip and Mani Hearst (1993) "Structural Ambiguity and Conceptual Relations" Proceedings of the Workshop on Very large Corpora: Academic and lndustdal Perspectives, June 22, OlflO Stale UfflVel~ty, pp 58-64 V ~ Lm'y (1993) "SEN& The System for Evaluafiqg Noun Sequences" in Jensen, Karen, George Heidom and Stephen Richardson (eds) "Natural Language Processing: "l'he PI3qLP Aplxoach", Khwer Academic, pp 161-73 339 | 1994 | 52 |
INTENTIONS AND INFORMATION IN DISCOURSE Nicholas Asher IRIT, Universit4 Paul Sabatier, 118 Route de Narbonne, 31062 Toulouse, CEDEX, France asher@irit, fr Alex Lascarides Department of Linguistics, Stanford University, Stanford, Ca 94305-2150, USA, alex~csli, stanford, edu Abstract This paper is about the flow of inference between com- municative intentions, discourse structure and the do- main during discourse processing. We augment a the- ory of discourse interpretation with a theory of distinct mental attitudes and reasoning about them, in order to provide an account of how the attitudes interact with reasoning about discourse structure. INTRODUCTION The flow of inference between communicative intentions and domain information is often essential to discourse processing. It is well reflected in this discourse from Moore and Pollack (1992): (1)a. George Bush supports big business. b. He's sure to veto House Bill 1711. There are at least three different interpretations. Con- sider Context 1: in this context the interpreter I be- lieves that the author A wants to convince him that (lb) is true. For example, the context is one in which I has already uttered Bush won't veto any more bills. I reasons that A's linguistic behavior was intentional, and therefore that A believes that by saying (la) he will convince I that Bush will veto the bill. Even if I believed nothing about the bill, he now infers it's bad for big business. So we have witnessed an inference from premises that involve the desires and beliefs of A (Moore and Pollack's "intentional structure"), as well as his linguistic behavior, to a conclusion about domain information (Moore and Pollack's "informational struc- ture"). Now consider Context 2: in this context I knows that A wants to convince him of (la). As in Context 1, I may infer that the bill is bad for big business. But now, (lb) is used to support (la). Finally, consider Context 3: in this context I knows that House Bill 1711 is bad for big business, but doesn't know A's communicative desires prior to witnessing his linguistic behaviour. From his beliefs about tile domain, he infers that supporting big business would cause Bush to veto this bill. So, A must. have uttered (la) to support (lb). Hence I realises that A wan~ed him to believe (lb). So in contrast to Contexts 1 and 2, we have a flow of inference from informational structure to intentional structure. This story makes two main points. First, we agree with Moore and Pollack that we must represent both the intentional import and the informational import of a discourse. As they show, this is a problem for current formulations of Rhetorical Structure Theory (RST) (Thompson and Mann, 1987). Second, we go further than Moore and Pollack, and argue that rea- soning about beliefs and desires exploits different rules and axioms from those used to infer rhetorical relations. Thus, we should represent intentional structure and dis- course structure separately. But we postulate rhetorical relations that express the discourse function of the con- stituents in the communicative plan of the author, and we permit interaction between reasoning about rhetor- ical relations and reasoning about beliefs and desires. This paper provides the first steps towards a formal analysis of the interaction between intentional struc- ture and informational structure. Our framework for discourse structure analysis is SDRT (Asher 1993). The basic representational structures of that theory may be used to characterise cognitive states. 
We will extend the logical engine used to infer rhetorical relations--DiCE (Lascarides and Asher 1991, 1993a, 1993b, Lascarides and Oberlander 1993)--to model inferences about in- tentional structure and its interaction with informa- tional structure. BUSH'S REQUIREMENTS We must represent both the intentional import and the informational import of a discourse simultaneously. So we need a theory of discourse structure where dis- course relations central to intentional import and to informational import can hold simultaneously between the same constituents. A logical framework in which all those plausible relations between constituents that are consistent with each other are inferred, such as a non- monotonic logic like that in DICE (Lascarides and Asher, 1993a), would achieve this. So conceivably, a similar nonmonotonic logic for RST might solve the problem of keeping track of the intentional and informational 34 structure simultaneously. But this would work only if the various discourse rela- tions about intentions and information could simultane- ously hold in a consistent knowledge base (KB). Moore and Pollack (1992) show via discourse (2) that the cur- rent commitment to the nucleus-satellite distinction in RST precludes this. (2)a. Let's go home by 5. b. Then we can get to the hardware store before it closes. c. That way we can finish the bookshelves tonight. From an intentional perspective, (2b) is a satellite to (2a) via Motivation. From an informational perspec- tive, (2a) is a satellite to (2b) via Condition. These two structures are incompatible. So augmenting rtsT with a nonmonotonic logic for inferring rhetorical rela- tions would not yield a representation of (2) on multiple levels in which both intentional and informational re- lations are represented. In SDRT, on the other hand, not all discourse relations induce subordination, and so there is more scope for different discourse relations holding simultaneously in a consistent KB. Grosz and Sidner's (1986) model of discourse inter- pretation is one where the same discourse elements are related simultaneously on the informational and inten- tional levels. But using their framework to model (1) is not straightforward. As Grosz and Sidner (1990) point out: "any model (or theory) of the communication sit- uation must distinguish among beliefs and intentions of different agents," but theirs does not. They repre- sent intentional structure as a stack of propositions, and different attitudes aren't distinguished. The informal analysis of (1) above demands such distinctions, how- ever. For example, analysing (1) under Context 3 re- quires a representation of the following statement: since A has provided a reason why (lb) is true, he must want I to believe that (lb) is true. It's unclear how Grosz and Sidner would represent this. SDRT (hsher, 1993) is in a good position to be integrated with a theory of cog- nitive states, because it uses the same basic structures (discourse representation structures or DRSs) that have been used in Discourse Representation Theory (DRT) to represent different attitudes like beliefs and desires (Kamp 1981, Asher 1986, 1987, Kamp 1991, Asher and Singh, 1993). A BRIEF INTRODUCTION TO SDRT AND DICE In SDRT (Asher, 1993), an NL text is represented by a segmented DRS (SDRS), which is a pair of sets contain- ing: the DRSS or SDRSs representing respectively sen- tences and text segments, and discourse relations be- tween them. 
Discourse relations, modelled after those proposed by Hobbs (1985), Polanyi (1985) and Thomp- son and Mann (1987), link together the constituents of an SDRS. We will mention three: Narration, Result and Evidence. • SDRSS have a hierarchical configuration, and SDRT predicts points of attachment in a discourse structure for new information. Using DICE we infer from the reader's knowledge resources which discourse relation should be used to do attachment. Lascarides and Asher (1991) introduce default rules representing the role of Gricean pragmatic maxims and domain knowledge in calculating the value of the up- date function (r, a, fl), which means "the representation fl of the current sentence is to be attached to a with a discourse relation, where a is an open node in the repre- sentation r of the text so far". Defaults are represented by a conditional--¢ > ¢ means 'if ¢, then normally ¢. For example, Narration says that by default Narration relates elements in a text. • Narration: (v, c~,/3) > garration(c~,/3) Associated axioms show how Narration affects the tem- poral order of the events described: Narration and the corresponding temporal axioms on Narration predict that normally the textual order of events matches their temporal order. The logic on which DICE rests is Asher and Mor- reau's (1991) Commonsense Entailment (CE). Two pat- terns of nonmonotonic inference are particularly rele- vant here. The first is Defeasible Modus PontEs: if one default rule has its antecedent verified, then the con- sequent is nonmonotonically inferred. The second is the Penguin Principle: if there are conflicting default rules that apply, and their antecedents are in logical entailment relations, then the consequent of the rule with the most specific antecedent is inferred. Lascarides and Asher (1991) use DICE to yield the discourse struc- tures and temporal structures for simple discourses. But the theory has so far ignored how A's intentional structure--or more accurately, I's model of A's inten- tional structure--influences I's inferences about the do- main and the discourse structure. ADDING INTENTIONS To discuss intentional structure, we develop a language which can express beliefs, intentions and desires. Fob lowing Bratman (forthcoming) and Asher and Singh (1993), we think of the objects of attitudes either as plans or as propositions. For example, the colloquial intention to do something--like wash the dishes--will be expressed as an intention toward a plan, whereas the intention that Sue be happy is an intention toward a proposition. Plans will just consist of sequences of ba- sic actions al; a2;... ;an. Two operators--7~ for about to do or doing, and 7:) for having done--will convert ac- tions into propositions. The attitudes we assume in our model are believes (BA¢ means 'A believes ¢'), wants (WA¢ means 'A wants ¢'), and intends (ZA¢ means 'A intends ¢'). All of this takes place in a modal, dy- namic logic, where the propositional attitudes are sup- plied with a modal semantics. To this we add the modal conditional operator >, upon Which the logic of DICE is 35 based. Let's take a closer look at (1) in Context 1. Let the logical forms of the sentences (la) and (lb) be respec- tively a and/3. In Context 1, I believes that A wants to convince him of/3 and thinks that he doesn't believe already. Following the DRT analysis of attitudes, we assume I's cognitive state has embedded in it a model of A's cognitive state, which in turn has a represen- tation of I's cognitive state. 
So )'VABI/3 and BA~BI/3 hold in I's KB. Furthermore, (v, (~,/3) A Info(c~,/3) holds in I's KB, where Info(a,/3) is a gloss for the seman- tic content of a and /~ that I knows about) I must now reason about what A intended by his particular discourse action. I is thus presented with a classical reasoning problem about attitudes: how to derive what a person believes, from a knowledge of what he wants and an observation of his behaviour. The classic means of constructing such a derivation uses the practical syl- logism, a form of reasoning about action familiar since Aristotle. It expresses the following maxim: Act so as to realize your goals ceteris paribus. The practical syllogism is a rule of defeasible reason- ing, expressible in CE by means of the nonmonotonic consequence relation ~. The consequence relation 0~¢ can be stated directly in the object language of CE by a formula which we abbreviate as ~¢, ¢) (Asher 1993). We use 2_(¢, ¢) to state the practical syllogism. First, we define the notion that the KS and ¢, but not the KB alone, nonmonotonically yield ¢: * Definition: ¢) I(KB A ¢, ¢) ^ I(KB, ¢) The Practical Syllogism says that if (a) A wants ¢ but believes it's not true, and (b) he knows that if g, were added to his KB it would by default make ¢ true even- tually, then by default A intends ¢. * The Practical Syllogism: (a) (WA(¢) A (b) BA(3Cb(¢, evenfually(¢)))) > (c) The Practical Syllogism enables.I to reason about A's cognitive state. In Context 1, when substituting in the Practical Syllogism BI/3 for ¢, and (r, c~,/3) A Info(oq j3) for ¢, we find that clause (a) of the antecedent to the Practical Syllogism is verified. The conclusion (c) is also verified, because I assumes that A's discourse act was intentional. This assumption could be expressed explicitly as a >-rule, but we will not do so here. Now, abduction (i.e., explanatory reasoning) as well as nonmonotonic deduction is permitted on the Prac- tical Syllogism. So from knowing (a) and (c), I can conclude the premise (b). We can state in cE an 'ab- ductive' rule based on the Practical Syllogism: * The hbductive Practical Syllogism I (APSl) (}/~]A(¢) A ~A(~¢) A ~'A(¢)) > BA (:1¢b(¢, evenLually(¢))) 1This doesn't necessarily include that House Bill 1711 is bad for big business. hPsl allows us to conclude (b) when (a) and (c) of the Practical Syllogism hold. So, the intended action ¢ must be one that A believes will eventually make ¢ true. When we make the same substitutions for ¢ and !/' in APSl as before, I will infer the conclusion of APS1 via Defeasible Modus Ponens: BA(J.kb((r, 0~,/3) ^ Info(cq/3), eventually(B1~3))). That is, I infers that A believes that, by uttering what he did, I will come to believe/3. In general, there may be a variety of alternatives that we could use to substitute for ¢ and ¢ in APSl, in a given situation. For usually, there are choices on what can be abduced. The problem of choice is one that Hobbs e~ hi. (1990) address by a complex weighting mechanism. We could adopt this approach here. The Practical Syllogism and APS 1 differ in two impor- tant ways from the DICE axioms concerning discourse relations. First, APS1 is motivated by an abductive line of reasoning on a pattern of defeasible reasoning involving cognitive states. The DICE axioms are not. 
Secondly, both the Practical Syllogism and hPsl don't include the discourse update function (r, c~,/3) together with some information about the semantic content of a and/3 in the antecedent, while this is a standard feature of the DICE axioms for inferring discourse structure. These two differences distinguish reasoning about in- tentional structures and discourse structures. But dis- course structure is linked to intentional structure in the following way. The above reasoning with A's cognitive state has led I to conclusions about the discourse func- tion of ~. Intuitively, a was uttered to support /3, or a 'intentionally supports' /3. This idea of intentional support is defined in DICE as follows: * Intends to Support: Isupport(c~, fl) ~-* (WA(B,~3) A BA(-~13,~) A BA (~bh((r, ~,/3)hInfo(~,/3), even*ually( B1/3) ) ) ) In words, a intentionally supports ]3 if and only if A wants I to believe /3 and doesn't think he does so al- ready, and he also believes that by uttering a and /3 together, so that I is forced to reason about how they should be attached with a rhetorical relation, I will come to believe/3. Isupport(a,/3) defines a relationship between a and/3 at the discourse structural level, in terms of I's and A's cognitive states. With it we infer further information about the particular discourse relation that I should use to attach /3 to c~. Isupport(ot,/3) provides the link between reasoning about cognitive states and reasoning about discourse structure. Let us now return to the interpretation of (1) under Context 1. I concludes Isupport(o~,/3), because the right hand side of the *-*-condition in Intends to Support is satisfied. So I passes from a problem of reasoning about A's intentional structure to one of reasoning about dis- course structure. Now, I should check to see whether o" actually does lead him to believe/3. This is a check on the coherence of discourse; in order for an SDRS r to 36 be coherent, the discourse relations predicated of the constituents must be satisfiable. 2 Here, this amounts to justifying A's belief that given the discourse context and I's background beliefs of which A is aware, I will arrive at the desired conclusion--that he believes ft. So, I must be able to infer a particular discourse relation R between a and fl that has what we will call the Belief Property: (Bin A R(a, fl)) > /~1fl. That is, R must be a relation that would indeed license I's concluding fl from a. We concentrate here for illustrative purposes on two discourse relations with the Belief Property: Result(a, fl) and Evidence(a, fl); or in other words, a results in fl, or a is evidence for ft. * Relations with the Belief Property: (B,c~ A Evidence(a, fl)) > ~.~I~ (t31a ^ Result(a, fl)) > &fl The following axiom of Cooperation captures the above reasoning on I's part: if a Isupports fl, then it must be possible to infer from the semantic content, that either Result(a, fl) or Evidence(a, fl) hold: • Cooperation : (:l&.b((r, a, fl) A [nfo(a, fl), Resull(a, fl))V ~b((r, a, fl) A Info(a, fl), Evidence(a, fl))) The intentional structure of A that I has inferred has restricted the candidate set of discourse relations that I can use to attach fl to a: he must use Result or Evi- dence, or both. If I can't accommodate A's intentions by doing this, then the discourse will be incoherent. We'll shortly show how Cooperation contributes to the explanation of why (3) is incoherent. (3)a. George Bush is a weak-willed president. b. ?He's sure to veto House Bill 1711. 
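To make the chain of defaults in Context 1 easier to follow, the toy Python sketch below flattens it into plain propositional forward chaining. This is a drastic simplification for exposition only: the atom names and rule encodings are invented, and nothing here reproduces the modal, nonmonotonic machinery of Commonsense Entailment, Defeasible Modus Ponens or the Penguin Principle.

    # Abductive Practical Syllogism: A's goal, A's belief that it is unmet,
    # and the observed utterance together yield an Isupport conclusion.
    # Cooperation: Isupport then restricts attachment to relations with the
    # Belief Property, here Result or Evidence.
    RULES = [
        ({"W_A(B_I beta)", "B_A(not B_I beta)", "utter(alpha, beta)"},
         "Isupport(alpha, beta)"),
        ({"Isupport(alpha, beta)"},
         "attach_with(Result or Evidence)"),
    ]

    def chain(facts):
        # Apply the rules until no new conclusions are added.
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for antecedent, consequent in RULES:
                if antecedent <= facts and consequent not in facts:
                    facts.add(consequent)
                    changed = True
        return facts

    context1 = {"W_A(B_I beta)", "B_A(not B_I beta)", "utter(alpha, beta)"}
    derived = chain(context1)
    # derived now contains Isupport(alpha, beta) and the Belief-Property constraint.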
FROM INTENTIONS TO INFORMATION: CONTEXTS 1 AND 2 The axioms above allow I to use his knowledge of A's cognitive state, and the behaviour of A that he observes, to (a) infer information about A's communicative inten- tions, and (b) consequently to restrict the set of candi- date discourse relations that are permitted between the constituents. According to Cooperation, I must infer that one of the permitted discourse relations does in- deed hold. When clue words are lacking, the semantic content of the constituents must be exploited. In cer- tain cases, it's also necessary to infer further informa- tion that wasn't explicitly mentioned in the discourse, 2Asher (1993) discusses this point in relation to Con- trast: the discourse marker butis used coherently only if the semantic content of the constituents it connects do indeed form a contrast: compare Mary's hair is black but her eyes are blue, with ?Mary's hair is black but John's hair i.~ black. in order to sanction the discourse relation. For exam- ple, in (1) in Contexts 1 and 2, I infers the bill is bad for big business. Consider again discourse (1) in Context 1. Intu- itively, the reason we can infer Result(a, fl) in the anal- ysis of (1) is because (i) a entails a generic (Bush vetoes bills that are bad for big business), and (ii) this generic makes fl true, as long as we assume that House Bill 1711 is bad for big business. To define the Result Rule below that captures this reasoning for discourse attachment, we first define this generic-instance relationship: instance(e, ¢) holds just in case ¢ is (Vx)(A(x) > B(x)) and ¢ is A[x/a~AB[x/a~. For example, bird(tweety) Afly(tweety) (Tweety is a bird and Tweety flies) is an instance of Vx(bird(x) > fly(x)) (Birds fly). The Result Rule says that if (a) fl is to be attached to a, and a was intended to support fl, and (b) a entails a generic, of which fl and 6 form an instance, and (c) 6 is consistent with what A and I believe, 3 then normally, 6 and Result(a, fl) are inferred. • The Result Rule: (a) ((r, a, fl) A Isupport(a, fl)A (b) ~b^T(a, ¢)^ ~b^~^~(fl, ¢) ^ instance(e, ¢)^ (c) co,sistent(KBi U ~BA U 6)) > (Res.tt(a, fl) ^ 6) The Result Rule does two things. First, it allows us to infer one discourse relation (Result) from those permit- ted by Cooperation. Second, it allows us to infer a new piece of information 6, in virtue of which Result(a, fl) is true. We might want further constraints on 6 than that in (c); we might add that 6 shouldn't violate expectations generated by the text. But note that the Result Rule doesn't choose between different tfs that verify clauses (b) and (c). As we've mentioned, the theory needs to be extended to deal with the problem of choice, and it may be necessary to adopt strategies for choosing among alternatives, which take factors other than logi- cal structure into account. We have a similar rule for inferring Evidence(fl, a) ("fl is evidence for a"). The Evidence rule resembles the Result Rule, except that the textual order of the discourse constituents, and the direction of intentional support changes: * The Evidence Rule: (a) (if, a, fl) ^ Isuppo~t(fl, a)^ (b) ~,b^,(a, ¢)^ ~b^~^~(~, ~) ^ instance(e, ~)^ (c) consistent(Ks~ UKSA U6)) > (E, idence(Z, a) ^ 6) We have seen that clause (a) of the Result Rule is sat- isfied in the analysis of (1) in Context 1. Now, let 6 be the proposition that the House Bill 1711 is bad for big 3Or, more accurately, ~i must be consistent with what I himself believes, and what he believes that A believes. 
In other words, KBA is I'$ model of A's KB. 37 business (written as bad(1711)). This is consistent with KBI U KBA, and so clause (c) is satisfied. Clause (b) is also satisfied, because (i) a entails Bush vetoes bills that are bad for big business--i.e., :l~B^r(a, ¢) holds, where ¢ is Vx((bill(x) A bad(z)) > veto(bush, x)); (it) fl ^/i is bill(1711) A veto(bush, 1711) A bad(1711); and so (iii) instance(¢,fl A/i) and IKB^T^~(fl, fl A 6) both hold. So, when interpreting (1) in Context 1, two rules ap- ply: Narration and the Result Rule. But the consequent of Narration already conflicts with what is known; that the discourse relation between a and fl must satisfy the Belief Property. So the consequent of the Result Rule is inferred: /i (i.e., House Bill 1711 is bad for big business) and Result(a, fl) .4 These rules show how (1) can make the knowledge that the house bill is bad for big business moot.; one does not need to know that the house bill is bad for big business prior to attempting discourse attachment. One can infer it at the time when discourse attachment is attempted. Now suppose that we start from different premises, as provided by Context 2: BABIfl, BA~BI a and )/VABIa. That is, I thinks A believes that I believes Bush will veto the bill, and I also thinks that A wants to con- vince him that Bush supports big business. Then the 'intentional' line of reasoning yields different re- sults from the same observed behaviour--A's utter- ance of (1). Using APSl again, but substituting Bia for ¢ instead of B1fl, I concludes BA(I-kb((r,a,fl) A I fo(a, fl), eve t any(B a)). So Is vVo t (fl, a) holds. Now the antecedent to Cooperation is verified, and so in the monotonic component of cE, we infer that a and fl must be connected by a discourse relation R' such that (B1fl A R'(a, fl)) > Bla. As before, tiffs restricts the set of permitted discourse relations for attaching /? to a. But unlike before, the textual order of a and fl, and their direction of intentional support mismatch. The rule that applies this time is the Evidence Rule. Consequently, a different discourse relation is inferred, although the same information/i--that House Bill 1711 is bad for big business--supports the discourse relation, and is also be inferred. In contrast, the antecedents of the Result and Evi- dence Rules aren't verified in (3). Assuming I knows about the legislative process, he knows that if George Bush is a weak willed president, then normally, he won't veto bills. Consequently, there is no /i that is consis- tent with his KB, and sanctions the Evidence or Resull relation. Since I cannot infer which of the permitted discourse relations holds, and so by contraposing the axiom Cooperation, a doesn't Isupport ft. And so I has failed to conclude what A intended by his discourse ac- tion. It can no longer be a belief that it will eventually 4We could have a similar rule to the Result Rule for inferring Evidence(a, fl) in this discourse context too. SGiven the new KB, the antecedent of APSl would no longer be verified if we substituted ¢ with Blfl. lead to I believing fl, because otherwise Isupport(a, fl) would be true via the rule Intends To Support. Conse- quently, I cannot infer what discourse relation to use in attachment, yielding incoherence. FROM INFORMATION TO INTENTIONS: CONTEXT 3 Consider the interpretation of (1) in Context 3: I has no knowledge of A's communicative intentions prior to witnessing his linguistic behaviour, but he does know that the House Bill 1711 is bad for big business. 
I has sufficient information about the semantic content of a and fl to infer Result(a, fl), via a rule given in Lascarides and Asher (1991): • Result (if, a, fl) ^ fl)) > ResetS(a, fl) Resull(a, fl) has the Belief Property, and I reasons that from believing a, he will now come to believe ft. Having used the information structure to infer discourse struc- ture, I must now come to some conclusions about A's cognitive state. Now suppose that BABIa is in I's KS. Then the following principle of Charity allows I to assume that A was aware that I would come to believe fl too, through doing the discourse attachment he did: • Charity: BI¢ > BABI¢ This is because I has inferred Result(a, fl), and since Result has the belief property, I will come to believe fl through believing a; so substituting fl for ¢ in Charity, BAI3Ifl will become part of I's KB via Defeasible Modus Ponens. So, the following is now part of I's KB: BA( [-kb((V, a, fl) ^ Info(a, fl)), eventually(Blfl)). Fur- thermore, the assumption that A's discourse behaviour was intentional again yields the following as part of I's Km 7A((V, a, fl) A Info(a, fl)). So, substituting BIfl and (r, a, fl) A Info(a, fl) respectively for ¢ and ¢ into the Practical Syllogism, we find that clause (b) of the premises, and the conclusion are verified. Explanatory reasoning on the Practical Syllogism this time permits us to infer clause (a): A's communicative goals were to convince I of fl, as required. The inferential mechanisms going from discourse structure to intentional structure are much less well understood. One needs to be able to make some sup- positions about the beliefs of A before one can infer anything about his desires to communicate, and this requires a general theory of commonsense belief attri- bution on tile basis of beliefs that one has. IMPERATIVES AND PLAN UPDATES The revision of intentional structures exploits modes of speech other than the assertoric. For instance, consider another discourse from Moore and Pollack (1992): 38 (2)a. Let's go home by 5. b. Then we can get to the hardware store before it closes. c. That way we can finish the bookshelves tonight. Here, one exploits how the imperative mode affects reasoning about intentions. Sincere Ordering captures the intuition that ifA orders a, then normally he wants a to be true; and Wanting and Doing captures the in- tuition that if A wants a to be true, and doesn't think that it's impossible to bring a about, then by default he intends to ensure that c~ is brought about, either by doing it himself, or getting someone else to do it (cf. Cohen and Levesque, 1990a). * Sincere Ordering: > • Wanting and Doing: (~VA~ A ~BA~eventually(7~)) > ZA(~) These rules about A's intentional structure help us analyse (2). Let the logical forms of (2a-c) be respec- tively or, /3 and 7- Suppose that we have inferred by the linguistic clues that Result(o~,13) holds. That is, the action a (i.e., going home by 5pro), results in /3 (i.e., the ability to go to the hardware store before it closes). Since (~ is an imperative, Defeasible Modus Po- nens on Sincere Ordering yields the inference that )/VA c~ is true. Now let us assume that the interpreter I be- lieves that the author A doesn't believe that c~'s being brought about is impossible. Then we may use Defea- sible Modus Ponens again on Wanting and Doing, to infer ZA(Tia). Just how the interpreter comes to the belief, that the author believes c~ is possible, is a com- plex matter. 
More than likely, we would have to encode within the extension of DiCE we have made, principles that are familiar from autoepistemic reasoning. We will postpone this exercise, however, for another time. Now, to connect intentions and plans with discourse structure, we propose a rule that takes an author's use of a particular discourse structure to be prima facie evidence that the author has a particular intention. The rule Plan Apprehension below, states that if ~ is a plan that A intends to do, or get someone else to do, and he states that 6 is possible as a Result of this action c~, then the interpreter may normally take the author A to imply that he intends 6 as well. • Plan Apprehension: (nesult(~, t3) A ZA(~) A/3 = can(6)) > ZA(r-(~; 6)) We call this rule Plan Apprehension, to make clear that it furnishes one way for the interpreter of a verbal mes- sage, to form an idea of the author's intentions, on the basis of that message's discourse structure. Plan Apprehension uses discourse structure to at- tribute complex plans to A. And when attaching/3 to ~, having inferred Result(a, 13), this rule's antecedent is verified, and so we infer that 6--which in this case is to go to the hardware store before it closes--as part of A's plan, which he intends to bring about, either himself, or by getting another agent to do it. Now, we process 7- That way in 3' invokes an anaphoric reference to a complex plan. By the acces- sibility constraints in SDRT, its antecedent must [a; 6], because this is the only plan in the accessible discourse context. So 7 must be the DKS below: as a result of do- ing this plan, finishing the bookshelves (which we have labelled e) is possible: (7)Result([a; Now, substituting [c~; ~] and e for a and fl into the Plan Apprehension Rule, we find that the antecedent to this rule is verified again, and so its consequent is non- monotonically inferred: Za(T~(a; 6; e)). Again, I has used discourse structure to attribute plans to A. Moore and Pollack (1992) also discuss one of I's pos- sible responses to (2): (4)We don't need to go to the hardware store. I borrowed a saw from Jane. Why does I respond with (4)? I has inferred the ex- istence of the plan [~r; 6; el via Plan Apprehension; so he takes the overall goal of A to be e (to finish the book- shelves this evening). Intuitively, he fills in A's plan with the reason why going to the hardware store is a subgoal: I needs a saw. So A's plan is augmented with another subgoal ~, where ~ is to buy a saw, as follows: Za(7~.[c~;6;~;e]). But since ~ holds, he says this and assumes that this means that A does not have to do c~ and 6 to achieve ~. To think about this formally, we need to not only reason about intentions but also how agents update their intentions or revise them when pre- sented with new information. Asher and Koons (1993) argue that the following schema captures part of the logic which underlies updating intentions: • VpdateZa(n[al;... ; Z)(al;... ; aS) In other words, if you're updating your intentions to do actions al to ~,, and al to c U are already done, then the new intentions are to do otj+t to an, and you no longer intend to do al to aj. The question is now: how does this interact with dis- course structure? I is attempting to be helpful to A; he is trying to help realize A's goal. We need axioms to model this. 
Some key tools for doing this have been developed in the past couple of decades -- belief revision, intention and plan revision -- and the long-term aim would be to enable formal theories of discourse structure to interact with these formal theories of attitudes and attitude revision. But since a clear understanding of how intentions are revised is yet to emerge, any speculation on the revision of intentions in a particular discourse context seems premature.

CONCLUSIONS AND FURTHER WORK

We have argued that it is important to separate reasoning about mental states from reasoning about discourse structure, and we have suggested how to integrate a formal theory of discourse attachment with commonsense reasoning about the discourse participants' cognitive states and actions.

We exploited a classic principle of commonsense reasoning about action, the Practical Syllogism, to model I's inferences about A's cognitive state during discourse processing. We also showed how axioms could be defined so as to enable information to mediate between the domain, discourse structure and communicative intentions.

Reasoning about intentional structure took a different form from reasoning about discourse attachment, in that explanatory reasoning or abduction was permitted for the former but not the latter (but cf. Hobbs et al., 1990). This, we argued, was a principled reason for maintaining separate representations of intentional structure and discourse structure, but preserving close links between them via axioms like Cooperation. Cooperation enabled I to use A's communicative intentions to reason about discourse relations.

This paper provides an analysis of only very simple discourses, and we realise that although we have introduced distinctions among the attitudes, which we have exploited during discourse processing, this is only a small part of the story. Though DICE has used domain-specific information to infer discourse relations, the rules relate domain structure to discourse structure in at best an indirect way. Implicitly, the use of the discourse update function ⟨τ, α, β⟩ in the DICE rules reflects the intuitively obvious fact that domain information is filtered through the cognitive state of A. To make this explicit, the discourse community should integrate work on speech acts and attitudes (Perrault 1990, Cohen and Levesque 1990a, 1990b) with theories of discourse structure. In future work, we will investigate discourses where other axioms linking the different attitudes and discourse structure are important.

REFERENCES

Asher, Nicholas (1986) Belief in Discourse Representation Theory, Journal of Philosophical Logic, 15, 127-189.

Asher, Nicholas (1987) A Typology for Attitude Verbs, Linguistics and Philosophy, 10, 125-197.

Asher, Nicholas (1993) Reference to Abstract Objects in Discourse, Kluwer Academic Publishers, Dordrecht, Holland.

Asher, Nicholas and Koons, Robert (1993) The Revision of Beliefs and Intentions in a Changing World, in Proceedings of the AAAI Spring Symposium Series: Reasoning about Mental States: Formal Theories and Applications.

Asher, Nicholas and Morreau, Michael (1991) Common Sense Entailment: A Modal Theory of Nonmonotonic Reasoning, in Proceedings of the 12th International Joint Conference on Artificial Intelligence, Sydney, Australia, August 1991.

Asher, Nicholas and Singh, Munindar (1993) A Logic of Intentions and Beliefs, Journal of Philosophical Logic, 22(5), 513-544.
Bratman, Michael (forthcoming) Intentions, Plans and Practical Reason, Harvard University Press, Cam- bridge, Mass. Cohen, Phillip R. and Levesque, Hector J. (1990a) Persistence, Intention, and Commitment, In Philip R. Cohen, Jerry Morgan and Martha E. Pollack (editors) Intentions in Communication, pp33-69. Cambridge, Massachusetts: Bradford/MIT Press. Cohen, Phillip R. and Levesque, Hector J. (1990b) Rational Interaction and the Basis for Communica- tion, In Philip R. Cohen, Jerry Morgan and Martha E. Pollack (editors) Intentions in Communication, pp221- 256. Cambridge, Massachusetts: Bradford/MIT Press. Grosz, Barbara J. and Sidner, Candice L. (1986) Attention, Intentions and the Structure of Discourse. Computational Linguistics, 12, 175-204. Grosz, Barbara J. and Sidner, Candice L. (1990) Plans for Discourse. In Philip R. Cohen, Jerry Morgan and Martha E. Pollack (editors) Intentions in Com- munication, pp417-444. Cambridge, Massachusetts: Bradford/MIT Press. Hobbs, Jerry R. (1985) On the Coherence and Struc- ture of Discourse. Report No: CSLI-85-37, Center for the Study of Language and Information, October 1985. Kamp, tlans (1981) A Theory of Truth and Semantic Representation, in Groenendijk, J. A. G., Janssen, T. M. V., and Stokhof, M. B. J. (eds.) Formal Methods in the Study of Language, 277-332. Kamp, Hans (1991) Procedural and Cognitive As- pects of Propositional Attitude Contexts, Lecture Notes from the Third European Summer School in Language, Logic and Information, Saarbriicken, Germany. Lascarides, Alex and Asher, Nicholas (1991) Dis- course Relations and Defeasible Knowledge, in Proceed- ings of the °o9th Annual Meeting of Computational Lin- guistics, 55-63, Berkeley California, USA, June 1991. Lascarides, Alex and Asher, Nicholas (1993a) Tempo- ral Interpretation, Discourse Relations and Common- sense Entailment, in Linguistics and Philosophy, 16, pp437-493. Lascarides, Alex and Asher, Nicholas (1993b) A Se- mantics and Pragmatics for the Pluperfect, in Pro- ceedings of the European Chapter of the Association for Computational Linguistics (EACL93), pp250-259, Utrecht, The Netherlands. Lascarides, Alex, Asher, Nicholas and Oberlander, Jon (1992) Inferring Discourse Relations in Context, in Proceedings of the 30th Annual Meeting of the Asso- 40 ciation of Computational Linguistics, ppl-8, Delaware USA, June 1992. Lascarides, Alex and Oberlander, Jon (1993) Tempo- ral Connectives in a Discourse Context, in Proceedings of the European Chapter of the Association for Com- putational Linguistics (EACL93), pp260-268, Utrecht, The Netherlands. Moore, Johanna and Pollack, Martha (1992) A Prob- lem for RST: The Need for Multi-Level Discourse Anal- ysis Computational Linguistics, 18 4, pp537-544. Perrault, C. Ray (1990) An Application of Default Logic to Speech Act Theory, in Philip R. Cohen, Jerry Morgan and Martha E. Pollack (editors) h~ten- tions in Communication, pp161-185. Cambridge, Mas- sachusetts: Bradford/MIT Press. Polanyi, Livia (1985) A Theory of Discourse Struc- ture and Discourse Coherence, in Eilfor, W. It., Kroe- bet, P. D., and Peterson, K. L., (eds), Papers from the General Session a the Twenty-First Regional Meeting of the Chicago Linguistics Society, Chicago, April 25-27, 1985. Thompson, Sandra and Mann, William (1987) Rhetorical Structure Theory: A Framework for the Analysis of Texts. In IPRA Papers in Pragrnatics, 1, 79-105. 41 | 1994 | 6 |
GENERATING PRECONDITION EXPRESSIONS IN INSTRUCTIONAL TEXT Keith Vander Linden ITRI, University of Brighton Lewes Road Brighton, BN2 4AT UK Internet: [email protected] Abstract This study employs a knowledge intensive corpus analysis to identify the elements of the commu- nicative context which can be used to determine the appropriate lexical and grammatical form of instructional texts. IMAGENE, an instructional text generation system based on this analysis: is presented, particularly with reference to its ex- pression of precondition relations. INTRODUCTION Technical writers routinely employ a range of forms of expression for precondition expressions in instructional text. These forms are not randomly chosen from a pool of forms that say "basically the same thing" but are rather systematicaUy used based on elements of the communicative context. Consider the following expressions of various kinds of procedural conditions taken from a corpus of in- structional text: (la) If light flashes red, insert credit card again. (Airfone, 1991) l (lb) When the 7010 is installed and the battery has charged for twelve hours, move the OFF/STBY/TALK [8] switch to STBY. (Code-a-phone, 19891) (lc) The BATTERY LOW INDICATOR will light when the battery in the handset i~ low. (Excursion, 1989) (ld) Return the OFF/STBY/TALK switch to STBY a/ter your call. (Code-a-phone, 1989) (le) 1. Make sule the handset and base antennas are fully extended. 2. Set the OFF/STBY/TALK SWITCH to Talk. (Excltrsion, 1989) As can be seen here, procedural conditions may be expressed using a number of alternative l In this paper, a reference wiU be added to the end of all examples that have come directly from the corpus, indicating the ma~uual from which they were taken. lexical and grammatical forms. They may occur either before or after the expression of their related action (referred to here as the issue of slot), and may be linked with a variety of conjunctions or prepositions (the issue of linker). Further, they may be expressed in a number of grammatical forms, either as actions or as the relevant state brought about by such actions (called here the ter- minating condition). Finally, they may or may not be combined into a single sentence with the ex- pression of their related action (the issue of clause combining). Text generation systems nmst not only be ca- pable of producing these forms but must also know when to produce them. The study described here has employed a detailed corpus analysis to address these issues of choice and has implemented the re- sults of this study in IMAGENE, an architecture for instructional text generation. CORPUS ANALYSIS The corpus developed for this study contains ap- proximately 1000 clauses (6000 words) of instruc- tions taken from 17 different sources, including in- struction booklets, recipes, and auto-repair man- uals. It contains 98 precondition expressions, where the notion of precondition has been taken from Rhetorical Structure Theory (Mann and Thompson, 1988): and in particular from RSsner and Stede's modified relation called Precondition (1992). This relation is a simple amalgam of the standard RST relations Circumstance and Condi- tion and has proven useful in analyzing various kinds of conditions and circumstances that fre- quently arise in instructions. The analysis involves addressing two related issues: 1. Determining the range of expressional forms commonly used by instructional text writers; 2. Determining the precise comnnmicative context in which each of these forms is used. 
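One way to picture the second step is as a co-occurrence tally between hypothesised context features and the observed lexical and grammatical forms, with a hypothesis surviving only if the covariation holds up across the corpus. The sketch below is purely illustrative: the corpus rows and feature names are invented, and the actual analysis reported here was carried out by hand over the 98 precondition examples.

```python
from collections import Counter

# Invented mini-corpus: each row pairs an observed linker with a
# hypothesised context feature (here, whether the condition is
# highly probable in context).
corpus = [
    {"linker": "when", "probable": True},
    {"linker": "if",   "probable": False},
    {"linker": "when", "probable": True},
    {"linker": "if",   "probable": False},
]

tally = Counter((ex["probable"], ex["linker"]) for ex in corpus)
for (probable, linker), n in sorted(tally.items()):
    print(f"probable={probable!s:5}  linker={linker:4}  count={n}")
# A hypothesis is kept only if the form varies systematically with the feature.
```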
42 Text Level lnquirh's IMAGENE System Network -] Sentence Builder "1 PENMAN I I Figure 1: The Architecture of IMAGENE Instructhmal Text Determining the range of forms was a matter of cataloging the fl~rms that ~mcurred in the cor- pus. Example (1) shows exemplars of the major forms found, which include present tense action expressions (la), agentless passives (lb), relational expressions of resultant states (lc): phrasal forms, (ld), and separated iml,erative forms (le). Determining the functional context in whidl each ~,f the forms is used inw~lves identifying corre- lations between the contextual features of commu- nicative context on the -he hand, and the lexical and grammatical form on the other. I focus here on the range of lexical and gralnmatical forms cor- responding to the precondition expressions in the corpus. The analyst begins by identifying a fea- ture of the communicative context that appears to correlate with the variation of some aspect of tlle lexical and grammatical forms. They then at- tempt to validate the hypothesis by referring to the examples in the corpus. These two phases are repeated until a good match is achieved or until a relevant hypothesis cannot be found. IMAGENE The analysis has resulted in a number of identified covariations which have been coded in the Sys- tem Network formalism from Systemic-Functional Linguistics (Halliday, 1976) and included in the IMAGENE architecture. The system network is basically a derision network where each choice point distinguishes between alternate features of the communicative context. It has been used ex- tensively in Systemic Linguistics to address both sentence-level and text-level issues. Such networks are traversed based on the appropriate features of the comnmnicative context, and as a side-effect, of this traversal, linguistic structures are con- structed by realization ~statemcnts which are as- sociated with each feature of the network. These statements allow several types of manipulation of the evolving text structure, including the insertion of text structure nodes, grammatical marking of the nodes, textual ordering, and clause combin- ing. Currently, the network is traversed manually; the data structures and code necessary to auto- matically navigate the structure have not been im- plemented. This has allowed me to focus on the contextual distinctions that need to be made and on their lexical and grammatical consequences. The general architecture of IMAGENE, as de- picted in Figure 1. consists of a System Network and a Sentence Building routine, and is built on top of the Penman text generation system (Mann, 1985). It transforms inputs (shown on the left) into instructional text (shown on the right). The following sections will detail the results of the analysis for precondition expressions. It should be noted that they will include intuitive motivations for the distinctions made in the sys- tem network. This is entirely motivational; the de- terminations made by the systems are based solely on the results of the corpus analysis. PRECONDITION SLOT In the corpus, preconditions are typically fronted, and therefore the sub-network devoted to precon- dition expression will default to fronting. There are four exceptions to this default which are illus- trated here: (2a) The BATTER?*" LOW INDICATOR will light when the battery is the handset is low. 
(Excursion, 1989)

[Figure 2: The Precondition Slot Selection Network]

(2b) Return the OFF/STBY/TALK switch to STBY after your call. (Code-a-phone, 1989)

(2c) The phone will ring only if the handset is on the base. (Code-a-phone, 1989)

(2d) In the STBY (standby) position, the phone will ring whether the handset is on the base or in another location. (Code-a-phone, 1989)

The slot selection for example (2a) could go either way, except that it is the first sentence in a section titled "Battery Low Indicator", making the discussion of this indicator the local topic of conversation, and thus the appropriate theme of the sentence. This distinction is made in the portion of the system network shown in Figure 2. This sub-network has a single system which distinguishes between preconditions associated with actions referring to thematic material and those associated with non-thematic material. The realization statement, Nucleus>Precond, indicates that the main action associated with the condition (called the nucleus in RST terminology) is to be placed before the precondition itself. The slot determinations for the remainder of example (2) are embedded in system networks shown later in this paper.

Example (2b) is an example of what I call rhetorical demotion. The action is considered obvious and is thus demoted to phrase status and put at the end of its immediately following action. Examples (2c) and (2d) show preconditions that are not fronted because of the syntax used to express the logical nature of the precondition. In (2c), the condition is expressed as an exclusive condition, which is never fronted. One could perhaps say "?? Only if the handset is on the base will the phone ring,"2 but this form is never used in the corpus. Neither is the condition form in (2d) ever fronted in the corpus.

2The "??" notation is used to denote a possible form of expression that is not typically found in the corpus; it does not indicate ungrammaticality.
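The slot decision just described can be read as a small decision procedure over context features. The following is a sketch paraphrasing the analysis, not IMAGENE's actual system-network code; the feature names are my own labels for the distinctions discussed above and in later sections.

```python
def precondition_slot(ctx):
    """Return 'before' if the precondition should be fronted, 'after'
    otherwise.  ctx is a dict of boolean context features mirroring the
    system-network choices."""
    if ctx.get("action_refers_to_local_topic"):   # e.g. example (2a)
        return "after"        # realization statement: Nucleus > Precond
    if ctx.get("rhetorically_demoted"):           # obvious action, (2b)
        return "after"        # demoted to a trailing phrase
    if ctx.get("exclusive_condition"):            # "only if", (2c)
        return "after"
    if ctx.get("alternative_condition"):          # "whether ... or", (2d)
        return "after"
    return "before"           # default: preconditions are fronted
```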
The occurrence of the dial tone in example (3a) is part of a sequence of actions and is conditional in that it nlay not actually happen, say if the telephone sys- tem is malflmctioning in some way, but is ntmethe- less highly probable. Precond-Nominal is en- tered immediately after Precond-When when- ever the precondition is being stated as a nom- inalization. It overwrites the linker choice with ':after" in only this case. Preconditions that the user is expected to be able to change if necessary and which come at the beginning of sections that contain sequences of prescribed actions are called Change.able pre- conditions. Example (3c) is such a case. Here, the reader is expected to check the antennas and ex- tend them if they are not already extended. This 3In the figure, the bold-italic con(htious attached to the front of these systems denote conditions that hold on entry (e.g., ConditionM-Action is a condition trite on the entry of Condition-Probability), They axe nec- essary because the networks shown are only portions of a much larger network. 44 Conditional-Action Probable Not-Probable Changeable Mark(make-sure) Changeable- Procedural-Sequence and Not-Concurrent and (Obvious or Not.Coordinate) Not-Changeable PrecoItd>Nucleus Precond -When Mark(when) Nominal.Available J Simplex Complexity Complex Precond-Nomin',d Mark(after) Exclusivity Alternativeness Exclusive Mark(only-i39 Nucleus> Precond Not-Exclusive Mark(~ Alternatives Mark(whether- or-nol) Not- Alternatives Mark( iJ9 Figure 3: The Precondition Linker Selection Network type of precondition is marked as a "Make sure" imperative clause by Changeable-Type. PRECONDITION FORM As noted above, preconditions can be expressed as either a terminating condition or as an action. The choice between the two is made by the form selection sub-networks: shown in figures 4 and 5. This choice depends largely upon the type of ac- tion on which the precondition is based. The ac- tions in the corpus can be divided into five cate- gories which affect the grammatical form of pre- c(mdition expressions: • Monitor Actions; • Giving Actions; • Placing Actions; • Habitual Decisions; • Other Actions. The first four actions are special categories of actions that have varying act and terminating condition forms of expression. The last category, other actions: encompasses all actions not falling into the previous four categories. The sub-network which distinguishes these forms is shown in figure 4. This section wiU discuss each category in turn: starting with the following examples of the first four action types: (4a) Listen for dial tone, then dial AREA CODE + NUMBER slowly (Airfone, 1991) (4b) If you have touch-tone service: move the TONE/PULSE SWITCH to the Tone position. (Excursion, 1989) (4c) The phone will ring only if the handset is on the base. (Code-a-phone, 1989) (4d) /f you leave the OFF/STBY/TALK [8] switch in TALK: move the switch to PULSE: and tap FLASH [6] the next time you lift the handset; to return to PULSE dialing mode. (Code-a-phone, 1989) Monitor actions, as shown in example (4a), concern explicit commands to monitor conditions in the environment. 
In this case, readers are being commanded to listen for a dial tone: with the un- derlying assumption that they will not continue on 45 Previous- Act-Type Monitor MarI~pr~$era) ~a~im~) ( Procedural-Giving ] Made(present) Giving " ~ M~*4ha*ing) Primitive-Giving Made(is-required) Habitual-Decision Mark(present) Mark{act) ( Procedural-Placing ] Made(present) Made(locative) Placing ~ Primitive-Placing Made(locative) Other Figure 4: The Precondition Form Selection Net- work with the instructions unless one is heard. Giving and Placing actions, however, tend to be expressed as terminating conditions, as shown in (4b) and (4c). The corpus does not include active forms of these actions: such as "?? If the phone com- pany has given you touch-tone service, do ..." or "?? Do ... if you have placed the handset on the base." An Habitual decision is a decision to make a practice of performing some action or of per- forming an action in some way. When stated as preconditions, they take the present tense form in (4d). Taken in context, this expression refers not to a singular action of leaving the OFF/STBY/- TALK switch in TALK position; but rather to the decision to habitually leave it in such a state. The singular event would be expressed as "If you have left the OFF/STBY/TALK switch in TALK, do ..." which means something quite different from the expression in (4d) which is stated in present tense. The bulk of the preconditions in the corpus (70.4%) are based on other types of actions. These types are distinguished in figure 5. In general, the Other Effective Action systems are based on the actor of the action. Reader actions are expressed either as present tense passives or as present tense actions, depending upon whether the action has been mentioned before or not. These distinctions are made by the gates Repeated-Reader and Not-Repeated-Reader. An example of the for- mer can be found in (5a), (':When the 7010 is in- stalled"). In the corpus, such expressions of ac- tions already detailed in the previous text take the present tense, agentless passive form. If the reader action is not a repeated mention, a simple present tense active form is used, as in example (5b). (5a) When the 7010 is installed and the battery has charged for twelve hours, move the OFF/STBY/TALK [8] switch to STBY. (Code-a-phone, 1989) (5b) /f you make a dialing error, or want to make another call immediately, FLASH gives you new dial tone without moving the OFF/STBY/TALK switch. (Code-a-phone, 1989) The Act-Hide system and its descendants are entered for non-obvious, non-reader actions. There are four basic forms for these precondition expressions, examples of which are shown here: (6a) If light flashes red, insert credit card again (Aiffone, 1991) (6b) When you hear dial tone, dial the number on the Dialpad [4]. (Code-a-phone, 1989) (6c) The BATTERY LOW INDICATOR will light when the battery in the handset is low. (Excursion, 1989) (6d) When instructed (approx. 10 sec.) remove phone by firmly grasping top of handset and pulling out. (Airfone, 1991) Act-Hide distinguishes actions which are overly complex or long duration and those that are not. Those which are not will be expressed either as present tense actions, as the one in ex- ample (6a), if the action form is available in the lexico-grammar. Active-Available makes this determination. If no action form is available, then Inception-Status is entered. 
If the inception of the action is expected to have been witnessed by the reader, then the present tense sensing action form is used, as shown in example (6b). Termination-Availability is entered either if the action is to be hidden or if the inception of the action was not expected to be experienced by the reader. In these cases, the relational form of the terminating condition is used if it is available. An example of this is shown in example (6c). The long duration action of the battery draining is not expressed in the relational form used there. If the relational form is not available, the present tense, agentless passive is specified, as shown in example (6d). Finally, if an action being expressed as a pre- condition is considered obvious to the reader, the nominalization is used, provided its nominalized form is available in the lexicon. Example (ld) is an example of such an expression. 46 Not.Obvious-Action and Reader-Action / / Repeated-Reader Mark(present) Mark(pc~*'sive ) Not-Repeated-Reader Mark(present ) Mark(act) Not-Obvious-Action and Non-Reader.Action Hid* Active- Not-Hide Hide Available Mark(acO Mark(present) i ] Experienced ] Mark(sena'ing) Mark(present) Not-Available Stat-~ 1 Not-Experienced-~ Termination. Availability Figure 5: The Other Effective Actions Selection Network Available Mark(relational) Mark(present) Not-Available Mark(passive) Mark(present) VERIFYING IMAGENE'S PRESCRIPTIONS This study has been based primarily on an analysis of a small subset of the fitll corpus: namely on the instructions for a set of three cordless telephone manuals. This training set constitutes appro~- mately 35% of the 1000 clause corpus. The results of this analysis were implemented in IMAGENE and tested by manually re-running the system network for all of the precondition expressions in the train- ing set. These tests were performed without the Penman realization component engaged: compar- ing the text structure output by the system net- work with the structure inherent in the corpus text. A sample of such a text structure: showing IMAGENE:s output when run on the actions ex- pressed in the text in example (7)., is shown in fig- ure 6. The general structure of this figure is reflec- tive of the underlying RST structure of the text. The nodes of the structure are fitrther marked with all the lexical and grammatical information rele- vant to the issues addressed here. (7) Wh, en the 7010 i.~ installed and the battery has charged for twelve hours; move the OFF/STBY/TALK [8] switch to STBY. The 7010 is now ready to use. Fully extend the base antenna [12]. Extend the handset antenna [1] for telephone conversations. (Code-a-phone, 1989) Statistics were kept on how well IMAGENE:s text structure output matched the expressions in the corpus with respect to the four lexical and grammatical issues considered here (i.e.: slot: form; linker: and clause combining). In the ex- ample structure, all of the action expressions are specified correctly except for the Charge action (the second clause). This action is marked as a present tense passive, and occurs in the corpus in present perfect form. In fi|ll realization mode: IMAGENE translates the text structure into sentence generation com- mands for the Penman generation system: produc- ing the following output for example (7): (8) PiOten the phone is installed, and the battery is charged, move the OFF/STBY/TALK switch to the STBY position. The phone is now ~eady to use. Extend the base antenna. Extend the handset antenna for phone convez:~ation. 
As just mentioned, this text identical to the original with respect to the four lexical and gram- matical issues addressed in the corpus study with 47 *IG-Text* I I I n I I Ready-to-use J New_~ Extend-Hands t Converse Precondition Form: Relational /Sentence Form: Imper. Form: Nominal Tense: Present ~New New- Linker: For Sentence • Move . Continue- Form: Imper. Sentence Sentence Install Charge Jontinue- Form: Passive Form: Passiv~.1" Sentence Linker: When Linker: And Tense: Present Tense: Present ",,._.f Continue- Sentence Figure 6: A Sample Text Structure the exception of the second clause. There are other differences: however; having to do with issues not addressed in the study; such as referring expres- sions and the expression of manner. A corpus study of these issues is yet to he performed. The overall results are shown in table 7 (see Vander Linden, 1993b for the results concerning other rhetorical relations). This (:hart indicates the percentage of the prec.ndition examples for which IMAGENE:s predic:tions matched the c(~rpus for each of the four lexical and grammatical issues considered. The values for the training and testing sets are differentiated. The training set results indicate that there are patterns of expression in cordless telephone manuals that can he identified and implemented. The system's predictions were als. tested on a separate and m(~re diverse portion ,,f the cor- pus which includes instructions for different types of devices and processes. This additional testing serves both to disallow over-fitting of the data in the training portion: and to give a measure of how far beyond the telephone domain the predictions can legitimately he applied. As (::an be seen in fig- ure 7; the testing set results were not as good as those for the training set. hut were still well above random guesses. 100 90 80 70 60 50 40 Preconditions 30 20 10 0 [] Training Set [] Testing Set Figure 7: The Accuracy of IMAGENE's Realizations for Precondition Expressions 48 CONCLUSIONS This study has employed a knowledge intensive corpus analysis to identify the elements of the communicative context which can be used to de- termine the appropriate lexical and grammatical form of precondition expressions in instructional texts. The methodology provides a principled means for cataloging the use of lefical and gram- matical forms in particular registers, and is thus critical for any text generation project. The cur- rent study of precondition expressions in instruc- tions can be seen as providing the sort of register specific data required for some current approaches to register-based text generation (Bateman and Paris. 1991). The methodology is designed to identify co- variation between elements of the communicative context on the one hand and grammatical form on the other. Such covariations, however: do not constitute proof that the technical writer actu- ally considers those elements during the genera- tion process; nor that the prescribed form is ac- tually more effective than any other. Proof of ei- ther of these issues would require psycholinguistic testing. This work provides detailed prescriptions concerning how such testing could be performed: i.e.: what forms should be tested and what con- texts controlled for: but does not actually perform them (cf. Vander Linden: 1993a). The analysis was carried out by hand (with the help of a relational database): and as such was tedious and limited in size. The prospect of au- tomation: however: is not a promising one at this point. 
While it might be possible to automati- call)' parse the grammatical and lexical forms: it remains unclear how to automate the determina- tion of the complex semantic and pragmatic fea- tures relevant to choice in generation. It might be possible to use automated learning procedures (Quinlan: 1986) to construct the system networks~ but this assumes that one is given the set of rele- vant features to start with. Future work on this project will include at- tempts to automate parts of the process to facili- tate the use of larger corlmra, and the implemen- tation of the data structures and code necessary to automate the inquiry process. ACKNOWLEDGMENTS This work was done in conjunction with Jim Mar- tin and Susanna Cumming whose help is grate- fitlly acknowledged. It was supported by the National Science Foundation under Contract No. IRI-9109859. REFERENCES Airfone (1991). Inflight Entertainment ~ In]orma- tion Guide. United Airlines. Bateman: J. A. and Paris: C. L. (1991). Con- straining the development of lexicogrammati- cal resources during text generation: towards a computational instantiation of register the- ory. In Ventola, E.: editor: Functional and Systemic Linguistics Approaches and Uses: pages 81 106. Mouton: Amsterdam. Selected papers from the 16th International Systemics Congress: Helsinki: 1989. Code-a-phone (1989). Code-A-Phone Owner's Guide. Code-A-Phone Corporation; P.O. Box 5678, Portland, OR 97228. Excursion (1989). Excursion 3100. Northwestern Bell Phones. A USWest Company. Halliday, M. A. K. (1976). System and Function in Language. Oxford University Press, London. edited by G. R. Kress. Mann. W. C. (1985). An introduction to the Nigel text generation grammar. In Benson, J. D.; Freedle; R. O.: and Greaves, W. S.: edi- tors, Systemic Perspectives on Discourse, vol- ume 1; pages 84 95. Ablex. Mann, W. C. and Thompson, S. A. (1988). Rhetori- cal structure theory: Toward a fimctional the- ory of text organization. Text: An Interdisci- plinary Journal for the Study of Text, 8(2). Quinlan: J. R. (1986). Induction of decision trees. Machine Learning; 1:81 106. RSsner, D. and Stede; M. (1992). Customizing RST for the automatic production of techni- cal manuals. In Dale: R., Hovy: E., RSesner: D., and Stock, O., editors, Aspects of Au- tomated Natural Language Generation; Lec- ture Notes in Artificial Intelligence 587: pages 199 214. Springer Verlag: Berlin. Proceedings of the 6th International Workshop on Natu- ral Language Generation, Trento, Italy; April, 1992. Vander Linden~ K. (1993a). Generating effective in- structions. In Proceedings of the Fifteenth An- nual Conference of the Cognitive Science So- ciety, June 18 21, Boulder: CO, pages 1023 1028. Vander Linden: K. (1993b). Speahnng of Actions: Choosing Rhetorical Status and Grammatical Form in Instructional Text Generation. PhD thesis, University of Colorado. Available as Technical Report CU-CS-654-93. 49 | 1994 | 7 |
COMMON TOPICS AND COHERENT SITUATIONS: INTERPRETING ELLIPSIS IN THE CONTEXT OF DISCOURSE INFERENCE Andrew Kehler Harvard University Aiken Computation Laboratory 33 Oxford Street Cambridge, MA 02138 [email protected] Abstract It is claimed that a variety of facts concerning ellip- sis, event reference, and interclausal coherence can be explained by two features of the linguistic form in ques- tion: (1) whether the form leaves behind an empty constituent in the syntax, and (2) whether the form is anaphoric in the semantics. It is proposed that these features interact with one of two types of discourse in- ference, namely Common Topic inference and Coherent Situation inference. The differing ways in which these types of inference utilize syntactic and semantic repre- sentations predicts phenomena for which it is otherwise difficult to account. Introduction Ellipsis is pervasive in natural language, and hence has received much attention within both computational and theoretical linguistics. However, the conditions under which a representation of an utterance may serve as a suitable basis for interpreting subsequent elliptical forms remain poorly understood; specifically, past at- tempts to characterize these processes within a single traditional module of language processing (e.g., consid- ering either syntax, semantics, or discourse in isolation) have failed to account for all of the data. In this paper, we claim that a variety of facts concerning ellipsis res- olution, event reference, and interclausal coherence can be explained by the interaction between the syntactic and semantic properties of the form in question and the type of discourse inference operative in establishing the coherence of the antecedent and elided clauses. In the next section, we introduce the facts concerning gapping, VP-ellipsis, and non-elliptical event reference that we seek to explain. In Section 3, we categorize elliptical and event referential forms according to two features: (1) whether the expression leaves behind an empty constituent in the syntax, and (2) whether the expression is anaphoric in the semantics. In Section 4 we describe two types of discourse inference, namely Common Topic inference and Coherent Situation in- ference, and make a specific proposal concerning the interface between these and the syntactic and seman- tic representations they utilize. In Section 5, we show how this proposal accounts for the data presented in Section 2. We contrast the account with relevant past work in Section 6, and conclude in Section 7. Ellipsis and Interclausal Coherence It has been noted in previous work that the felicity of certain forms of ellipsis is dependent on the type of co- herence relationship extant between the antecedent and elided clauses (Levin and Prince, 1982; Kehler, 1993b). In this section we review the relevant facts for two such forms of ellipsis, namely gapping and VP-ellipsis, and also compare these with facts concerning non-elliptical event reference. Gapping is characterized by an antecedent sentence (henceforth called the source sentence) and the elision of all but two constituents (and in limited circumstances, more than two constituents) in one or more subsequent target sentences, as exemplified in sentence (1): (1) Bill became upset, and Hillary angry. 
We are concerned here with a particular fact about gap- ping noticed by Levin and Prince (1982), namely that gapping is acceptable only with the purely conjunc- tive symmetric meaning of and conjoining the clauses, and not with its causal asymmetric meaning (para- phraseable by "and as a result"). That is, while either of sentences (1) or (2) can have the purely conjunctive reading, only sentence (2) can be understood to mean that Hillary's becoming angry was caused by or came as a result of Bill's becoming upset. (2) Bill became upset, and Hillary became angry. This can be seen by embedding each of these examples in a context that reinforces one of the meanings. For instance, gapping is felicitous in passage (3), where con- text supports the symmetric reading, but is infelicitous in passage (4) under the intended causal meaning of and. 1 1This behavior is not limited to the conjunction and; a similar distinction holds between symmetric and asymmet- ric uses of or and but. See Kehler (1994) for further discus- sion. 50 (3) The Clintons want to get the national debate fo- cussed on health care, and are getting annoyed because the media is preoccupied with Whitewa- ter. When a reporter recently asked a Whitewater question at a health care rally, Bill became upset, and Hillary became/0 angry. (4) Hillary has been getting annoyed at Bill for his in- ability to deflect controversy and do damage con- trol. She has repeatedly told him that the way to deal with Whitewater is to play it down and not to overreact. When a reporter recently asked a Whitewater question at a health care rally, Bill became upset, and (as a result) Hillary became/# angry. The common stipulation within the literature stating that gapping applies to coordinate structures and not to subordinate ones does not account for why any co- ordinated cases are unacceptable. VP-ellipsis is characterized by an initial source sen- tence, and a subsequent target sentence with a bare auxiliary indicating the elision of a verb phrase: (5) Bill became upset, and Hillary did too. The distribution of VP-ellipsis has also been shown to be sensitive to the coherence relationship extant be- tween the source and target clauses, but in a differ- ent respect. In a previous paper (Kehler, 1993b), five contexts for VP-ellipsis were examined to determine whether the representations retrieved are syntactic or semantic in nature. Evidence was given that VP-ellipsis copies syntactic representations in what was termed parallelconstructions (predicting the unacceptability of the voice mismatch in example (6) and nominalized source in example (8)), but copies semantic represen- tations in non-parallel constructions (predicting the ac- ceptability of the voice mismatch in example (7) and the nominalized source in example (9)): 2 (6) # The decision was reversed by the FBI, and the ICC did too. [ reverse the decision ] (7) In March, four fireworks manufacturers asked that the decision be reversed, and on Monday the ICC did. [ reverse the decision ] (8) # This letter provoked a response from Bush, and Clinton did too. [ respond ] (9) This letter was meant to provoke a response from Clinton, and so he did. [ respond ] These examples are analogous with the gapping cases in that constraints against mismatches of syntactic form hold for the symmetric (i.e., parallel) use of and in examples (6) and (8), but not the asymmetric (i.e., non-parallel) meaning in examples (7) and (9). In 2These examples have been taken or adapted from Kehler (1993b). 
The phrases shown in brackets indicate the elided material under the intended interpretation.

In fact, it appears that gapping is felicitous in those constructions where VP-ellipsis requires a syntactic antecedent, whereas gapping is infelicitous in cases where VP-ellipsis requires only a suitable semantic antecedent. Past approaches to VP-ellipsis that operate within a single module of language processing fail to make the distinctions necessary to account for these differences.

Sag and Hankamer (1984) note that while elliptical sentences such as (6) are unacceptable because of a voice mismatch, similar examples with non-elided event referential forms such as do it are much more acceptable:

(10) The decision was reversed by the FBI, and the ICC did it too. [ reverse the decision ]

An adequate theory of ellipsis and event reference must account for this distinction.

In sum, the felicity of both gapping and VP-ellipsis appears to be dependent on the type of coherence relation extant between the source and target clauses. Pronominal event reference, on the other hand, appears not to display this dependence. We seek to account for these facts in the sections that follow.

Syntax and Semantics of Ellipsis and Event Reference

In this section we characterize the forms being addressed in terms of two features: (1) whether the form leaves behind an empty constituent in the syntax, and (2) whether the form is anaphoric in the semantics. In subsequent sections, we show how the distinct mechanisms for recovering these types of missing information interact with two types of discourse inference to predict the phenomena noted in the previous section.

We illustrate the relevant syntactic and semantic properties of these forms using the version of Categorial Semantics described in Pereira (1990). In the Montagovian tradition, semantic representations are compositionally generated in correspondence with the constituent modification relationships manifest in the syntax; predicates are curried. Traces are associated with assumptions which are subsequently discharged by a suitable construction. Figure 1 shows the representations for the sentence Bill became upset; this will serve as the initial source clause representation for the examples that follow.3

[Figure 1: Syntactic and Semantic Representations for Bill became upset -- each node of the parse tree is paired with its semantics, e.g. S: become'(upset')(Bill'), NP: Bill', VP: become'(upset'), V: become', AP: upset'.]

For our analysis of gapping, we follow Sag (1976) in hypothesizing that a post-surface-structure level of syntactic representation is used as the basis for interpretation. In source clauses of gapping constructions, constituents in the source that are parallel to the overt constituents in the target are abstracted out of the clause representation.4 For simplicity, we will assume that this abstraction is achieved by fronting the constituents in the post-surface-structure, although nothing much hinges on this; our analysis is compatible with several possible mechanisms. The syntactic and semantic representations for the source clause of example (1) after fronting are shown in Figure 2; the fronting leaves trace assumptions behind that are discharged when combined with their antecedents.

3We will ignore the tense of the predicates for ease of exposition.

4It has been noted that in gapping constructions, contrastive accent is generally placed on parallel elements in both the target and the source clauses, and that abstracting these elements results in an "open proposition" that both clauses share (Sag, 1976; Prince, 1986; Steedman, 1990). This proposition needs to be presupposed (or accommodated) for the gapping to be felicitous; for instance, it would be infelicitous to open a conversation with a sentence such as (1), whereas it is perfectly felicitous in response to the question How did the Clintons react?. Gapping resolution can be characterized as the restoration of this open proposition in the gapped clause.
[Figure 2: Syntactic and Semantic Representations for Bill became upset after fronting -- the fronted NP (Bill) and AP (upset) leave behind trace assumptions inside the embedded sentence.]

Target clauses in gapping constructions are therefore represented with the overt constituents fronted out of an elided sentence node; for instance, the representation of the target clause in example (1) is shown in Figure 3.

[Figure 3: Syntactic and Semantic Representations for Hillary angry -- the fronted NP (Hillary) and AP (angry) are attached to an empty embedded sentence node.]

The empty constituent is reconstructed by copying the embedded sentence from the source to the target clause, along with parallel trace assumptions which are to be bound within the target. The semantics for this embedded sentence is the open proposition that the two clauses share. This semantics, we claim, can only be recovered by copying the syntax, as gapping does not result in an independently anaphoric expression in the semantics.5 In fact, as can be seen from Figure 3, before copying takes place there is no sentence-level semantics for gapped clauses at all.

Like gapping, VP-ellipsis results in an empty constituent in the syntax, in this case a verb phrase. However, unlike gapping, VP-ellipsis also results in an independently anaphoric form in the semantics.6 Figure 4 shows the representations for the clause Hillary did (the anaphoric expression is indicated by P).

[Figure 4: Syntactic and Semantic Representations for Hillary did -- the auxiliary did contributes λQ.Q and the elided VP is represented by the anaphoric property P, giving the sentence semantics P(Hillary').]

5This claim is supported by well-established facts suggesting that gapping does not pattern with standard forms of anaphora. For instance, unlike VP-ellipsis and overt pronouns, gapping cannot be cataphoric, and can only obtain its antecedent from the immediately preceding clause.

6Unlike gapping, VP-ellipsis patterns with other types of anaphora; for instance, it can be cataphoric and can locate antecedents from clauses other than the most immediate one.

Given the representation in Figure 1 as the source, the semantics for the missing VP may be recovered in one of two ways. The syntactic VP could be copied down with its corresponding semantics, from which the semantics for the complete sentence can be derived. In this case, the anaphoric expression is constrained to have the same semantics as the copied constituent. Alternatively, the anaphoric expression could be resolved purely semantically, resulting in the discharge of the anaphoric assumption P. The higher-order unification method developed by Dalrymple et al. (1991) could be used for this purpose; in this case the sentence-level semantics is recovered without copying any syntactic representations.
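These two recovery routes can be sketched side by side. The clause representation below is deliberately toy-like, and hou_stub merely stands in for the higher-order unification step of Dalrymple et al. (1991); none of these names come from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Clause:
    subject: str
    vp_syntax: Optional[str]          # e.g. "reverse the decision"; None if elided
    vp_semantics: Optional[Callable]  # property of the subject; None if elided

def hou_stub(source):
    """Trivial stand-in for higher-order unification: solve
    P(source.subject) = source clause semantics for the property P."""
    return source.vp_semantics

def resolve_vp_ellipsis(target, source, inference):
    """Sketch of the two recovery routes described above."""
    if inference == "common-topic":
        # Route 1: reconstruct the missing VP node syntactically; the
        # anaphoric property is then constrained to its semantics.
        if source.vp_syntax is None:
            raise ValueError("no syntactic VP antecedent to copy")
        target.vp_syntax = source.vp_syntax
        P = source.vp_semantics
    else:
        # Route 2 (Coherent Situation): resolve P purely semantically,
        # leaving the empty VP node unreconstructed.
        P = hou_stub(source)
    return P(target.subject)

src = Clause("the FBI", "reverse the decision",
             lambda x: f"reverse(the_decision, {x})")
tgt = Clause("the ICC", None, None)
print(resolve_vp_ellipsis(tgt, src, "coherent-situation"))
```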
Event referential forms such as do it, do tha~, and do so constitute full verb phrases in the syntax. It has been often noted (Halliday and Hasan, 1976, inter alia) that it is the main verb do that is operative in these forms of anaphora, in contrast to the auxiliary do operative in VP-ellipsis/ It is the pronoun in event referential forms that is anaphoric; the fact that the pronouns refer to events results from the type constraints imposed by the main verb do. Therefore, such forms are anaphoric in the semantics, but do not leave behind an empty constituent in the syntax. To summarize this section, we have characterized the forms being addressed according to two features, a sum- mary of which appears in Table 1. Whereas anaphoric Form Empty Node Anaphoric [[ in Syntax in Semantics II Gapping ~/ VP-Ellipsis ~/ V / Event Reference ~/ Table l: Common Topic Relations forms in the semantics for these forms are indepen- dently resolved, empty syntactic constituents in and of themselves are not anaphoric, and thus may only be restored when some independently-motivated process necessitates it. In the section that follows we outline two types of discourse inference, one of which requires such copying of empty constituents. Discourse Inference To be coherent, utterances within a discourse segment require more than is embodied in their individual syn- tactic and semantic representations alone; additional rFor instance, other auxiliaries can appear in elided forms but cannot be followed by it, tt, at, or so as in ex- ample (11), and a pronominal object to the main verb do cannot refer to a state as VP-ellipsis can as in example (12). (11) George was going to the golf course and Bill was •/(# it)/(# that)/(# so) too. (12) Bill dislikes George and Hillary does fl/(# it)/(# that)/(# so) too. inter-utterance constraints must be met. Here we de- scribe two types of inference used to enforce the con- straints that are imposed by coherence relations. In each case, arguments to coherence relations take the form of semantic representations retrieved by way of their corresponding node(s) in the syntax; the oper- ations performed on these representations are dictated by the nature of the constraints imposed. The two types of inference are distinguished by the level in the syntax from which these arguments are retrieved, s Common Topic Inference Understanding segments of utterances standing in a Common Topic relation requires the determination of points of commonality (parallelism) and departure (contrast) between sets of corresponding entities and properties within the utterances. This process is reliant on performing comparison and generalization opera- tions on the corresponding representations (Scha and Polanyi, 1988; Hobbs, 1990; Priist, 1992; Asher, 1993). Table 2 sketches definitions for some Common Topic relations, some taken from and others adapted from Hobbs (1990). In each case, the hearer is to understand the relation by inferring po(al,..., a,) from sentence So and inferring p1(bl, ..., bn) from sentence $1 under the listed constraints. 9 In order to meet these constraints, the identification of p0 and Pl may require arbitrary lev- els of generalization from the relations explicitly stated in the utterances. Examples of these relations are given in sentences (13a-d). (13) a. John organized rallies for Clinton, and Fred distributed pamphlets for him. (Parallel) b. John supported Clinton, but Mary supported Bush. (Contrast) c. 
Young aspiring politicians usually support their party's presidential candidate. For instance, John campaigned hard for Clinton in 1992. (Exemplification)

d. A young aspiring politician was arrested in Texas today. John Smith, 34, was nabbed in a Houston law firm while attempting to embezzle funds for his campaign. (Elaboration)

Passage (13a), for instance, is coherent under the understanding that John and Fred have a common property, namely having done something to support Clinton. Passage (13c) is likewise coherent by virtue of the inferences resulting from identifying parallel elements and properties, including that John is a young aspiring politician and that he's a Democrat (since Clinton is identified with his party's candidate). The characteristic that Common Topic relations share is that they require the identification of parallel entities (i.e., the ai and bi) and relations (p0 and p1) as arguments to the constraints. We posit that the syntactic representation is used both to guide the identification of parallel elements and to retrieve their semantic representations.

8Hobbs (1990), following Hume (1748), suggests a classification of coherence relations into three broad categories, namely Resemblance, Cause or Effect, and Contiguity (Hume's terminology). Here, Resemblance relations appear to pattern well with those employing our Common Topic inference, and likewise Cause or Effect and Contiguity with our Coherent Situation inference.

9Following Hobbs, by ai and bi being similar we mean that for some salient property qi, qi(ai) and qi(bi) holds. Likewise, by dissimilar we mean that for some qi, qi(ai) and ¬qi(bi) holds.

Relation         Constraints                                         Conjunctions
Parallel         p0 = p1; ai and bi are similar                      and
Contrast         (1) p0 = ¬p1; ai and bi are similar, or             but
                 (2) p0 = p1; ai and bi are dissimilar for some i
Exemplification  p0 = p1; bi ∈ ai or bi ⊂ ai                         for example
Elaboration      p0 = p1; ai = bi                                    in other words

Table 2: Common Topic Relations

Coherent Situation Inference

Understanding utterances standing in a Coherent Situation relation requires that hearers convince themselves that the utterances describe a coherent situation given their knowledge of the world. This process requires that a path of inference be established between the situations (i.e., events or states) described in the participating utterances as a whole, without regard to any constraints on parallelism between sub-sentential constituents. Four such relations are summarized in Table 3.10 In all four cases, the hearer is to infer A from sentence S1 and B from sentence S2 under the constraint that the presuppositions listed be abduced (Hobbs et al., 1993):11

Relation              Presupposition   Conjunctions
Result                A → B            and (as a result), therefore
Explanation           B → A            because
Violated Expectation  A → ¬B           but
Denial of Preventer   B → ¬A           even though, despite

Table 3: Coherent Situation Relations

10These relations are what Hume might have termed Cause or Effect.

11We are using implication in a very loose sense here, as if to mean "could plausibly follow from".

Examples of these relations are given in sentences (14a-d).

(14) a. Bill is a politician, and therefore he's dishonest. (Result)

b. Bill is dishonest because he's a politician. (Explanation)

c. Bill is a politician, but he's honest. (Violated Expectation)

d. Bill is honest, even though he's a politician.
(Denial of Preventer) Beyond what is asserted by the two clauses individually, understanding each of these sentences requires the pre- supposition that being a politician implies being dishon- est. Inferring this is only reliant on the sentential-level semantics for the clauses as a whole; there are no p, ai, or bi to be independently identified. The same is true for what Hume called Contiguity relations (perhaps in- eluding Hobbs' Occasion and Figure-ground relations); for the purpose of this paper we will consider these as weaker cases of Cause or Effect. To reiterate the crucial observation, Common Topic inference utilizes the syntactic structure in identify- ing the semantics for the sub-sentential constituents to serve as arguments to the coherence constraints. In contrast, Coherent Situation inference utilizes only the sentential-level semantic forms as is required for ab- ducing a coherent situation. The question then arises as to what happens when constituents in the syntax for an utterance are empty. Given that the discourse inference mechanisms retrieve semantic forms through nodes in the syntax, this syntax will have to be recov- ered when a node being accessed is missing. Therefore, we posit that missing constituents are recovered as a by-product of Common Topic inference, to allow the parallel properties and entities serving as arguments to the coherence relation to be accessed from within the re- constructed structure. On the other hand, such copying is not triggered in Coherent Situation inference, since the arguments are retrieved only from the top-level sen- tence node, which is always present. In the next section, we show how this difference accounts for the data given in Section 2. Applying the Analysis In previous sections, we have classified several ellip- tical and event referential forms as to whether they leave behind an empty constituent in the syntax and whether they are anaphoric in the semantics. Empty. constituents in the syntax are not in themselves refer- ential, but are recovered during Common Topic infer- 54 ence. Anaphoric expressions in the semantics are inde- pendently referential and are resolved through purely semantic means regardless of the type of discourse in- ference. In this section we show how the phenomena presented in Section 2 follow from these properties. Local Ellipsis Recall from Section 2 that gapping constructions such as (15) are only felicitous with the symmetric (i.e., Common Topic) meaning of and: (15) Bill became upset, and Hillary angry. This fact is predicted by our account in the following way. In the case of Common Topic constructions, the missing sentence in the target will be copied from the source, the sentential semantics may be derived, and the arguments to the coherence relations can be identified and reasoning carried out, predicting felicity. In the case of Coherent Situation relations, no such recovery of the syntax takes place. Since a gapped clause in and of itself has no sentence-level semantics, the gapping fails to be felicitous in these cases. This account also explains similar differences in fe- licity for other coordinating conjunctions as discussed in Kehler (1994), as well as why gapping is infelicitous in constructions with subordinating conjunctions indi- cating Coherent Situation relations, as exemplified in (16). (16) # Bill became upset, { because } even though Hillary angry. 
despite the fact that The stripping construction is similar to gapping ex- cept that there is only one bare constituent in the tar- get (also generally receiving contrastive accent); unlike VP-ellipsis there is no stranded auxiliary. We therefore might predict that stripping is also acceptable in Com- mon Topic constructions but not in Coherent Situation constructions, which appears to be the case: 12 (17) Bill became upset, but not # and (as a result) # because Hillary. # even though # despite the fact that In summary, gapping and related constructions are infelicitous in those cases where Coherent Situation in- ference is employed, as there is no mechanism for re- covering the sentential semantics of the elided clause. 12Stripping is also possible in comparative deletion con- structions. A comprehensive analysis of stripping, pseudo- gapping, and VP-ellipsis in such cases requires an articula- tion of a syntax and semantics for these constructions, which will be carried out in future work. VP-Ellipsis Recall from Section 2 that only in Coherent Situation constructions can VP-ellipsis obtain purely semantic antecedents without regard to constraints on structural parallelism, as exemplified by the voice mismatches in sentences (18) and (19). (18) # The decision was reversed by the FBI, and the ICC did too. [ reverse the decision ] (19) In March, four fireworks manufacturers asked that the decision be reversed, and on Monday the ICC did. [ reverse the decision ] These facts are also predicted by our account. In the case of Common Topic constructions, a suitable syn- tactic antecedent must be reconstructed at the site of the empty VP node, with the result that the anaphoric expression takes on its accompanying semantics. There- fore, VP-ellipsis is predicted to require a suitable syn- tactic antecedent in these scenarios. In Coherent Sit- uation constructions, the empty VP node is not re- constructed. In these cases the anaphoric expression is resolved on purely semantic grounds; therefore VP- ellipsis is only constrained to having a suitable semantic antecedent. The analysis accounts for the range of data given in Kehler (1993b), although one point of departure exists between that account and the current one with respect to clauses conjoined with but. In the previous account these cases are all classified as non-parallel, resulting in the prediction that they only require semantic source representations. In our analysis, we expect cases of pure contrast to pattern with the parallel class since these are Common Topic constructions; this is opposed to the vi- olated expectation use of but which indicates a Coherent Situation relation. The current account makes the cor- rect predictions; examples (20) and (21), where but has the contrast meaning, appear to be markedly less ac- ceptable than examples (22) and (23), where but has the violated expectation meaning: 13 (20) ?? Clinton was introduced by John, but Mary didn't. [ introduce Clinton ] (21) ?? This letter provoked a response from Bush, but Clinton didn't. [ respond ] (22) Clinton was to have been introduced by someone, but obviously nobody did. [ introduce Clinton ] (23) This letter deserves a response, but before you do, ... [ respond ] To summarize thus far, the data presented in the ear- lier account as well as examples that conflict with that analysis are all predicted by the account given here. As a final note, we consider the interaction between VP-ellipsis and gapping. 
The following pair of examples are adapted from those of Sag (1976, pg. 291): lZThese examples have been adapted from several in Kehler (1993b). 55 (24) :Iohn supports Clinton, and Mary $ Bush, al- though she doesn't know why she does. (25) ?? John supports Clinton, and Mary 0 Bush, and Fred does too. Sag defines an alphabeiic variance condition that cor- rectly predicts that sentence (25) is infelicitous, but in- correctly predicts that sentence (24) is also. Sag then suggests a weakening of his condition, with the result that both of the above examples are incorrectly pre- dicted to be acceptable; he doesn't consider a solution predicting the judgements as stated. The felicity of sentence (24) and the infelicity of sen- tence (25) are exactly what our account predicts. In example (25), the third clause is in a Common Topic relationship with the second (as well as the first) and therefore requires that the VP be reconstructed at the target site. However, the VP is not in a suitable form, as the object has been abstracted out of it (yielding a trace assumption). Therefore, the subsequent VP- ellipsis fails to be felicitous. In contrast, the conjunc- tion alfhough used before the third clause in example (24) indicates a Coherent Situation relation. Therefore, the VP in the third clause need not be reconstructed, and the subsequent semantically-based resolution of the anaphoric form succeeds. Thus, the apparent paradox between examples (24) and (25) is just what we would expect. Event Reference Recall that Sag and Hankamer (1984) note that whereas elliptical sentences such as (26a) are unacceptable due to a voice mismatch, similar examples with event ref- erential forms are much more acceptable as exemplified by sentence (26b): 14 (26) a. # The decision was reversed by the FBI, and the ICC did too. [ reverse the decision ] b. The decision was reversed by the FBI, and the ICC did it too. [ reverse the decision ] As stated earlier, forms such as do it are anaphoric, but leave no empty constituents in the syntax. Therefore, it follows under the present account that such reference is successful without regard to the type of discourse inference employed. Relationship to Past Work The literature on ellipsis and event reference is volumi- nous, and so we will not attempt a comprehensive com- parison here. Instead, we briefly compare the current work to three previous studies that explicitly tie ellipsis 14Sag and Hankamer claim that all such cases of VP- ellipsis require syntactic antecedents, whereas we suggest that in Coherent Situation relations VP-eUipsis operates more like their Model-Interpretive Anaphora, of which do it is an example. resolution to an account of discourse structure and co- herence, namely our previous account (Kehler, 1993b) and the accounts of Priist (1992) and Asher (1993). In Kehler (1993b), we presented an analysis of VP- ellipsis that distinguished between two types of rela- tionship between clauses, parallel and non-parallel. An architecture was presented whereby utterances were parsed into propositional representations which were subsequently integrated into a discourse model. It was posited that VP-ellipsis could access either proposi- tional or discourse model representations: in the case of parallel constructions, the source resided in the propo- sitional representation; in the case of non-parallel con- structions, the source had been integrated into the dis- course model. 
In Kehler (1994), we showed how this architecture also accounted for the facts that Levin and Prince noted about gapping. The current work improves upon that analysis in sev- eral respects. First, it no longer needs to be posited that syntactic representations disappear when inte- grated into the discourse model; 15 instead, syntactic and semantic representations co-exist. Second, various issues with regard to the interpretation of propositional representations are now rendered moot. Third, there is no longer a dichotomy with respect to the level of repre- sentation from which VP-ellipsis locates and copies an- tecedents. Instead, two distinct factors have been sepa- rated out: the resolution of missing constituents under Common Topic inference is purely syntactic whereas the resolution of anaphoric expressions in all cases is purely semantic; the apparent dichotomy in VP-ellipsis data arises out of the interaction between these different phenomena. Finally, the current approach more read- ily scales up to more complex cases. For instance, it was not clear in the previous account how non-parallel constructions embedded within parallel constructions would be handled, as in sentences (27a-b): (27) a. Clinton was introduced by John because Mary had refused to, and Gore was too. [ introduced by John because Mary had refused to ] b. # Clinton was introduced by John because Mary had refused to, and Fred did too. [ in- troduced Clinton because Mary had refused to ] The current approach accounts for these cases. The works of Priist (1992) and Asher (1993) pro- vide analyses of VP-ellipsis 16 in the context of an account of discourse structure and coherence. With l~This claim could be dispensed with in the treatment of VP-eUipsis, perhaps at the cost of some degree of the- oretical inelegance. However, this aspect was crucial for handling the gapping data, since the infelicity of gapping in non-parallel constructions hinged on there no longer being a propositional representation available as a source. 16In addition, Prfist addresses gapping, and Asher ad- dresses event reference. 56 Priist utilizing a mixed representation (called syntac- tic/semantic structures) and Asher utilizing Discourse Representation Theory constructs, each defines mecha- nisms for determining relations such as parallelism and contrast, and gives constraints on resolving VP-ellipsis and related forms within their more general frame- works. However, each essentially follows Sag in requir- ing that elided VP representations be alphabetic vari- ants of their referents. This constraint rules out cases where VP-ellipsis obtains syntactically mismatched an- tecedents, such as example (19) and other non-parallel cases given in Kehler (1993b). It also appears that nei- ther approach can account for the infelicity of mixed gapping/VP-ellipsis cases such as sentence (25). Conclusion In this paper, we have categorized several forms of el- lipsis and event reference according to two features: (1) whether the form leaves behind an empty constituent in the syntax, and (2) whether the form is anaphoric in the semantics. We have also described two forms of discourse inference, namely Common Topic inference and Coherent Situation inference. The interaction be- tween the two features and the two types of discourse inference predicts facts concerning gapping, VP-ellipsis, event reference, and interclausal coherence for which it is otherwise difficult to account. 
In future work we will address other forms of ellipsis and event reference, as well as integrate a previous account of strict and sloppy ambiguity into this framework (Kehler, 1993a). Acknowledgments This work was supported in part by National Science Foundation Grant IRI-9009018, National Science Foun- dation Grant IRI-9350192, and a grant from the Xerox Corporation. I would like to thank Stuart Shieber, Bar- bara Grosz, Fernando Pereira, Mary Dalrymple, Candy Sidner, Gregory Ward, Arild Hestvik, Shalom Lappin, Christine Nakatani, Stanley Chen, Karen Lochbaum, and two anonymous reviewers for valuable discussions and comments on earlier drafts. References Nicholas Asher. 1993. Reference to Abstract Objects in Discourse. SLAP 50, Dordrecht, Kluwer. Mary Dalrymple, Stuart M. Shieber, and Fernando Pereira. 1991. Ellipsis and higher-order unification. Linguistics and Philosophy, 14:399-452. M.A.K. Halliday and Ruqaiya Hasan. 1976. Cohesion in English. Longman's, London. English Language Series, Title No. 9. Jerry R. Hobbs, Mark E. Stickel, Douglas E. Appelt, and Paul Martin. 1993. Interpretation as abduction. Artificial Intelligence, 63:69-142. Jerry Hobbs. 1990. Literature and Cognition. CSLI Lecture Notes 21. David Hume. 1748. An Inquiry Concerning Human Understanding. The Liberal Arts Press, New York, 1955 edition. Andrew Kehler. 1993a. A discourse copying algorithm for ellipsis and anaphora resolution. In Proceedings of the Sixth Conference of the European Chapter of the Association for Computational Linguistics (EACL- 93), pages 203-212, Utrecht, the Netherlands, April. Andrew Kehler. 1993b. The effect of establishing co- herence in ellipsis and anaphora resolution. In Pro- ceedings of the 31st Conference of the Association for Computational Linguistics (ACL-93), pages 62-69, Columbus, Ohio, June. Andrew Kehler. 1994. A discourse processing account of gapping and causal implicature. Manuscript pre- sented at the Annual Meeting of the Linguistic Soci- ety of America, January. Nancy Levin and Ellen Prince. 1982. Gapping and causal implicature. Presented at the Annual Meeting of the Linguistic Society of America. Fernando Pereira. 1990. Categorial semantics and scoping. Computational Linguistics, 16(1):1-10. Ellen Prince. 1986. On the syntactic marking of pre- supposed open propositions. In Papers from the Parasession on pragmalics and grammatical theory at the g2nd regional meeting of the Chicago Linguis- tics society, pages 208-222, Chicago, IL. Hub Priist. 1992. On Discourse Structuring, VP Anaphora, and Gapping. Ph.D. thesis, University of Amsterdam. Ivan Sag and Jorge Hankamer. 1984. Toward a theory of anaphoric processing. Linguistics and Philosophy, 7:325-345. Ivan Sag. 1976. Deletion and Logical Form. Ph.D. thesis, MIT. Remko Scha and Livia Polanyi. 1988. An augmented context free grammar for discourse. In Proceedings of the International Conference on Computational Linguistics (COLING-88), pages 573-577, Budapest, August. Mark Steedman. 1990. Gapping as constituent coordi- nation. Linguistics and Philosophy, 13(2):207-263. 57 | 1994 | 8 |
A HYBRID REASONING MODEL FOR INDIRECT ANSWERS Nancy Green Department of Computer Science University of Delaware Newark, DE 19716, USA Internet: [email protected] Sandra Carberry Department of Computer Science University of Delaware Visitor: Inst. for Research in Cognitive Science University of Pennsylvania Internet: [email protected] Abstract This paper presents our implemented computa- tional model for interpreting and generating in- direct answers to Yes-No questions. Its main fea- tures are 1) a discourse-plan-based approach to implicature, 2) a reversible architecture for gen- eration and interpretation, 3) a hybrid reasoning model that employs both plan inference and log- ical inference, and 4) use of stimulus conditions to model a speaker's motivation for providing ap- propriate, unrequested information. The model handles a wider range of types of indirect answers than previous computational models and has sev- eral significant advantages. 1. INTRODUCTION Imagine a discourse context for (1) in which R's use of just (ld) is intended to convey a No, i.e., that R is not going shopping tonight. (By con- vention, square brackets indicate that the enclosed text was not explicitly stated.) The part of R's re- sponse consisting of (ld) - (le) is what we call an indirect answer to a Yes-No question, and if (lc) had been uttered, (lc) would have been called a direct answer. l.a. Q: I need a ride to the mall. b. Are you going shopping tonight? c. R: [no] d. My car's not running. e. The rear axle is broken. According to one study of spoken English [Stenstrhm, 1984], 13 percent of responses to Yes- No questions were indirect answers. Thus, the ability to interpret indirect answers is required for robust dialogue systems. Furthermore, there are good reasons for generating indirect answers in- stead of just yes, no, or I don't know. First, they may provide information which is needed to avoid misleading the questioner [Hirschberg, 1985]. Sec- ond, they contribute to an efficient dialogue by anticipating follow-up questions. Third, they may be used for social reasons, as in (1). This paper provides a computational model for the interpretation and generation of indirect answers to Yes-No questions in English. More pre- cisely, by a Yes-No question we mean one or more utterances used as a request by Q (the questioner) that R (the responder) convey R's evaluation of the truth of a proposition p. An indirect answer implicitly conveys via one or more utterances R's evaluation of the truth of the questioned proposi- tion p, i.e. that p is true, that p is false, that there is some truth to p, that p may be true, or that p may be false. Our model presupposes that Q's question has been understood by R as intended by Q, that Q's request was appropriate, and that Q and R are engaged in a cooperative goal-directed dialogue. The interpretation and generation com- ponents of the model have been implemented in Common Lisp on a Sun SPARCstation. The model employs an agent's pragmatic knowledge of how language typically is used to answer Yes-No questions in English to constrain the process of generating and interpreting indirect answers. This knowledge is encoded as a set of domain-independent discourse plan operators and a set of coherence rules, described in section 2.1 Figure 1 shows the architecture of our system. It is reversible in that the same pragmatic knowl- edge is used by the interpretation and generation modules. 
The interpretation algorithm, described in section 3, is a hybrid approach employing both plan inference and logical inference to infer R's dis- course plan. The generation algorithm, described in section 4, constructs R's discourse plan in two phases. During the first phase, stimulus condi- tions are used to trigger goals to include appro- priate, extra information in the response plan. In the second phase, the response plan is pruned to eliminate parts which can be inferred by Q. hOur main sources of data were previous studies [Hirschberg, 1985, Stenstrhm, 1984], transcripts of naturally occurring two-person dialogue [American Express transcripts, 1992], and constructed examples. 58 discourse plan operators discourse expectation response --I INTERPRETATION I I G:NERATION I coherence rules discourse expectation R's beliefs Figure 1: Architecture of system 2. PRAGMATIC KNOWLEDGE Linguists (e.g. see discussion in [Levinson, 1983]) have claimed that use of an utterance in a dia- logue may create shared expectations about sub- sequent utterances. In particular, a Yes-No ques- tion creates the discourse expectation that R will provide R's evaluation of the truth of the ques- tioned proposition p. Furthermore, Q's assump- tion that R's response is relevant triggers Q's at- tempt to interpret R's response as providing the requested information. We have observed that coherence relations similar to the subject-matter relations of Rhetorical Structure Theory (RST) [Mann and Thompson, 1987] can be used in defin- ing constraints on the relevance of.an indirect an- swer. For example, the relation between the (im- plicit) direct answer in (2b) and each of the indi- rect answers in (2c) - (2e) is similar to RST's rela- tions of Condition, Elaboration, and (Volitional) Cause, respectively. 2.a. Q: Are you going shopping tonight? b. R: [yes] c. if I finish my homework d. I'm going to Macy's e. Winter clothes are on sale Furthermore, for Q to interpret any of (2c) - (2e) as conveying an affirmative answer, Q must be- lieve that R intended Q to recognize the relational proposition holding between the indirect answer and (2b), e.g. that (2d) is an elaboration of (25). Also, coherence relations hold between parts of an indirect answer consisting of multiple utterances. For example, (le) describes the cause of the fail- ure reported in (ld). Finally, we have observed that different relations are usually associated with different types of answers. Thus, a speaker who has inferred a plausible coherence relation holding between an indirect answer and a possible (im- plicit) direct answer may be able to infer the di- rect answer. (If more than one coherence relation ( (Plausible (cr-obstacle ((not (in-state ?stateq ?tq)) (not (occur ?eventp ?tp))))) <- (state ?stateq) (event ?eventp) (timeperiod ?tq) (timeperiod ?tp) (before ?tq ?tp) (app-cond ?stateq ?eventp) (unless (in-state ?stateq ?tq)) (unless (occur ?eventp ?tp))) Figure 2: A coherence rule for cr-obstacle is plausible, or if the same coherence relation is used with more than one type of answer, then the indirect answer may be ambiguous.) In our model we formally represent the co- herence relations which constrain indirect answers by means of coherence rules. Each rule consists of a consequent of the form (Plausible (CR q p)) and an antecedent which is a conjunction of conditions, where CR is the name of a coherence relation and q and p are formulae, symbols pre- fixed with "?" are variables, and all variables are implicitly universally quantified. 
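To make the rule format concrete, the antecedent of the cr-obstacle rule in Figure 2 can be pictured as a list of condition patterns checked against a store of facts representing what R takes to be mutually believed with Q. The sketch below is an illustrative Python rendering only; the implemented system encodes such rules for a Common Lisp Horn clause theorem-prover, and the function names, fact names, and example beliefs here are hypothetical.

def is_var(term):
    # Variables are the "?"-prefixed symbols of the rule notation.
    return isinstance(term, str) and term.startswith("?")

def unify(pattern, fact, env):
    # Match one condition pattern against one ground fact, extending env.
    if isinstance(pattern, tuple) and isinstance(fact, tuple) and len(pattern) == len(fact):
        for p, f in zip(pattern, fact):
            env = unify(p, f, env)
            if env is None:
                return None
        return env
    if is_var(pattern):
        if pattern in env:
            return env if env[pattern] == fact else None
        extended = dict(env)
        extended[pattern] = fact
        return extended
    return env if pattern == fact else None

def prove(conditions, facts, env):
    # Yield every binding of the "?" variables under which all conditions hold.
    if not conditions:
        yield env
        return
    first, rest = conditions[0], conditions[1:]
    if first[0] == "unless":
        # (unless c) succeeds exactly when c is not provable from the facts.
        if not any(True for _ in prove([first[1]], facts, env)):
            yield from prove(rest, facts, env)
        return
    for fact in facts:
        extended = unify(first, fact, env)
        if extended is not None:
            yield from prove(rest, facts, extended)

# Antecedent of the cr-obstacle rule of Figure 2, written as condition patterns.
CR_OBSTACLE_ANTECEDENT = [
    ("state", "?stateq"), ("event", "?eventp"),
    ("timeperiod", "?tq"), ("timeperiod", "?tp"),
    ("before", "?tq", "?tp"),
    ("app-cond", "?stateq", "?eventp"),
    ("unless", ("in-state", "?stateq", "?tq")),
    ("unless", ("occur", "?eventp", "?tp")),
]

# Hypothetical mutual beliefs for example (1): R's car being in running
# condition is a plausible enabling condition of R's going shopping.
facts = [
    ("state", "car-running"), ("event", "go-shopping"),
    ("timeperiod", "present"), ("timeperiod", "future"),
    ("before", "present", "future"),
    ("app-cond", "car-running", "go-shopping"),
]

plausible = any(True for _ in prove(CR_OBSTACLE_ANTECEDENT, facts, {}))

Under these assumed beliefs, plausible comes out true, which corresponds to proving a consequent of the form (Plausible (cr-obstacle q p)) for the car and shopping propositions of example (1).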
Each antecedent condition represents a condition which is true iff it is believed by R to be mutually believed with Q.2 Each rule represents sufficient conditions for the plausibility of (CR q p) for some CR, q, p. An example of one of the rules describing the Obsta- 2Our model of R's beliefs (and similarly for Q's), represented as a set of Horn clauses, includes 1) general world knowledge presumably shared with Q, 2) knowl- edge about the preceding discourse, and 3) R's beliefs (including "weak beliefs"} about Q's beliefs. Much of the shared world knowledge needed to evaluate the co- herence rules consists of knowledge from domain plan operators. 59 (Answer-yes s h ?p): Applicability conditions: (discourse-expectation (informif s h ?p)) (believe s ?p) Nucleus: (inform s h ?p) Satellites: (Use-condition s h ?p) (Use-cause s h ?p) (Use-elaboration s h ?p) Primary goals: (BMB h s ?p) Figure 3: Discourse plan (Answer-no s h ?p): Applicability conditions: (discourse-expectation (informif s h ?p)) (believe s (not ?p)) Nucleus: (inform s h (not ?p)) Satellites: (Use-otherwise s h (not ?p)) (Use-obstacle s h (not ?p)) (Use-contrast s h (not ?p)) Primary goals: (BMB h s (not ?p)) operators for Yes and No answers cle relation 3 is shown in Figure 2. The predicates used in the rule are defined as follows: (in-state p /) denotes that p holds during t, (occur p t) de- notes that p happens during t, (state z) denotes that the type of x is state, (event x) denotes that the type of x is event, (timeperiod t) denotes that t is a time interval, (before tl t2) denotes that tl begins before or at the same time as t2, (app-cond q p} denotes that q is a plausible enabling con- dition for doing p, and (unless p) denotes that p is not provable from the beliefs of the reasoner. For example, this rule describes the relation be- tween (ld) and (lc), where (ld) is interpreted as (not (in-state (running R-car) Present)) and (lc) as (not (occur (go-shopping R) Future)). That is, this relation would be plausible if Q and R share the belief that a plausible enabling condition of a subaction of a plan for R to go shopping at the mall is that R's car be in running condition. In her study of responses to questions, Sten- strSm [Stenstrfm, 1984] found that direct an- swers are often accompanied by extra, relevant information, 4 and noted that often this extra in- formation is similar in content to an indirect an- swer. Thus, the above constraints on the relevance of an indirect answer can serve also as constraints on information accompanying a direct answer. For maximum generality, therefore, we went beyond our original goal of handling indirect answers to the goal of handling what we call full answers. A full answer consists of an implicit or explicit direct answer (which we call the nucleus) and, possibly, extra, relevant information (satellites). s In our awhile Obstacle is not one of the original relations of RST, it is similar to the causal relations of RST. 461 percent of direct No answers and 24 percent of direct Yes answers 5The terms nucleus and satellite have been bor- rowed from RST to reflect the informational con- straints within a full answer. Note that according to RST, a property of the nucleus is that its removal re- model, we represent each type of full answer as a (top-level) discourse plan operator. By represent- ing answer types as plan operators, generation can be modeled as plan construction, and interpreta- tion as plan recognition. 
Examples of (top-level) operators describing a full Yes answer and a full No answer are shown in Figure 3. 6 To explain our notation, s and h are constants denoting speaker (R) and hearer (Q), respectively. Symbols prefixed with "?" de- note propositional variables. The variables in the header of each top-level operator will be instan- tiated with the questioned proposition. In inter- preting example (1), ?p would be instantiated with the proposition that R is going shopping tonight. Thus, instantiating the Answer-No operator in Figure 3 with this proposition would produce a plan for answering that P~ is not going shopping tonight. Applicability conditions are necessary conditions for appropriate use of a plan operator. For example, it is inappropriate for R to give an affirmative answer that p if R believes p is false. Also, an answer to a Yes-No question is not ap- propriate unless s and h share the discourse ex- pectation that s will provide s's evaluation of the truth of the questioned proposition p, which we denote as (discourse-ezpectation (informif s h p)). Primary goals describe the intended effects of the plan operator. We use (BMB h s p) to denote that h believes it mutually believed with s that p [Clark and Marshall, 1981]. In general, the nucleus and satellites of a dis- course plan operator describe primitive or non- primitive communicative acts. Our formalism el- suits in incoherence. However, in our model, a di- rect answer may be removed without causing incoher- ence, provided that it is inferable from the rest of the response. 6The other top-level operators in our model, Answer-hedged, Answer-maybe, and Answer-maybe- not, represent the other answer types handled. 60 (Use-obstacle s h ?p): ;; s tells h of an obstacle explaining ;; the failure ?p Existential variable: ?q Applicability conditions: (believe s (cr-obstacle ?q ?p)) (Plausible (cr-obstacle ?q ?p)) Stimulus conditions: (explanation-indicated s h ?p ?q) (excuse-indicated s h ?p ?q) Nucleus: (inform s h ?q) Satellites: (Use-elaboration s h ?q) (Use-obstacle s h ?q) (Use-cause s h ?q) Primary goals: (BMB h s (cr-obstacle ?q ?p)) Figure 4: Discourse plan operator for Obstacle lows zero, one, or more occurrences of a satellite in a full answer, and the expected (but not re- quired) order of nucleus and satellites is the order they are listed in the operator. (inform s h p) de- notes the primitive act of s informing h that p. The satellites in Figure 3 refer to non-primitive acts, described by discourse plan operators which we have defined (one for each coherence relation used in a full answer). For example, Use-obstacle, a satellite of Answer-no in Figure 3, is defined in Figure 4. To explain the additional notation in Figure 4, (cr-obstacle q p) denotes that the coherence rela- tion named obstacle holds between q and p. Thus, the first applicability condition can be glossed as requiring that s believe that the coherence rela- tion holds. In the second applicability condition, (Plausible (cr-obstacle q p)) denotes that, given what s believes to be mutually believed with h, the coherence relation (cr-obstacle q p) is plausi- ble. This sort of applicability condition is evalu- ated using the coherence rules described above. Stimulus conditions describe conditions moti- vating a speaker to include a satellite during plan construction. They can be thought of as trig- gers which give rise to new speaker goals. 
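Viewed as data, an operator such as Use-obstacle is simply a record with the fields shown in Figures 3 and 4. The following transcription into Python is purely illustrative; the class and field names are invented for exposition, and the implemented operators are Common Lisp structures rather than Python objects.

from dataclasses import dataclass, field
from typing import List, Tuple

Condition = Tuple  # e.g. ("believe", "s", ("cr-obstacle", "?q", "?p"))

@dataclass
class DiscoursePlanOperator:
    name: str
    existential_vars: List[str] = field(default_factory=list)
    applicability_conditions: List[Condition] = field(default_factory=list)
    stimulus_conditions: List[Condition] = field(default_factory=list)
    nucleus: List[Condition] = field(default_factory=list)
    satellites: List[Condition] = field(default_factory=list)
    primary_goals: List[Condition] = field(default_factory=list)

# The Use-obstacle operator of Figure 4, transcribed field by field.
USE_OBSTACLE = DiscoursePlanOperator(
    name="Use-obstacle",
    existential_vars=["?q"],
    applicability_conditions=[("believe", "s", ("cr-obstacle", "?q", "?p")),
                              ("Plausible", ("cr-obstacle", "?q", "?p"))],
    stimulus_conditions=[("explanation-indicated", "s", "h", "?p", "?q"),
                         ("excuse-indicated", "s", "h", "?p", "?q")],
    nucleus=[("inform", "s", "h", "?q")],
    satellites=[("Use-elaboration", "s", "h", "?q"),
                ("Use-obstacle", "s", "h", "?q"),
                ("Use-cause", "s", "h", "?q")],
    primary_goals=[("BMB", "h", "s", ("cr-obstacle", "?q", "?p"))],
)

Keeping the operators in this declarative form is one way to realize the reversibility noted in section 1, since the same objects can drive both plan construction in generation and plan recognition in interpretation.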
In order for a satellite to be selected during gen- eration, all of its applicability conditions and at least one of its stimulus conditions must hold. While stimulus conditions may be derivative of principles of cooperativity [Grice, 1975] or po- liteness [Brown and Levinson, 1978], they provide a level of precompiled knowledge which reduces the amount of reasoning required for content- planning. For example, Figure 5 depicts the dis- course plan which would be constructed by R (and Answer-no /\ [Ic] Use-obstacle /\ Id Use-obstacle J le Figure 5: Discourse plan underlying (ld) - (le) must be inferred by Q) for (1). The first stimu- lus condition of Use-obstacle, which is defined as holding whenever s suspects that h would be sur- prised that p holds, describes R's reason for includ- ing (le). The second stimulus condition, which is defined as holding whenever s suspects that the Yes-No question is a prerequest [Levinson, 1983], describes R's reason for including (ld). 7 3. INTERPRETATION We assume that interpretation of dialogue is controlled by a Discourse Model Processor (DMP), which maintains a Discourse Model [Carberry, 1990] representing what Q believes R has inferred so far concerning Q's plans. The dis- course expectation generated by a Yes-No question leads the DMP to invoke the answer recognition process to be described in this section. If answer recognition is unsuccessful, the DMP would invoke other types of recognizers for handling less pre- ferred types of responses, such as I don't know or a clarification subdialogue. To give an example of where our recognition algorithm fits into the above framework, consider (4). 4a. Q: Is Dr. Smith teaching CSI next fall? b. R: Do you mean Dr. Smithson? c. Q: Yes. d. R: [no] e. He will be on sabbatical next fall. f. Why do you ask? Note that a request for clarification and its answer are given in (4b) - (4c). Our recognition algorithm, when invoked with (4e) - (4f) as input, would infer an Answer-no plan accounting for (4e) and satis- fying the discourse expectation generated by (4a). When invoked by the DMP, our interpretation module plays the role of the questioner Q. The inputs to interpretation in our model consist of 7Stimulus conditions are formally defined by rules encoded in the same formalism as used for our co- herence rules. A full description of the stimu- lus conditions used in our model can be found in [Green, in preparation]. 61 1) the set of discourse plan operators and the set of coherence rules described in section 2, 2) Q's beliefs, 3) the discourse expectation (discourse- expectation (informif s h p)), and 4) the semantic representation of the sequence of utterances per- formed by R during R's turn. The output is a partially ordered set (possibly empty) of answer discourse plans which it is plausible to ascribe to R as underlying It's response. The set is ordered by plausibility using preference criteria. Note that we assume that the final choice of a discourse plan to ascribe to R is made by the DMP, since the DMP must select an interpretation consistent with the interpretation of any remaining parts of R's turn not accounted fo~ by the answer discourse plan, e.g. (4f). To give a high-level description of our answer interpretation algorithm, first, each (top-level) an- swer discourse plan operator is instantiated with the questioned proposition from the discourse ex- pectation. For example (1), each answer operator would be instantiated with the proposition that R is going shopping tonight. 
Next, the answer interpreter must verify that the applicability con- ditions and primary goals which would be held by R if R were pursuing the plan are consistent with Q's beliefs about It's beliefs and goals. Consis- tency checking is implemented using a Horn clause theorem-prover. For all candidate answer plans which have not been eliminated during consistency checking, recognition continues by attempting to match the utterances in R's turn to the actions specified in the candidates. However, no candi- date plan may be constructed which violates the following structural constraint. Viewing a candi- date plan's structure as a tree whose leaves are primitive acts from which the plan was inferred, no subtree Ti may contain an act whose sequential position in the response is included in the range of sequential positions in the response of acts in a subtree Tj having the same parent node as 7~. For example, (5e) cannot be interpreted as related to (5c) by cr-obstaele, due to the occurrence of (5d) between (5c) and (5e). Note that a more coherent response would consist of the sequence, (5c), (5e), (Sd). 5.a. O: Are you going shopping tonight? b. R: [no] c. My car's not running. d, Besides, I'm too tired. e. The timing belt is broken. To recognize a subplan for a non-primitive ac- tion, e.g. Use-obstacle in Figure 4, a similar proce- dure is used. Note that any applicability condition of the form (Plausible (CR q p)) is defined to be consistent with Q's beliefs if it is provable, i.e., if the antecedents of a coherence rule for CR are true with respect to what Q believes to be mutu- ally believed with R. The recognition process for non-primitive actions differs in that these opera- tors contain existential variables which must be instantiated. In our model, the answer interpreter first attempts to instantiate an existential variable with a proposition from R's response. For exam- ple (1), the existential variable ?q of Use-obstacle would be instantiated with the proposition that R's car is not running. However, if (ld) was not explicitly stated by R, i.e., if R's response had just consisted of (le), it would be necessary for ?q to be instantiated with a hypothesized proposition, corresponding to (ld), to understand how (le) re- lates to R's answer. The answer interpreter finds the hypothesized proposition by a subprocedure we refer to as hypothesis generation. Hypothesis generation is constrained by the assumption that R's response is coherent, i.e., that (le) may play the role of a satellite in a subplan of some Answer plan. Thus, the coherence rules are used as a source of knowledge for generating hy- potheses. Hypothesis generation begins with ini- tializing the root of a tree of hypotheses with a proposition p0 to be related to a plan, e.g. the proposition conveyed by (le). A tree of hypothe- ses is constructed by expanding each of its nodes in breadth-first order until all goal nodes (as de- fined below) have been reached, subject to a limit on the depth of the breadth-first search, s A node containing a proposition Pi is expanded by search- ing for all propositions Pi+l such that for some coherence relation CR which may be used in the type of answer being recognized, (Plausible ( CR pi pi+l)) holds from Q's point of view. (The search is implemented using a Horn clause theorem prover.) 
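A sketch of this breadth-first hypothesis generation is given below in illustrative Python. Here plausible and is_goal stand in for calls to the Horn clause theorem prover: plausible(cr, p) is assumed to return the propositions that could stand in coherence relation cr to p given the mutual beliefs, and is_goal tests the invoking operator's partially instantiated condition. The names and the exact return convention are hypothetical.

from collections import deque

def generate_hypotheses(p0, coherence_relations, plausible, is_goal, max_depth=3):
    # Breadth-first expansion of the tree of hypotheses rooted at p0,
    # the proposition conveyed by the unattached utterance (e.g. (1e)).
    frontier = deque([(p0, [])])     # (current proposition, nodes below the root)
    while frontier:
        prop, path = frontier.popleft()
        if is_goal(prop):
            # Nodes between the root and the goal are the hypothesized
            # propositions; the goal node satisfies the invoking condition.
            return path[:-1], prop
        if len(path) >= max_depth:   # depth bound on the breadth-first search
            continue
        for cr in coherence_relations:
            for p_next in plausible(cr, prop):
                frontier.append((p_next, path + [p_next]))
    return None

For the response in (1) given without (1d), the search would start from the proposition conveyed by (1e), hypothesize the proposition that would have been conveyed by (1d), and stop when it reaches the proposition conveyed by a No answer to the question, as in Figure 6 below.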
The plan operator invoking hypothesis generation has a partially instantiated applicability condition of the form (Plausible (CR ?q p)), where CR is a coherence relation, p is the proposition that was used to instantiate the header variable of the operator, and ?q is the operator's existential variable. Since the purpose of the search is to find a proposition q with which to instantiate ?q, a goal node is defined as a node containing a proposition q satisfying the above condition. (E.g., in Figure 6, p0 is the proposition conveyed by (1e), p1 is the proposition conveyed by (1d), p0 and p1 are plausibly related by cr-obstacle, p2 is the proposition conveyed by a No answer to (1a), p1 and p2 are plausibly related by cr-obstacle, and p2 is a goal node; therefore, p1 will be used to instantiate the existential variable ?q in Use-obstacle.) After the existential variable is instantiated, plan recognition proceeds as described above at the point where the remaining conditions are checked for consistency.

8Placing a limit on the maximum depth of the tree is reasonable, given human processing constraints.

p2: goal (conveyed if (1c) were uttered); p1: hypothesized (conveyed if (1d) were uttered); p0: proposition from utterance (conveyed in (1e))
Figure 6: Hypothesis generation tree relating (1e) to (1c)

For example, as recognition of the Use-obstacle subplan proceeds, (1e) would be recognized as the realization of a Use-obstacle satellite of this Use-obstacle subplan. Ultimately, the inferred plan would be the same as that shown in Figure 5, except that (1d) would be marked as hypothesized. The set of candidate plans inferred from a response is ranked using two preference criteria. First, as the number of hypothesized propositions in a candidate increases, its plausibility decreases. Second, as the number of non-hypothesized propositions accounted for by the plan increases, its plausibility increases.

To summarize the interpretation algorithm, it is primarily expectation-driven in the sense that the answer interpreter attempts to interpret R's response as an answer generated by some answer discourse plan operator. Whenever the answer interpreter is unable to relate an utterance to the plan which it is currently attempting to recognize, it attempts to find a connection by hypothesis generation. Logical inference plays a supplementary role, namely, in consistency checking (including inferring the plausibility of coherence relations) and in hypothesis generation.

4. GENERATION

The inputs to generation consist of 1) the same sets of discourse plan operators and coherence rules used in interpretation, 2) R's beliefs, and 3) the same discourse expectation. The output is a
For example, given an appropriate set of R's beliefs, our system generates a plan for asserting only the proposition conveyed in (le) as an answer to (lb). 11 Content-planning is performed by top-down expansion of an answer discourse plan operator. Note that applicability conditions prevent inap- propriate use of an operator, but they do not model a speaker's motivation for providing extra information. Further, a full answer might provide too much information if every satellite whose oper- ator's applicability conditions held were included in a full answer. On the other hand, at the time R is asked the question, R may not yet have the pri- mary goals of a potential satellite. To overcome these limitations, we have incorporated stimulus conditions into the discourse plan operators in our model. As mentioned in section 2, stimulus condi- tions can be thought of as triggers or motivating conditions which give rise to new speaker goals. By analyzing the speaker's possible motivation for providing extra information in the examples in our corpus, we have identified a small set of stimu- lus conditions which reflect general concerns of accuracy, efficiency, and politeness. In order for a satellite to be included in a full answer, all of its applicability conditions and at least one of its stimulus conditions must hold. (A theorem prover is used to search for an instantiation of the exis- tential variable satisfying the above conditions.) The output of the content-planning phase, a discourse plan representing a full answer, is the input to the plan-pruning phase. The goal of this phase is to make the response more concise, i.e. to determine which of the planned acts can be omit- ted while still allowing Q to infer the full plan. To do this, the generator considers each of the acts in the frontier of the full plan tree from right to left (thus ensuring that a satellite is considered be- fore its nucleus). The generator creates trial plans consisting of the original plan minus the nodes pruned so far and minus the current node. Then, the generator simulates Q's interpretation of the trial plan. If Q could infer the full plan (as the most preferred plan), then the current node can be pruned. Note that, even when it is not possi- ble to prune the direct answer, a benefit of this approach is that it generates appropriate extra in- formation with direct answers. 11The tactical component must choose an appropri- ate expression to refer to R's car's timing belt, de- pending on whether (ld) is omitted. 63 5. RELATED RESEARCH It has been noted [Diller, 1989, Hirsehberg, 1985, Lakoff, 1973] that indirect answers conversa- tionally implicale [Grice, 1975] direct answers. Recently, philosophers [Thomason, 1990, MeCaf- ferty, 1987] have argued for a plan-based ap- proach to conversational implicature. Plan-based computational models have been proposed for similar discourse interpretation problems, e.g. indirect speech acts [Perrault and Allen, 1980, Hinkelman, 1989], but none of these models ad- dress the interpretation of indirect answers. Also, our use of coherence relations, both 1) as con- straints on the relevance of indirect answers, and 2) in our hypothesis generation algorithm, is unique in plan-based interpretation models. In addition to RST, a number of theories of text coherence have been proposed [Grimes, 1975, Halliday, 1976, Hobbs, 1979, Polanyi, 1986, Reiehman, 1984]. Coherence relations have been used in interpretation [Dahlgren, 1989, Wu and Lytinen, 1990]. 
However, inference of co- herence relations alone is insufficient for inter- preting indirect answers, since additional prag- matic knowledge (what we represent as discourse plan operators) and discourse expectations are necessary also. Coherence relations have been used in generation [MeKeown, 1985, Hovy, 1988, Moore and Paris, 1988, Horacek, 1992] but none of these models generate indirect answers. Also, our use of stimulus conditions is unique in gener- ation models. Most previous formal and computational models of conversational implicature [Gazdar, 1979, Green, 1990, Hirschberg, 1985, Lasearides and Asher, 1991] derive implieatures by classi- cal or nonclassical logical inference with one or more licensing rules defining a class of implica- tures. Our coherence rules are similar conceptu- ally to the licensing rules in Lascarides et al.'s model of temporal implicature. (However, dif- ferent coherence relations play a role in indirect answers.) While Lascarides et al. model tem- poral implicatures as defeasible inferences, such an approach to indirect answers would fail to distinguish what R intends to convey by his re- sponse from other default inferences. We claim that R's response in (1), for example, does not warrant the attribution to R of the intention to convey that the rear axle of R's car is made of metal. Hirsehberg's model for deriving scalar im- plicatures addresses only a few of the types of indirect answers that our model does. Further- more, our discourse-plan-based approach avoids problems faced by licensing-rule-based approaches in handling backward cancellation and multiple- utterance responses [Green and Carberry, 1992]. Also, a potential problem faced by those ap- proaches is scalability, i.e., as licensing rules for handling more types of implieature are added, rule conflicts may arise and tractability may decrease. In contrast, our approach avoids such problems by restricting the use of logical inference. 6. CONCLUSION We have described our implemented computa- tional model for interpreting and generating in- direct answers to Yes-No questions. Its main fea- tures are 1) a discourse-plan-based approach to implicature, 2) a reversible architecture, 3) a hy- brid reasoning model, and 4) use of stimulus condi- tions for modeling a speaker's motivation for pro- viding appropriate extra information. The model handles a wider range of types of indirect answers than previous computational models. Further- more, since Yes-No questions and their answers have features in common with other types of adja- cency pairs [Levinson, 1983], we expect that this approach can be extended to them as well. Fi- nally, a discourse-plan-based approach to implica- ture has significant advantages over a licensing- rule-based approach. In the future, we would like to integrate our interpretation and generation components with a dialogue system and investi- gate other factors in generating indirect answers (e.g. multiple goals, stylistic concerns). References [Allen, 1979] James F. Allen. A Plan-Based Ap- proach 1o Speech Act Recognition. PhD the- sis, University of Toronto, Toronto, Ontario, Canada, 1979. [American Express transcripts, 1992] American Express tapes. Transcripts of audio- tape conversations made at SRI International, Menlo Park, California. Prepared by Jaequeline Kowto under the direction of Patti Price. [Brown and Levinson, 1978] Penelope Brown and Stephen Levinson. Universals in language usage: Politeness phenomena. In Es- ther N. 
Goody, editor, Questions and politeness: Strategies in social inleraction, pages 56-289. Cambridge University Press, Cambridge, 1978. [Carberry, 1990] Sandra Carberry. Plan Recogni- tion in Natural Language Dialogue. MIT Press, Cambridge, Massachusetts, 1990. [Clark and Marshall, 1981] H. Clark and C. Mar- shall. Definite reference and mutual knowl- edge. In A. K. Joshi, B. Webber, and I. Sag, editors, Elements of discourse understanding. Cambridge University Press, Cambridge, 1981. 64 [Dahlgren, 1989] Kathleen Dahlgren. Coherence relation assignment. In Proceedings of the An- nual Meeting of the Cognitive Science Society, pages 588-596, 1989. [Diller, 1989] Anne-Marie Diller. La pragmatique des questions et des rdponses. In Tfibinger Beitr~ige zur Linguistik 243. Gunter Narr Ver- lag, Tiibingen, 1989. [Gazdar, 1979] G. Gazdar. Pragmatics: lmplica- ture, Presupposition, and Logical Form. Aca- demic Press, New York, 1979. [Green, 1990] Nancy L. Green. Normal state im- plicature. In Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics, pages 89-96, 1990. [Green, in preparation] Nancy L. Green. A Com- putational Model for Interpreting and Generat- ing Indirect Answers. PhD thesis, University of Delaware, in preparation. [Green and Carberry, 1992] Nancy L. Green and Sandra Carberry. Conversational implicatures in indirect replies. In Proceedings of the 30th Annual Meeting of the Association for Compu- tational Linguistics, pages 64-71, 1992. [Grice, 1975] H. Paul Grice. Logic and conver- sation. In P. Cole and J. L. Morgan, editors, Syntax and Semantics III: Speech Acts, pages 41-58, New York, 1975. Academic Press. [Grimes, 1975] J. E. Grimes. The Thread of Dis- course. Mouton, The Hague, 1975. [Halliday, 1976] M. Halliday. Cohesion in English. Longman, London, 1976. [Hinkelman, 1989] Elizabeth Ann Hinkelman. Linguistic and Pragmatic Constraints on Utter- ance Interpretation. PhD thesis, University of Rochester, 1989. [Hirschberg, 1985] Julia Bell Hirschberg. A The- ory of Scalar Implicalure. PhD thesis, Univer- sity of Pennsylvania, 1985. [Hobbs, 1979] Jerry R. Hobbs. Coherence and coreference. Cognitive Science, 3:67-90, 1979. [Horacek, 1992] Helmut Horacek. An Integrated View of Text Planning. In R. Dale, E. Hovy, D. RSsner, and O. Stock, editors, Aspects of Auto- mated Natural Language Generation, pages 29- 44, Berlin, 1992. Springer-Verlag. [Hovy, 1988] Eduard H. Hovy. Planning coherent multisentential text. In Proceedings of the 26th Annual Meeting of the Association for Compu- tational Linguistics, pages 163-169, 1988. [Lakoff, 1973] Robin Lakoff. Questionable an- swers and answerable questions. In Braj B. Kachru, Robert B. Lees, Yakov Malkiel, An- gelina Pietrangeli, and Sol Saporta, editors, Pa- pers in Honor of Henry and Rende Kahane, pages 453-467, Urbana, 1973. University of Illi- nois Press. [Lascarides and Asher, 1991] Alex Lascarides and Nicholas Asher. Discourse relations and defea- sible knowledge. In Proceedings of the 29th An- nual Meeting of the Association for Computa- tional Linguistics, pages 55-62, 1991. [Levinson, 1983] S. Levinson. Pragmatics. Cam- bridge University Press, Cambridge, 1983. [McCafferty, 1987] Andrew Schaub McCafferty. Reasoning about lmplicature: a Plan-Based Ap- proach. PhD thesis, University of Pittsburgh, 1987. [McKeown, 1985] Kathleen R. McKeown. Text Generation. Cambridge University Press, 1985. [Mann and Thompson, 1987] W. C. Mann and S. A. Thompson. 
Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):167-182, 1987.
[Moore and Paris, 1988] Johanna D. Moore and Cecile L. Paris. Constructing coherent text using rhetorical relations. In Proceedings of the 10th Annual Conference of the Cognitive Science Society, August 1988.
[Perrault and Allen, 1980] R. Perrault and J. Allen. A plan-based analysis of indirect speech acts. American Journal of Computational Linguistics, 6(3-4):167-182, 1980.
[Polanyi, 1986] Livia Polanyi. The linguistic discourse model: Towards a formal theory of discourse structure. Technical Report 6409, Bolt Beranek and Newman Laboratories Inc., Cambridge, Massachusetts, 1987.
[Reichman, 1984] Rachel Reichman. Extended person-machine interface. Artificial Intelligence, 22:157-218, 1984.
[Stenström, 1984] Anna-Brita Stenström. Questions and responses in English conversation. In Claes Schaar and Jan Svartvik, editors, Lund Studies in English 68. CWK Gleerup, Malmö, Sweden, 1984.
[Thomason, 1990] Richmond H. Thomason. Accommodation, meaning, and implicature: Interdisciplinary foundations for pragmatics. In P. Cohen, J. Morgan, and M. Pollack, editors, Intentions in Communication. MIT Press, Cambridge, Massachusetts, 1990.
[Wu and Lytinen, 1990] Horng Jyh Wu and Steven Lytinen. Coherence relation reasoning in persuasive discourse. In Proceedings of the Annual Meeting of the Cognitive Science Society, pages 503-510, 1990.
Learning Phonological Rule Probabilities from Speech Corpora with Exploratory Computational Phonology Gary Tajchman, Daniel Jurafsky, and Eric Fosler International Computer Science Institute and University of California at Berkeley {tajchman,jurafsky, fosler}~icsi.berkeley.edu Abstract This paper presents an algorithm for learn- ing the probabilities of optional phonolog- ical rules from corpora. The algorithm is based on using a speech recognition sys- tem to discover the surface pronunciations of words in spe.ech corpora; using an auto- matic system obviates expensive phonetic labeling by hand. We describe the details of our algorithm and show the probabili- ties the system has learned for ten common phonological rules which model reductions and coarticulation effects. These probabili- ties were derived from a corpus of 7203 sen- tences of read speech from the Wall Street Journal, and are shown to be a reason- ably close match to probabilities from pho- netically hand-transcribed data (TIMIT). Finally, we analyze the probability differ- ences between rule use in male versus fe- male speech, and suggest that the differ- ences are caused by differing average rates of speech. 1 Introduction Phonological r-ules have formed the basis of phono- logical theory for decades, although their form and their coverage of the data has changed over the years. Until recently, however, it was difficult to deter- mine the relationship between hand-written phono- logical rules and actual speech data. The current availability of large speech corpora and pronunci- ation dictionaries has allowed us to connect rules and speech in much tighter ways. For example, a number of algorithms have recently been proposed which automatically induce phonological rules from dictionaries or corpora (Gasser 1993; Ellison 1992; Daelemans c~ al. 1994). While such algorithms have successfully induced syllabicity or harmony constraints, or simple oblig- *Currently at Voice Processing Corp, 1 Main St, Cambridge, MA 02142: [email protected] atory phonological rules, there has been much less work on non-obligatory (optional) rules. In part this is because optional rules like flapping, vowel reduc- tion, and various coarticulation effects are postlexi- cal and often products of fast speech, and hence have been considered less central to phonological theory. In part, however, this is because optional rules are inherently probabilistic. Where obligatory rules ap- ply to every underlying form which meets the en- vironmental conditions, producing a single surface form, optional rules may not apply, and hence the underlying form may appear as the surface form, unmodified by the rule. This makes the induction problem non-deterministic, and not solvable by the above algorithms. 1 While optional rules have received less attention in linguistics because of their probabilistic nature, in speech recognition, by contrast, optional rules are commonly used to model pronunciation variation. In this paper, we employ techniques from speech recog- nition research to address the problem of assign- ing probabilities to these optional phonological rules. We introduce a completely automatic algorithm that explores the coverage of a set of phonological rules on a corpus of lexically transcribed speech using the computational resources of a speech recognition sys- tem. This algorithm belongs to the class of tech- niques we call Exploratory Computational Phonol- ogy, which use statistical pattern recognition tools to explore phonological spaces. 
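The paper does not spell out the merging step itself, but the base lexicon it describes can be pictured as a union of the source dictionaries, keyed by word and recording which source supplied each pronunciation (the source tags that reappear later in Tables 4 and 5). The Python sketch below is purely illustrative, and all names in it are hypothetical.

from collections import defaultdict

def build_base_lexicon(sources):
    # sources: source tag (e.g. "CMU", "LIM", "PLX", "TTS") -> {word: [pronunciations]},
    # where a pronunciation is a sequence of phones in the ICSI phone set.
    # Result: word -> {pronunciation tuple: set of source tags that supplied it}.
    lexicon = defaultdict(lambda: defaultdict(set))
    for tag, dictionary in sources.items():
        for word, prons in dictionary.items():
            for pron in prons:
                lexicon[word][tuple(pron)].add(tag)
    return lexicon

# e.g. build_base_lexicon({"CMU": {"butter": [["b", "ah", "t", "er"]]},
#                          "LIM": {"butter": [["b", "ah", "t", "axr"]]}})
# yields two base pronunciations for "butter", each tagged with its source.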
We describe the details of our probability esti- mation algorithm and also present the probabilities the system has learned for ten common phonological rules which model reductions and coarticulation ef- fects. Our probabilities are derived from a corpus of 7203 sentences of read speech from the Wall Street Journal (NIST 1993). We also benchmark the prob- abilities generated by our system against probabil- ities from phonetically hand-transcribed data, and show a relatively good fit. Finally, we analyze the probability differences between rule use in male ver- 1Note that this is true whether phonological theory considers these true phonological rules or rather rules of ~phonetic interpretation". sus female speech, and suggest that the differences are caused by differing average rates of speech. 2 The Algorithm In this section we describe our algorithm which as- signs probabilities to hand-written, optional phono- logical rules like flapping. The algorithm takes a lexicon of underlying forms and applies phonologi- cal rules to produce a new lexicon of surface forms. Then we use a speech recognition system on a large corpus of recorded speech to check how many times each of these surface forms occurred in the corpus. Finally, by knowing which rules were used to gener- ate each surface form, we can compute a count for each rule. By combining this with a count of the times a rule did not apply, the algorithm can com- pute a probability for each rule. The rest of this section will discuss each of the aspects of the algorithm in detail. 2.1 The Base Lexicon Our base lexicon is quite large; it is used to gen- erate the lexicons for all of our speech recognition work at ICSI. It contains 160,000 entries (words) with 300,000 pronunciations. The lexicon contains underlying forms which are very shallow; thus they are post-lexical in the sense that there is no rep- resented relationship between e.g. 'critic' and 'criti- cism' (where critic is pronounced kritik and criticism kritisizrn). However, the entries do not represent flaps, vowel reductions, and other coarticulatory ef- fects. In order to collect our 300,000 pronunciations, we combined seven different on-line pronunciation dic- tionaries, including the five shown in Table 12 . Source [ Words [ Base Prons CMU 95,781 99,279 LIMSI 32,873 37,936 "PRONLEX 30,353 30,354 BRITPRON 77,685 85,450 TTS 77,383 83,297 All Prons 399,265 49,597 81,936 108,834 111,028 Table 1: Pronunciation sources used to build fully expanded lexicon. For further information about these sources please refer to CMU (CMU 1993), LIMSI (Lamel 1993), PRONLEX (COMLEX 1994), BRITPRON (Robin- son 1994). A text-to-speech system was used to gen- 2Although it was not relevant to the experiments de- scribed here, our lexicon also included two sources which directly supply surface forms. These were 13,362 hand- transcribed pronunciations of 5871 words from TIMIT (TIMIT 1990), and 230 pronunciations of 36 words de- rived in-house from the OGI Numbers database (Cole et al. 1994). erate phone sequences from word orthography as an additional source of pronunciations. 
[IPAIARPAIICSI I IPA I ARPAIICSI I b b b b ° bcl d d d d ° dcl g g g gO - gcl p p p pO pcl t t t t ° - tcl k k k k ° - kcl (1 aa aa s s s ae ae z z z A ah ah J' sh sh O ao ao ~ zh zh eh eh f f f 3 ~ er er v v v ih ih IJ th th i iy iy 6 dh dh o ow ow t j" ch ch c~ uh uh dz jh jh u uw uw h hh hh ct w aw aw l'i - hv a ~ ay ay y y y e ey ey r r r 3 y oy oy w w w el 1 1 1 em m m m en n n n a ax rj ng ng ix r dx axr silence h# h# Table 2: Baseform phone set used was the ARPA- BET. This was expanded to include syllabics, stop closures, and reduced vowels, alveolar flap, and voiced h. We represent pronunciations with the set of 54 ARPAbet-like phones detailed in Table 2. All the lexicon sources except LIMSI use ARPABET-like phone sets 3. CMU, BRITPRON, and PRONLEX phone sets include three levels of vowel stress. The pronunciations from all these sources were mapped into our phone set using a set of obligatory rules for stop closures [bcl, dcl, gcl, pcl, tcl, kcl], and op- tional rules to introduce the syllabic consonants [el, em, en], reduced vowels [ax, ix, axr], voiced h [hv], and alveolar flap [dx]. 2.2 Applying Phonological Rules to Build a Surface Lexicon We next apply phonological rules to our base lexi- con to produce the surface lexicon. Since the rules 3The LIMSI pronunciations already included the syl- labic consonants and reduced vowels. For this reason, the words found only in the LIMSI source lexicon did not participate in the probability estimates for the syl- labic and reduced vowel rules. 2 Name Code Rule Reductions Mid vowels RV1 High vowels RV2 R-vowel RV3 Syllabic n SL1 Syllabic m SL2 Syllabic 1 SL3 Syllabic r SL4 Flapping FL1 Flapping-r FL2 H-voicing VH1 Table -stress [aa ae ah ao eh er ey ow uh]---~ ax -stress [iy ih uw] --* ix -stress er --* axr [ax ix] n --* en [ax ix] m ~ em [ax ix] 1 ---* el [ax ix] r ~ ~xr [tcl dcl] [t d]--~ dx/V [ax ix axr] • [tcl dcl] [t d]--* dx/V r __ [ax ix axr] . hh ~ hv / [+voice] [+voice] 3: Phonological Rules are optional, the surface lexicon must contain each underlying pronunciation unmodified, as well as the pronunciation resulting from the application of each relevant phonological rule. Table 3 gives the 10 phonological rules used in these experiments. One goal of our rule-application procedure was to build a tagged lexicon to avoid having to imple- ment a phonological-rule parser to p~rse the surface pronunciations. In a tagged lexicon, each surface pronunciation is annotated with the names of the phonological rules that applied to produce it. Thus when the speech recognizer finds a particular pro- nunciation in the speech input, the list of rules which applied to produce it can simply be looked up in the tagged lexicon. The algorithm applies rules to pronunciations re- cursively; when a context matches the left hand side of a phonological rule "RULE," two pronunciations are produced: one unchanged by the rule (marked -RULE), and one with the rule applied (marked +RULE). The procedure places the +RULE pro- nunciation on the queue for later recursive rule ap- plication, and continues trying to apply phonological rules to the -RULE pronunciation. See Figure 1 for details of the algorithm. While our procedure is not guaranteed to terminate, in practice the phonologi- cal rules we apply have a finite recursive depth. The nondeterministic mapping produces a tagged equiprobable multiple pronunciation lexicon of 510,000 pronunciations for 160,000 words. 
For ex- ample, Table 4 gives our base forms for the word "butter" : Source TTS BPU BPU CMU LIM PLX Pronunciation bah t axr b ah tax b ah t axr bah t er bah t axr bah t er Table 4: Base forms for "butter" For each lexical item, L, do: Place all base prons of L onto queue q While Q is not empty do: Dequeue pronunciation P from q For each phonological rule R, do: If context of R could apply to P Apply R to P, giving P' Tag P' with +R, put on queue Tag P with -R Output P with tags Figure 1: Applying Rules to the Base Lexicon The resulting tagged surface lexicon would have the entries in Table 5. 2.3 Filtering with forced-Viterbi Given a lexicon with tagged surface pronunciations, the next required step is to count how many times each of these pronunciations occurs in a speech corpus. The algorithm we use has two steps; PHONETIC LIKELIHOOD ESTIMATION and FORCED- VITERBI ALIGNMENT. In the first step, PHONETIC LIKELIHOOD ESTI- MATION, we examine each 20ms frame of speech data, and probabilistically label each frame with the phones that were likely to produce the data. That is, for each of the 54 phones in our phone-set, we compute the probability that the slice of acoustic data was produced by that phone. The result of this labeling is a vector of phone-likelihoods for each acoustic frame. Our algorithm is based on a multi-layer percep- tron (MLP) which is trained to compute the condi- tional probability of a phone given an acoustic fea- ture vector for one frame, together with 80 ms of surrounding context. Bourlard ~ Morgan (1991) 3 bcl b ah dx ax:+BPU +FL1; +CWtl +FL1 +RVl; +PLX +FL1 +RVl bcl bah dx axr: +TTS +FL1; +BPU +FL1; +CI~J +FL1 -RVl +RV3; +LIM +FL1; +PLX +FL1 -RV1 +RV3 bcl b ah tel t ax:+BPU -FL1; +C~d -FL1 +RV1; +PLX -FL1 +RV1 bcl bah tel t axr:÷TT$ -FL1; +BPU -FL1; +C/fiLl -FL1 -RVl +RV3; +LIM -FL1; +PLX -FL1 -RVl +KV3 bcl bah tcl t er:+CMrd -RVl -RV3; +PLX -RVl -RV3 Table 5: Resulting tagged entries and Renals et al. (1991) show that with a few as- sumptions, an MLP may be viewed as estimating the probability P(ql x) where q is a phone and x is the input acoustic speech data. The estimator consists of a simple three-layer feed forward MLP trained with the back-propagation algorithm (see Figure 2). The input layer consists of 9 frames of in- put speech data. Each frame, representing 10 msec of speech, is typically encoded by 9 PLP (Hermansky 1990) coefficients, 9 delta-PLP coefficients, 9 delta- delta PLP coefficients, delta-energy and delta-delta- energy terms. Typically, we use 500-4000 hidden units. The output layer has one unit for each phone. The MLP is trained on phonetically hand-labeled speech (TIMIT), and then further trained by an it- erative Viterbi procedure (forced-Viterbi providing the labels) with Wall Street Journal corpora. v b m r z Output: ~ 54 Phones ~r...-.,-'~f-,~--.,---,f'xr-.~-.~-"'~"~ Hidden Layer: 500-4000 Fully Connected Units Input Layer: 9 Frames ~_ 0, - - - - ,., " of 20RASTA features, '- . . . . total 180 units L e f t ~ Current l F r a m e - ~ ~ , ,. r_ --" ......... ", , , , t . . . . 
Right Context I I I I I I I I I -~,,:-Y~a: -Zor., -tam tats 2ores Saw ~t~as Figure 2: Phonetic Likelihood Estimator The probability P(qlx) produced by the MLP for each frame is first converted to the likelihood P(xlq ) by dividing by the prior P(q), according to Bayes' rule; we ignore P(z) since it is constant here: P(x l q) - P(q l x)P(z) P(q) The second step of the algorithm, FORCED- VITERBI ALIGNMENT, takes this vector of likelihoods for each frame and produces the most likely phonetic string for the Sentence. If each word had only a sin- gle pronunciation and if each phone had some fixed duration, the phonetic string would be completely determined by the word string. However, phones vary in length as a function of idiolect and rate of speech, and of course the very fact of optional phono- logical rules implies multiple possible pronunciations for each word. These pronunciations are encoded in a hidden Markov model (HMM) for each word. The Viterbi algorithm is a dynamic programming search, which works by computing for each phone at each frame the most likely string of phones ending in that phone. Consider a sentence whose first two words are "of the", and assume the simplified lexicon in Figure 3. P( ax I start }-......,, ~ . 0 ~'~ 66~-0~9 ~ ~1.0 ~the ~ Figure 3: Pronunciation models for "of" and "the" Each pronunciation of the words 'of' and 'the' is represented by a path through the probabilistic automaton for the word. For expository simplic- ity, we have made the (incorrect) assumption that consonants have a duration of i frame, and vowel a duration of 2 or 3 frames. The algorithm analyzes the input frame by frame, keeping track of the best path of phones. Each path is ranked by its proba- bility, which is computed by multiplying each of the transition probabilities and the phone probabilities for each frame. Figure 4 shows a schematic of the path computation. The size of each dot indicates the magnitude of the local phone likelihood. The max- imum path at each point is extended; non-maximal paths are pruned. The result of the forced-Viterbi alignment on a single sentence is a phonetic labeling for the sen- tence (see Figure 5 for an example), from which we 4 ah -ah-v-dh-ax-ax-ax END six .~.~~ P(ax I dh)= .7 ly dh P(v J acoustlcs) = .9 ~ 0 )~ax'ax'ax'v'dx-iy-iy v x ''~" / "" ~,, \P(v I oh)= .4 START P(ah I START)= .5 Figure 4: Computing most-likely phone paths in a Forced-Viterbi alignment of 'of the' new york city's fresh nyuw yaorkclk sihtcltiyz frehsh kills landfill on kclkihlz laendclfihl aan staten island for one steltaetclten aylaxndcl faor wahn dumps four million dcldahmpclps faor mihlyixn gallons of toxic gclgaelaxnz axf tcltaakclksixkcl liquid into nearby lihkclkwihdcl entclt uw nihrbclbay freshwater streams every frehshwaodxaxr stclt riymz eh vriy day dcl d ey Figure 5: A forced-Viterbi phonetic labelling for a Wall Street Journal sentence can produce a phonetic pronunciation for each word. By running this algorithm on a large corpus of sen- tences, we produce a list of "bottom-up" pronunci- ations for each word in the corpus. 2.4 Rule probability estimation The rule-tagged surface lexicon described in §2.1 and the counts derived from the forced-Viterbi described in §2.3 can be combined to form a tagged lexicon that also has counts for each pronunciation of each word. 
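A minimal sketch of the filtering step described above: MLP posteriors are converted to scaled likelihoods and a single fixed pronunciation is Viterbi-aligned to the frames. This is an illustrative simplification; the actual system searches over all the alternative pronunciations encoded in the word HMMs, and the function names, the absence of transition probabilities and the assumption of strictly positive likelihoods are ours.

import numpy as np

def scaled_likelihoods(posteriors, priors):
    # Convert MLP posteriors P(q|x) to scaled likelihoods P(x|q), up to P(x).
    return posteriors / priors

def force_align(like, phone_ids):
    # Viterbi-align a fixed phone sequence to T frames; each phone occupies one
    # or more consecutive frames (a self-loop model). Returns one phone per frame.
    T, _ = like.shape
    S = len(phone_ids)
    NEG = -np.inf
    delta = np.full((T, S), NEG)
    back = np.zeros((T, S), dtype=int)
    delta[0, 0] = np.log(like[0, phone_ids[0]])
    for t in range(1, T):
        for s in range(S):
            stay = delta[t - 1, s]
            move = delta[t - 1, s - 1] if s > 0 else NEG
            best = s if stay >= move else s - 1
            delta[t, s] = max(stay, move) + np.log(like[t, phone_ids[s]])
            back[t, s] = best
    states = [S - 1]                     # backtrace from the final phone
    for t in range(T - 1, 0, -1):
        states.append(back[t, states[-1]])
    states.reverse()
    return [phone_ids[s] for s in states]

T, phones = 6, 3
post = np.full((T, phones), 1.0 / phones)          # dummy flat posteriors
path = force_align(scaled_likelihoods(post, np.ones(phones) / phones), [0, 2, 1])
print(path)    # [0, 2, 1, 1, 1, 1]: the three phones visited in order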
Following is a sample entry from this lexicon for the word Adams which shows the five derivations for its single pronunciation: Adams: ae dz az m z: count=2 derivation 1: +ATS +FL1 -SL2 derivation 2: +BPU +FL1 -$L2 derivation 3: +¢MU +FL1 +RV1 -SL2 derivation 4: +LIH +FL1 -SL2 derivation 5: +PLX +FL1 -SL2 Each pronunciation of each word in this lexicon is annotated with rule tags. Since each pronunciation may be derived from different source dictionaries or via different rules, each pronunciation of a word may contain multiple derivations, each consisting of the list of rules which applied to give the pronunciation from the base form. These tags are either positive, indicating that a rule applied, or negative, indicating that it did not. To produce the initial rule probabilities, we need to count the number of times each rule applies, out of the number-of times it had the potential to apply. If each pronunciation only had a single derivation, this would be computed simply as follows: P(R) = Z v~PRON Ct (Rule R applied in p) Ct (Rule R could have applied in p) This could be computed from the tags as : Ct(+R tags in p) --P-(-R) = Z Ct(-I-R tags in p) -I- Ct(-R tags in p) v~PRON However, since each pronunciation can have mul- tiple derivations, the counts for each rule from each derivation need to be weighted by the probability of the derivation. The derivation probability is com- puted simply by multiplying together the probability of each of the applications or non-applications of the rule. Let • DERIVS(p} be the set of all derivations of a pronunciation p, • POSR ULES(p, r, d) be 1.0 if derivation d of pro- nunciation p uses rule r, else 0. • ALLRULES(p,r) be the count of all derivations of p in which rule r could have applied (i.e. in which d has either a +R or -R tag). • P(d]p) be the probability of the derivation d of pronunciation p. • PRON be the set of pronunciations derived from the forced-Viterbi output. Now a single iteration of the rule-probability al- gorithm must perform the following computation: POSRULES(p,r,d) P(r) = ~_~ ~ P(dlP) ALLRULES(p,r) pePRON aeDERIVS(p) Since we have no prior knowledge, we make the zero-knowledge initial assumption that P(d[p) = 1 The algorithm can the be run as a [DERIVS(p)I" successive estimation-maximization to provide suc- cessive approximations to P(dlp ). For efficiency rea- sons, we actually compute the probabilities of all rules in parallel, as shown in Figure 6. For each word/pron pair P E PRON from -- - forced-Viterbi alignment Let DERIVS(P) be the set of rule derivations of P For every d q DERIVS(P) For every rule R 6 d if (R = +RULE) then 1 ruleapp{RULE} += [DERIVS(P)[ else rulenoapp{RULE} += 1 [DERIVS(P)I For every rule RULE P( RU L E) = r,te,pp( RU L~) ruleapp( RU L E )Truleapp( RU L E ) Figure 6: Parallel computation of rule probabilities 3 Results We ran the estimation algorithm on 7203 sea, noes (129,864 words) read from the Wall Street Journal. The corpus (!993 WSJ Hub 2 (WSJ 0) training data) -consisted of 12 hours of speech, and had 8916 unique words. Table 6 shows the probabilities for the ten phonological rules described in §2.2. Note that all of the rules are indeed quite op- tional; even the most commonly-employed rules, like flapping and h-voicing, only apply on average about 90% of the time. Many of the other rules, such as the reduced-vowel or reduced-liquid rules, only ap- ply about 50% of the time. 
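The counting scheme of section 2.4 (Figure 6 above) can be rendered in Python as below, using the zero-knowledge assumption that each derivation of a pronunciation is equally likely. The data structures and the inclusion of the forced-Viterbi occurrence counts as weights are our own illustrative choices; iterating the procedure with re-estimated derivation probabilities gives the estimation-maximization version described in the text.

from collections import defaultdict

def rule_probabilities(pron_counts, derivations):
    # pron_counts: {(word, pron): count} from the forced-Viterbi output.
    # derivations: {(word, pron): [ [tags...], ... ]}, each tag '+RULE' or '-RULE'.
    applied = defaultdict(float)
    possible = defaultdict(float)
    for key, count in pron_counts.items():
        derivs = derivations.get(key, [])
        if not derivs:
            continue
        w = 1.0 / len(derivs)             # uniform P(d|p) over the derivations of p
        for tags in derivs:
            for tag in tags:
                rule = tag[1:]
                possible[rule] += w * count
                if tag.startswith("+"):
                    applied[rule] += w * count
    return {r: applied[r] / possible[r] for r in possible}

# Toy bookkeeping example, loosely modelled on the Adams entry above:
counts = {("adams", "pron1"): 2}
derivs = {("adams", "pron1"): [
    ["+FL1", "-SL2"], ["+FL1", "-SL2"], ["+FL1", "+RV1", "-SL2"],
    ["+FL1", "-SL2"], ["+FL1", "-SL2"]]}
print(rule_probabilities(counts, derivs))
# {'FL1': 1.0, 'SL2': 0.0, 'RV1': 1.0}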
We next attempted to judge the reliability of our automatic rule-probability estimation algorithm by comparing it with hand transcribed pronuncia- tions. We took the hand-transcribed pronunciations of each word in TIMIT, and computed rule probabil- ities by the same rule-tag counting procedure used for our forced-Viterbi output. Figure 7 shows the fit between the automatic and hand-transcribed proba- bilities. Since the TIMIT pronunciations were from a completely different data collection effort with a very different corpus and speakers, the closeness of the probabilities is quite encouraging. Figure 8 breaks down our automatically generated rule probabilities for the Wall Street Journal corpus Percent of Phonological Rule Use, WSJO vs. TIMIT Percent ' 00 I" I 90.00 80.00 i 70.00 i 50.00 20.00 10.00 0.00 VHI Rule Figure 7: Automatic vs Hand-transcribed Probabil- ities for Phonological Rules into male and female speakers. Notice that many of the rules seem to be employed more often by men than by women. For example, men are about 5% more likely to flap, more likely to reduce vowels ih ._." 1 and er, and slightly more likely to reduce Lqums and nasals. --~ Since ~'- ~,~ese are coarticulation or fast-speech ef- fects, our initial hypothesis was that the differ- ence between male and female speakers was due to a faster speech-rate by males. By computing the weighted average seconds per phone for male and female speakers, we found that females had an av- erage of 71 ms/phone, while males had an average of 68 ms/phone, a difference of about 4%, quite cor- related with the similar differences in reduction and flapping. 4 Related Work Our algorithm for phonological rule probability esti- mation synthesizes and extends earlier work by (Co- hen 1989) and (Wooters 1993). The idea of using optional phonological rules to construct a speech- recognition lexicon derives from Cohen (1989), who applied optional phonological rules to a baseform dictionary to produce a surface lexicon and then used TIMIT to assign probabilities for each pronun- ciation. The use of a forced-Viterbi speech decoder to discover pronunciations from a corpus was pro- posed by Wooters (1993). Weseniek & Sehiel (1994) independently propose a very similar forced-Viterbi- decoder-based technique which they use for measur- ing the accuracy of hand-written phonology. 6 Name Code Reductions Mid vowels RV1 High vowels RV2 R-vowel RV3 Syllabic n SL1 Syllabic m SL2 Syllabic 1 SL3 Syllabic r SL4 Flapping FL1 Flapping-r FL2 H-voicing VH 1 Rule -stress [aa ae ah ao eh er ey ow uh]--~ ax -stress [iy ih uw] ---* ix -stress er ---* axr [ax ix] n -+ en [ax ix] in ---. em [ax ix] 1 ~ el [ax ix] r ~ axr [tcl dcll It d]---* dx/V __ [ax ix axr] i [tcl dcl] It d]-~ dx/Vr ~ lax ix axr] , • hh --* hv / [+voice] __ [+voice] Table 6: Results of the Rule-Probability-Estimation Algorithm Pr .60 .57 .74 .35 .35 .72 .77 .87 .92 .92 Percent of Phonological Rule Use Percent 90.00 m .... 80.00 ,i 70.00 .... 1 60.00 m 50.00 40.00 20.00 I0.00 0.00 1 1 2 3 1 Rule m female llllll Figure 8: Male vs Female Probabilities for Phono- logical Rules Chen (1990) and Riley (1991) model the relation- ship between phonemes and their Mlophonic realiza- tions by training decision trees on TIMIT data. A decision tree is learned for each underlying phoneme specifying its .surface realization in different con- texts. These completely automatic techniques, re- quiring no hand-written rules, can allow a more fine-grained analysis than our rule-based algorithm. 
However, as a consequence, it is more difficult to extract generalizations across classes of phonemes to which rules can apply. We think that a hybrid between a rule-based and a decision-tree approach could prove quite powerful. 5 Conclusion and Future Work Although the paradigm of exploratory computa- tional phonology is only in its infancy, we believe our rule-probability estimation algorithm to be a new and useful instance of the use of probabilistic techniques and spoken-language corpora in compu- tational linguistics. In Tajchman et al. (1995) we report on the results of our algorithm on speech recognition performance. We plan in future work to address a number of shortcomings of these ex- periments, for example including some spontaneous speech corpora, and looking at a wider variety of rules. In addition, we have extended our algorithm to in- duce new pronunciations which generalize over pro- nunciations seen in the corpus (Wooters & Stolcke 1994). We now plan to augment our probability es- timation to use the pronunciations from this new HMM-induction-based generalization step. This will require extending our tag-based probability estima- tion step to parse the phone strings from the forced- Viterbi. In other current work we have also been using this algorithm to model the phonological component of the accent of non-native speakers. Finally, we hope in future work to be able to combine our rule- based approach with more bottom-up methods like the decision-tree or phonological parsing algorithms to induce rules as well as merely training their prob- abilities• Acknowledgments Thanks to Mike Hochberg, Nelson Morgan, Steve Re- nals, Tony Robinson, Florian Schiel, Andreas Stolcke, and Chuck Woofers. This work was partially funded by ICSI and an SRI subcontract from ARPA contract MDA904-90-C-5253. Partial funding also came from ES- PRIT project 6487 (The Wernicke project). References BOURLARD, H., & N. MORGAN. 1991. Merging mul- tilayer perceptrons & Hidden Markov Models: 7 Some experiments in continuous speech recog- nition. In Artificial Neural Networks: Advances and Applications, ed. by E. Gelenbe. North Hol- land Press. CHEN, F. 1990. Identification of contextual factors for pronounciation networks. In IEEE ICASSP- 90,753-756. CMU, 1993. The Carnegie Mellon Pronouncing Dic- tionary v0.1. Carnegie Mellon University. COHEN, M. H., 1989. Phonological Structures for Speech Recognition. University of California, Berkeley dissertation. COLE, R. A., K. ROGINSKI, ~5 M. FANTY., 1994. The OGI Numbers Database. Oregon Graduate Institute. COMLEX, 1994. The COMLEX English Pronounc- ing Dictionary. copyright Trustees of the Uni- versity of.Pennsylvania. DAELEMANS, WALTER, STEVEN GILLIS, ~ GERT DURmUX. 1994. The acquisition of stress: A data-oriented approach. Computational Lin- guistics 208.421-451. ELLISON, T. MARK, 1992. The Machine Learning of Phonological Structure. University of Western Australia dissertation. GASSER, MICHAEL, 1993. Learning words in time: Towards a modular connectionist account of the acquisition of receptive morphology. Draft. HERMANSKY, H. 1990. Perceptual linear predictive (pip) analysis of speech. J. Acoustical Society of America 87. LAMEL, LORI, 1993. The Limsi Dictionary. NIST, 1993. Continuous Speech Recognition Corpus (WSJ 0). National Institute of Standards and Technology Speech Disc 11-1.1 to 11-3.1. RENALS, S., N. MORGAN, H. BOURLARD, M. CO- HEN, H. FRANCO, C. WOOTERS, ~ P. KOHN. 1991. Connectionist speech recognition: Sta- tus and prospects. 
Technical Report TR-91-070, ICSI, Berkeley, CA.
RILEY, MICHAEL D. 1991. A statistical model for generating pronunciation networks. In IEEE ICASSP-91, 737-740.
ROBINSON, ANTHONY, 1994. The British English Example Pronunciation Dictionary, v0.1. Cambridge University.
TAJCHMAN, GARY, ERIC FOSLER, & DANIEL JURAFSKY. 1995. Building multiple pronunciation models for novel words using exploratory computational phonology. To appear in Eurospeech-95.
TIMIT, 1990. TIMIT Acoustic-Phonetic Continuous Speech Corpus. National Institute of Standards and Technology Speech Disc 1-1.1. NTIS Order No. PB91-505065.
WESENICK, MARIA-BARBARA, & FLORIAN SCHIEL. 1994. Applying speech verification to a large data base of German to obtain a statistical survey about rules of pronunciation. In ICSLP-94, 279-282.
WOOTERS, CHARLES C., 1993. Lexical Modeling in a Speaker Independent Speech Understanding System. Berkeley: University of California dissertation. Available as ICSI TR-92-062.
WOOTERS, CHUCK, & ANDREAS STOLCKE. 1994. Multiple-pronunciation lexical modeling in a speaker-independent speech understanding system. In ICSLP-94.
Features and Agreement Sam Bayer and Mark Johnson* Cognitive and Linguistic Sciences, Box 1978 Brown University { b ayer,mj } @cog.brown.edu Abstract This paper compares the consislency- based account of agreement phenomena in 'unification-based' grammars with an implication-based account based on a sim- ple feature extension to Lambek Catego- rim Grammar (LCG). We show that the LCG treatment accounts for constructions that have been recognized as problematic for 'unification-based' treatments. 1 Introduction This paper contrasts the treatment of agreement phenomena in standard complex feature structure or 'unification-based' grammars such as HPSG (Pol- lard and Sag, 1994) with that of perhaps the sim- plest possible feature extension to Lambek Catego- rial Grammar (LCG) (Lambek, 1958). We iden- tify a number of situations where the two accounts make different predictions, and find that gener- ally the LCG account is superior. In the pro- cess we provide analyses for a number of construc- tions that have been recognized as problematic for 'unification-based' accounts of agreements (Zaenen and Karttunen, 1984; Pullum and Zwicky, 1986; In- gria, 1990). Our account builds on the analysis of coordination in applicative categorial grammar in Bayer (1994) and the treatment of Boolean connec- tives in LCG provided by Morrill (1992). Our anal- ysis is similiar to that proposed by Mineur (1993), but differs both in its application and details. The rest of the paper is structured as follows. The next section describes the version of LCG we use in this paper; for reasons of space we assume familiar- ity with the treatment of agreement in 'unification- based' grammars, see Shieber (1986) and Pollard and Sag (1994) for details. Then each of the follow- *We would like to thank Bob Carpenter, Pauline Ja- cobson, John Maxwell, Glynn Morrill and audiences at Brown University, the University of Pennsylvania and the Universit£t Stuttgart for helpful comments on this work. Naturally all errors remain our own. ing sections up to the conclusion discusses an impor- tant difference between the two approaches. 2 Features in Lambek Categorial Grammar In LCG semantic interpretation and long distance dependencies are handled independently of the fea- ture system, so agreement phenomena seem to be the major application of a feature system for LCG. Since only a finite number of feature distinctions need to be made in all the cases of agreement we know of, we posit only a very simple feature system here. Roughly speaking, features will be treated as atomic propositions (we have no need to separate them into attributes and values), and a simple cat- egory will be a Boolean combination of such atomic 'features' (since we have no reason to posit a re- cursive feature structures either). In fact we are agnostic as to whether more complex feature sys- tems for LCG are linguistically justified; in any event Dorre et. al. (1994) show how a full attribute-value feature structure system having the properties de- scribed here can be incorporated into LCG. Following the standard formulation of LCG, we regard the standard LCG connectives '/' and 'V as directed implications, so we construct our system so that a//~ fl~ can combine to form a if fl' is logically stronger than/~. Formally, we adopt Morrill's treatment (Morrill, 1992) of the (semantically impotent) Boolean con- nectives '^' and 'v' (Morrill named these 'lq' and '11' respectively). 
Given a set of atomic features 5, we define the set of feature terms 7- and categories g as follows, where '/' and 'V are the standard LCG forward and backward implication operators. 7- ::= Y= + 7-^7- + 7-v7- C ::= 7- + C/C + C\¢ In general, atomic categories in a standard catego- rim grammar will be replaced in our analyses with formulae drawn from 7-. For example, the NP Kim might be assigned by the lexicon to the category np^sg^3, the verb sleeps to the category s\npnsg^3, 70 and the verb slept (which does not impose person or number features on its subject) to the category s\np. To simplify the presentation of the proofs, we for- mulate our system in natural deduction terms, and specify the properties of the Boolean connectives us- ing the single inference rule P, rather than providing separate rules for each connective. ~P where I- in the calculus. 1 ¢ ¢ propositional The rule P allows us to replace any formula in T with a logically weaker one. For example, since Kim is assigned to the category np^sgA3, then by rule P it will belong to np as well. Finally, we assume the standard LCG introduc- tion and elimination rules for the directed implica- tion operators. A/B B B A\B A /~ A [B]" [B] n A A A/B ~in A\B \i~ For example, the following proof of the well- formedness of the sentence Kim slept can be derived using the rules just given and the lexical assignments described above. Kim np^sg^3 slept P up s\np 8 This example brings out one of the fundamental dif- ferences between the standard treatment of agree- ment in 'unification-based' grammar and this treat- ment of agreement in LCG. In the 'unification-based' accounts agreement is generally a symmetric rela- tionship between the agreeing constituents: both agreeing constituents impose constraints on a shared agreement value, and the construction is well-formed iff these constraints are consistent. However, in the LCG treatment of agreement pro- posed here agreement is inherently asymmetric, in 1Because conjunction and disjunction are the only connectives we permit, it does not matter whether we use the classical or intuitionistic propositional calcu- lus here. In fact, if categories such as np and ap are 'decomposed' into the conjunctions of atomic features +nounA--verb and q-noun^+verb respectively as in the Sag et. at. (1985) analysis discussed below, disjunction is not required in any of the LCG analyses below. How- ever, Bayer (1994) argues that such a decomposition is not always plausible. that an argument must logically imply, or be sub- sumed by, the antecedent of the predicate it com- bines with. Thus in the example above, the rule P could be used to 'weaken' the argument from npAsgA3 to rip, but it would not allow np (with- out agreement features) to be 'strengthened' to, say, npA SgA 3. Abstracting from the details of the feature sys- tems, we can characterize the 'unification-based' ap- proach as one in which agreement is possible be- tween two constituents with feature specifications ¢ and ¢ iff ¢ and ¢ are consistent, whereas the LCG approach requires that the argument ¢ implies the corresponding antecedent ¢ of the predicate (i.e., Interestingly, in cases where features are fully specified, these subsumption and consistency re- quirements are equivalent. 
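The contrast, and the equivalence for fully specified values, can be illustrated with a small Python sketch in which a feature specification is a flat attribute-value dictionary; the encoding and the particular specifications are illustrative simplifications of the categories used above.

def consistent(a, b):
    # Unification-style agreement: the two specifications must not clash.
    return all(b[f] == v for f, v in a.items() if f in b)

def implies(arg, antecedent):
    # LCG-style agreement: the argument must entail the predicate's antecedent.
    return all(arg.get(f) == v for f, v in antecedent.items())

kim         = {"cat": "np", "num": "sg", "per": "3"}   # np^sg^3
sleeps_subj = {"cat": "np", "num": "sg", "per": "3"}   # antecedent of sleeps
slept_subj  = {"cat": "np"}                            # antecedent of slept
bare_np     = {"cat": "np"}                            # an NP with no agreement features

print(consistent(kim, sleeps_subj), implies(kim, sleeps_subj))         # True True
print(consistent(bare_np, sleeps_subj), implies(bare_np, sleeps_subj)) # True False
print(implies(kim, slept_subj))                                        # True

With fully specified values on both sides the two checks agree; with an underspecified argument, consistency still holds but implication fails, which is just the observation above that np cannot be strengthened to np^sg^3.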
More precisely, say that a formula ¢ from a feature constraint language fixes an atomic feature constraint X iff ¢ ~ X or ¢ -~X- For example, in single-valued feature systems (person) = 1 and (person) = 3 both fix (person) = 1, (person) = 2, (person) = 3, etc., and in general all fully-specified agreement constraints fix the same set of formulae. Now let ¢ and ¢ be two satisfiable formulae that fix the same set of atomic feature constraints. Then A ¢ is consistent iff ¢ ~ ¢. To see this, note that because ¢ and ¢ fix the same set of formulae, each condition holds iff ¢ and ¢ are elementarily equivalent (i.e., for each feature constraint X, ¢ ~ X iff ¢ ~ X)- However, the role of partial agreement feature specifications in the two systems is very different. The following sections explore the empirical conse- quences of these two approaches. We focus on co- ordination phenomena because this is the one area of the grammar where underspecified agreement fea- tures seem to play a crucial linguistic role, and can- not be regarded merely as an abbreviatory device for a disjunction of fully-specified agreement values. 3 Coordination and agreement asymmetries Interestingly, the analysis of coordination is the one place where most 'unification-based' accounts aban- don the symmetric consistency-based treatment of agreement and adopt an asymmetric subsumption- based account. Working in the GPSG framework Sag et. al. (1985) proposed that the features on a conjunction must be the most specific category which subsumes each conjunct (called the general- ization by Shieber (1992)). Shieber (1986) proposed a weaker condition, namely that the features on the conjunction must subsume the features on each con- junct, as expressed in the annotated phrase struc- 71 VP bec~rae wealthy and a Republican wealthy a Republican and np ap P became npvap eonj npvap vp/npvap npvap vp Figure 2: The LCG analysis of (2b). ,p GO Figure 1: The feature structure subsumption analy- sis of (2b). ture rule below (Shieber, 1992).2 In all of the exam- pies we discuss below, the features associated with a conjunction is the generalization of the features associated with each of its conjuncts, so our conclu- sions are equally valid for both the generalization and subsumption accounts of coordination. (1) Xo , Xl conj X2 where X0 E X1 and X0 E X2 Consider the sentences in (2). Decomposing the cat- egories N(oun) and A(djective) into the Boolean- valued features {(noun) = +,(verb) = -} and {(noun) = +, (verb) = +} respectively, the fact that became can select for either an NP or an AP comple- ment (2a) can be captured by analysing it as subcat- egorizing for a complement whose category is under- specified; i.e., its complement satisfies (noun) = +, and no constraint is imposed on the verb feature. (2) a. Kim [v became ] [hv wealthy ] / [NP a Re- publican ] b. Kim [vP [v became ] lAP wealthy ] and [NP a Republican ] ] Now consider the coordination in (2b). Assum- ing that became selects the underspecified category (noun) = +, the features associated with the coor- dination subsume the features associated with each coordinate, as required by rule (1), so (2b) has the well-formed structure shown in Figure 1. On the other hand, a verb such as grew which selects solely AP complements (3a) requires that its complement satisfies (noun) = +, (verb) = +. 
Thus the features on the coordinate structure in (3b) must include (verb) = + and so do not subsume the (verb) = - feature on the NP complement, correctly predicting the ungrammatieality of (3b). (3) a. Kim grew lAP wealthy]/*[Np a Republican] 2Note that the LFG account of coordination provided by Kaplan and Maxwell (1988) differs significantly from both the generalization and the subsumption accounts of coordination just mentioned, and does not generate the incorrect predictions described below. wealthy a Republican ap and np .p p grew npvap conj npvap 'CO vp/ap npvap Figure 3: A blocked LCG analysis of the ungram- matical (3b) b. *Kim [vP [v grew ] [hP wealthy ] and [r~P a Republican ] ] Our LCG account analyses these constructions in a similar way. Because the LCG account of agree- ment has subsumption 'built in', the coordination rule merely requires identity of the conjunction and each of the conjuncts. A conj A CO A Condition: No undischarged assumptions in any conjunct. 3 We provide an LCG derivation of (2b) in Fig- ure 2. Roughly speaking, rule P allows both the AP wealthy and the NP a Republican to 'weaken' to npvap, so the conjunction satisfies the antecedent of the predicate became. (This weakening also takes place in non-coordination examples such as Kim be- came wealthy). On the other hand, (3b) is correctly predicted to be ill-formed because the strongest pos- sible category for the coordination is npvap, but this does not imply the 'stronger' ap antecedent of grew, so the derivation in Figure 3 cannot proceed to form a vp. Thus on these examples, the feature-based sub- sumption account and the LCG of complement co- ordination constructions impose similiar feature con- straints; they both require that the predicate's fea- ture specification of the complement subsumes the features of each of the arguments. In the feature- based account, this is because the features associ- ated with a conjunction must subsume the features 3This condition in effect makes conjunctions into is- lands. Morrill (1992) shows how such island constraints can be expressed using modal extensions to LCG. 72 associated with each conjunct, while in the LCG ac- count the features associated with the complement specification in a predicate must subsume those as- sociated with the complement itself. Now consider the related construction in (4) in- volving conjoined predicates as well conjoined argu- ments. Similar constructions, and their relevance to the GPSG treatment of coordination, were first discussed by Jacobson (1987). In such cases, the feature-based subsumption account requires that the features associated with the predicate conjunction subsume those associated with each predicate con- junct. This is possible, as shown in Figure 4. Thus the feature structure subsumption account incor- rectly predicts the well-formedness of (4). (4) *Kim [ grew and remained ] [ wealthy and a Republican ]. Because the subsumption constraint in the LCG analysis is associated with the predicate-argument relationship (rather than the coordination construc- tion, as in the feature-based subsumption account), an LCG analysis paralleling the one given in Figure 4 does not exist. By introducing and withdrawing a hypothetical ap constituent as shown in Figure 5 it is possible to conjoin grew and remained, but the re- sulting conjunction belongs to the category vp/ap, and cannot combine with the wealthy and a Repub- lican, which belongs to the category npvap. 
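For the flat fragment at issue here, the effect of rule P can be mimicked by reading each category as the set of basic categories it is true of, so that implication is set inclusion and weakening is movement to a superset. The following Python sketch is our own simplification of the type logic, not part of the formal system, but it reproduces the predictions for (2b), (3b) and (4).

def implies(a, b):
    # a implies b iff every basic category a covers, b covers too
    return a <= b

def coordinate(conjuncts):
    # the strongest category that every conjunct can be weakened to
    result = set()
    for c in conjuncts:
        result |= c
    return result

NP, AP = {"np"}, {"ap"}
became_arg = {"np", "ap"}          # became selects np v ap
grew_arg = {"ap"}                  # grew selects only ap

coord = coordinate([AP, NP])       # "wealthy and a Republican"
print(implies(coord, became_arg))  # True  -> (2b) is derivable
print(implies(coord, grew_arg))    # False -> (3b) is blocked

# Conjoining grew and remained forces a single shared antecedent; antecedents
# can only be strengthened (the set shrinks), so the best shared antecedent is
# the intersection, and (4) is also blocked:
grew_and_remained_arg = grew_arg & became_arg
print(implies(coord, grew_and_remained_arg))   # False -> (4) is blocked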
Informally, while rule P allows the features associ- ated with an argument to be weakened, together with the introduction and elimination rules it permits the argument specifications of predicates to be strength- ened (e.f. the subproof showing that remained be- longs to category vp/ap in Figure 5). As we re- marked earlier, in LCG predicates are analysed as (directed) implicational formulae, and the argument features required by a predicate appear in the an- tecedent of such formulae. Since strengthening the antecedent of an implication weakens the implica- tion as a whole, the combined effect of rule P and the introduction and elimination rules is to permit the overall weakening of a category. 4 Consistency and agreement Complex feature structure analyses of agreement require that certain combinations of feature con- straints are inconsistent in order to correctly reflect agreement failure. For example, the agreement fail- ure in him runs is reflected in the inconsistency of the constraints (case) = acc and (case) = nora. In the LCG account presented above, the agreement fail- ure in him runs is reflected by the failure of acc to imply nora, not by the inconsistency of the features acc and nora. Thus in LCG there is no principled reason not to assign a category an apparently con- tradictory feature specification such as np^nom^acc (this might be a reasonable lexical category assign- ment for an NP such as Kim). COMP = V V finder und hilft VP NP ~OBJ = + ] Frauen Figure 6: The feature structure subsumption analy- sis of (5c). Consider the German examples in (5), cited by Pullum and Zwicky (1986) and Ingria (1990). These examples show that while the conjunction finder und hilft cannot take either a purely accusative (5a) or dative complement (5b), it can combine with the NP Frauen (5c), which can appear in both accusative and dative contexts. (5) a. * Er findet und hilft Miinner he find-ACC and help-DAT men-ACC b. * Er findet und hilft Kindern he find-ACC and help-DAT children-DAT c. Er findet und hilft he find-ACC and help-DAT Frauen women-ACC+DAT Contrary to the claim by Ingria (1990), these exam- ples can be accounted for straight-forwardly using the standard feature subsumption-based account of coordination. Now, this account presupposes the ex- istence of appropriate underspecified categories (e.g., in the English example above it was crucial that ma- jor category labels were decomposed into the fea- tures noun and verb). Similarly, we decompose the four nominal cases in German into the 'subcase' fea- tures obj (abbreviating 'objective') and dir (for 'di- rect') as follows. Nominative Accusative Dative Genetive {(air) = +, (obj) = -} = +, (obj) = +} {(air) = -, (obj) = +} {(d,r) = -, (obj) = -} By assigning the NPs Mh'nner and Kindern the fully specified case features shown above, and Frauen the underspecified case feature (obj) = +, both the fea- ture structure generalization and subsumption ac- counts of coordination fail to generate the ungram- matical (5a) and (hb), and correctly accept (5c), as shown in Figure 6. 73 VP COMP -- [ V ~ , coN, v 7 1 I I-VERB = +7 FVE = - 1 L NOUN=+IJ L I - j I L I- 'j I I ouN-+ NooN-+ grew and remained wealthy and a Republican Figure 4: The feature structure subsumption analysis of the ungrammatical (4). remained [ap] 1 .p vp/npvap npvap/e wealthy a Republican grew and vp ap and np vp/ap conj vp/ap /il npvap P conj npvap "P vp/ap eo npvap eo Figure 5: A blocked LCG analysis of the ungrammatical (4). 
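The analysis of (5) sketched in Figure 6 can be made concrete as follows; the dictionary encoding and the coordination_ok check are our own schematic rendering of the generalization/subsumption constraint on the shared complement slot.

# Sub-feature decomposition of German case from the table above:
# accusative = {dir:+, obj:+}, dative = {dir:-, obj:+}.
FINDET = {"dir": "+", "obj": "+"}      # findet governs accusative
HILFT  = {"dir": "-", "obj": "+"}      # hilft governs dative

MAENNER = {"dir": "+", "obj": "+"}     # Maenner: unambiguously accusative
KINDERN = {"dir": "-", "obj": "+"}     # Kindern: unambiguously dative
FRAUEN  = {"obj": "+"}                 # Frauen: underspecified (acc or dat)

def subsumes(general, specific):
    # general subsumes specific iff everything general says, specific says too
    return all(specific.get(f) == v for f, v in general.items())

def coordination_ok(np, verb_comps):
    # Rule (1) on the shared object slot: once the NP instantiates the
    # coordinated verbs' COMP value, that value must still subsume the COMP
    # of every conjunct.
    return all(subsumes(np, comp) for comp in verb_comps)

for name, np in [("Maenner", MAENNER), ("Kindern", KINDERN), ("Frauen", FRAUEN)]:
    print(name, coordination_ok(np, [FINDET, HILFT]))
# Maenner False (5a); Kindern False (5b); Frauen True (5c)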
As in the previous example, the LCG approach does not require the case feature to be decom- posed. However, as shown in Figure 7 it does as- sign the conjunction finder und hilfl to the cat- egory vp/np^ace^dat; hence the analysis requires that Frauen be assigned to the 'inconsistent' cat- egory np^accAdat. Such overspecified or 'inconsis- tent' features may seem ad hoc and unmotivated, but they arise naturally in the formal framework of Morrill's extended LCG. In fact, they seem to be necessary to obtain a linguistically correct description of coordination in German. Consider the ungrammatical 'double coor- dination' example in (6). Both the feature structure generalization and subsumption accounts incorrectly predict it to be well-formed, as shown in Figure 8. (6) * Er findet und hilft M~nner und he find-ACC and help-DAT men-ACC and Kindern children-DAT However, the LCG analysis systematically distin- guishes between Frauen, which is assigned to the cat- egory npAaccAdat, and Mdnner und Kindern, which is assigned to the weaker category np^(accvdat). Thus the LCG analysis correctly predicts (6) to be ungrammatical, as shown in Figure 9. The distinction between the categories npAacc^dat and np^(accvdat), and hence the existence of the appar- ently inconsistent categories, seems to be crucial to the ability to distinguish between the grammatical (5c) and the ungrammatical (6). 5 Conclusion This paper has examined some of the differences between a standard complex feature-structure ac- count of agreement, which is fundamentally orga- nized around a notion of consistency, and an ac- count in an extended version of LCG, in which agree- ment is fundamentally an asymmetric relationship. We have attempted to show that the LCG account of agreement correctly treats a number of cases of coordination which are problematic for the stan- dard feature-based account. Although we have not shown this here, the LCG account extends straight- forwardly to the cases of coordination and morpho- logical neutralization discussed by Zaenen and Kar- tunen (1984), Pullum and Zwicky (1986) and In- gria (1990). The nature of an appropriate feature system for LCG is still an open question. It is perhaps surpris- ing that the simple feature system proposed here can handle such complex linguistic phenomena, but additional mechanisms might be required to treat other linguistic constructions. The standard account of adverbial modification in standard LCG, for in- stance, treat.~ adverbs as functors. Because the verb 74 findet [npAaccAdat] I hilft [npAaccAdat] ~ P P vp/npAacc npAacc /~ vp/npAdat npAdat /e vp und vp vp/npAaccAdat /il conj vp/npAaccAdat ~iS Frauen vp/npAaccAdat ~o npaaccAdat vp Figure 7: The LCG analysis of (5c) VP OMP = v v [ ro~##+ll c]~J [ ro~+l l ~ N~ COMP = COMP = FOBJ = + F OBJ = + l CONJ LDIR =_ LDm=+ JJ LDm=-JJ LD,~=+J ] I I I I findet und hilft Manner und Kindern Figure 8: The feature structure subsumption analysis of the ungrammatical (6). findet [npAaccAdat] 1 hilft [npAaccAdat] 2 P P vp/npAacc npAacc vp/npAdat npAdat Miinner vp und vp npAacc und vp/npAaccAdat /il conj vp/npAaccAdat /i2 npA(accvdat)P conj vp/npAaccAdat Kindern npAdat npA(accvdat) P npA(accvdat) Figure 9: The blocked LCG analysis of the ungrammatical (6) 75 heading an adverbial modified VP agrees in number with its subject, the same number features will have to appear in both the antecedent and consequent of the adverb. 
Using the LCG account described above it is necessary to treat adverbs as ambiguous, assign- ing them to the categories (s\np^sg)\(s\np^sg) and ( s\ np^pl) \ ( s\ np^pl). There are several approaches which may eliminate the need for such systematic ambiguity. First, if the language of (category) types is extended to permit universally quantified types as suggested by Mor- rill (Morrill, 1992), then adverbs could be assigned to the single type VX.((s\np^X)\(s\np^X)). Second, it might be possible to reanalyse adjunction in such a way that avoids the problem altogether. For example, Bouma and van Noord (1994) show that assuming that heads subcategorize for adjuncts (rather than the other way around, as is standard) permits a particularly elegant account of the double infinitive construction in Dutch. If adjuncts in gen- eral are treated as arguments of the head, then the 'problem' of 'passing features' through adjunction disappears. The comparative computational complexity of both the unification-based approach and the LCG accounts is also of interest. Despite their simplic- ity, the computational complexity of the kinds of feature-structure and LCG grammars discussed here is largely unknown. Dorre et. al. (1992) showed that the satisfiability problem for systems of feature- structure subsumption and equality constraints is undecidable, but it is not clear if such problems can arise in the kinds of feature-structure gram- mars discussed above. Conversely, while terminat- ing (Gentzen) proof procedures are available for ex- tended LCG systems of the kind we presented here, none of these handle the coordination schema, and as far as we are aware the computational proper- ties of systems which include this schema are largely unexplored. References Samuel Bayer. 1994. The coordination of unlike cat- egories. Cognitive and Linguistic Sciences, Brown University. Gosse Bouma and Gertjan van Noord. 1994. Constraint-based categorial grammar. In The Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pages 147-154, New Mexico State University - Las Cruces. Jochen DSrre and William C. Rounds. 1992. On subsumption and semiunification in feature alge- bras. Journal of Symbolic Computation, 13:441- 461. Jochen DSrre, Dov Gabbay, and Esther KSnig. 1994. Fibred semantics for feature-based grammar logic. Technical report, Institute for Computational Lin- guistics, The University of Stuttgart. Robert J. P. Ingria. 1990. The limits of unification. In The Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics, pages 194-204, University of Pittsburgh. Pauline Jacobson. 1987. Review of generalized phrase structure grammar. Linguistics and Phi- losophy, 10(3):389-426. Ronald Kaplan and John T. Maxwell. 1988. Con- stituent coordination in lexical functional gram- mar. In The Proceedings of the 12th Interna. tional Conference on Computational Linguistics, page 297302. Joachim Lambek. 1958. The mathematics of sen- tence structure. American Mathematical Monthly, 65:154-170. Anne-Marie Mineur. 1993. Disjunctive gender features--a comparison between HPSG and CG. DFKI, Saarbriicken. Glyn V. Morrill. 1992. Type-logical grammar. Technical Report Report LSI-92-5-1~, Departa- ment de Llenguatges i sistemes informktics. Carl Pollard and Ivan Sag. 1994. Head-driven Phrase Structure Grammar. The University of Chicago Press, Chicago. Geoffrey K. Pullum and Arnold M. Zwicky. 1986. Phonological resolution of syntactic feature con- flict. Language, 62(4):751-773. 
Ivan A. Sag, Gerald Gazdar, Thomas Wasow, and Steven Weisler. 1985. Coordination and how to distinguish categories. Natural Language and Linguistic Theory, 3(2):117-171.
Stuart M. Shieber. 1986. An Introduction to Unification-based Approaches to Grammar. CSLI Lecture Notes Series, The University of Chicago Press, Chicago.
Stuart M. Shieber. 1992. Constraint-based Grammar Formalisms. The MIT Press, Cambridge, Massachusetts.
Annie Zaenen and Lauri Karttunen. 1984. Morphological non-distinctiveness and coordination. In Proceedings of the Eastern States Conference on Linguistics, volume 1, pages 309-320.
Encoding Lexicalized Tree Adjoining Grammars with a Nonmonotonic Inheritance Hierarchy Roger Evans Information Technology Research Institute University of Brighton rpe©itri, bton. ac. uk Gerald Gazdar School of Cognitive Computing Sciences University of Sussex geraldg©cogs, susx. ac. uk David Weir School of Cognitive ~z Computing Sciences University of Sussex dav±dw©cogs, susx. ac. uk Abstract This paper shows how DATR, a widely used formal language for lexical knowledge re- presentation, can be used to define an I_TAG lexicon as an inheritance hierarchy with in- ternal lexical rules. A bottom-up featu- ral encoding is used for LTAG trees and this allows lexical rules to be implemen- ted as covariation constraints within fea- ture structures. Such an approach elimina- tes the considerable redundancy otherwise associated with an LTAG lexicon. 1 Introduction The Tree Adjoining Grammar (lAG) formalism was first introduced two decades ago (3oshi et al., 1975), and since then there has been a steady stream of theoretical work using the formalism. But it is only more recently that grammars of non-trivial size have been developed: Abeille, Bishop, Cote & Scha- bes (1990) describe a feature-based Lexicalized Tree Adjoining Grammar ([_'lAG) for English which sub- sequently became the basis for the grammar used in the XTAG system, a wide-coverage [_TAG parser (Do- ran et al., 1994b; Doran et al., 1994a; XTAG Rese- arch Group, 1995). The advent of such large gram- mars gives rise to questions of efficient representa- tion, and the fully lexicalized character of the [TAG formalism suggests that recent research into lexical representation might be a place to look for answers (see for example Briscoe ef a/.(1993); Daelemans & Gazdar(1992)). In this paper we explore this sugge- stion by showing how the lexical knowledge repre- sentation language (LKRL) DA'lR (Evans & Gazdar, 1989a; Evans & Gazdar, 1989b) can be used to for- mulate a compact, hierarchical encoding of an [-'lAG. The issue of efficient representation for I_'rAG 1 is discussed by Vijay-Shanker & Schabes (1992), who 1As with all fully lexicMized grammar formalisms, there is really no conceptual distinction to be drawn in I_TAG between the lexicon and the grammar: tile gram- rnatical rules are just lexical properties. draw attention to the considerable redundancy in- herent in [-TAG lexicons that are expressed in a flat manner with no sharing of structure or properties across the elementary trees. For example, XTAG cur- rently includes over 100,000 lexemes, each of which is associated with a family of trees (typically around 20) drawn from a set of over 500 elementary trees. Many of these trees have structure in common, many of the lexemes have the same tree families, and many of the trees within families are systematically rela- ted in ways which other formalisms capture using transformations or metarules. However, the [TAG formalism itself does not provide any direct support for capturing such regularities. Vijay-Shanker & Schabes address this problem by introducing a hierarchical lexicon structure with mo- notonic inheritance and lexical rules, using an ap- proach loosely based on that of Flickinger (1987) but tailored for [TAG trees rather than HPSG sub- categorization lists. Becker (1993; 1994) proposes a slightly different solution, combining an inheritance component and a set of metarules 2. We share their perception of the problem and agree that adopting a hierarchical approach provides the best available solution to it. 
However, rather than creating a hier- archical lexical formalism that is specific to the [_TAG problem, we have used DATR, an LKR.L that is al- ready quite widely known and used. From an [TAG perspective, it makes sense to use an already availa- ble LKRL that was specifically designed to address these kinds of representational issues. From a DATR perspective, I_TAG presents interesting problems ari- sing from its radically lexicalist character: all gram- matical relations, including unbounded dependency constructions, are represented lexically and are thus open to lexical generalization. There are also several further benefits to be gai- ned from using an established general purpose LKRL such as DATR. First, it makes it easier to compare the resulting [TAG lexicon with those associated with other types oflexical syntax: there are existing DATR 2See Section 6 for further discussion of these approaches. 77 lexicon fragments for HPSG, PATR and Word Gram- mar, among others. Second, DATR is not restricted to syntactic description, so one can take advantage of existing analyses of other levels of lexical descrip- tion, such as phonology, prosody, morphology, com- positional semantics and lexical semantics 3. Third, one can exploit existing formal and implementation work on the language 4. 2 Representing LTAG trees S NPI VP V o NPI PP P o NPI Figure 1: An example LTAG tree for give The principal unit of (syntactic) information asso- ciated with an LTAG entry is a tree structure in which the tree nodes are labeled with syntactic categories and feature information and there is at least one leaf node labeled with a lexical category (such lexi- cal leaf nodes are known as anchors). For example, the canonical tree for a ditransitive verb such as give is shown in figure 1. Following LTAG conventions (for the time being), the node labels here are gross syntactic category specifications to which additional featural information may be added 5, and are anno- tated to indicate node type: <> indicates an anchor node, and I indicates a substitution node (where a 3See, for example, Bleiching (1992; 1994), Brown & Hippisley (1994), Corbett & Fraser (1993), Cahill (1990; 1993), Cahill &: Evans (1990), Fraser &= Corbett (in press), Gibbon (1992), Kilgarriff (1993), Kilgarriff & Gazdar (1995), Reinhard & Gibbon (1991). 4See, for example, Andry et al. (1992) on compila- tion, Kilbury et al. (1991) on coding DAGs, Duda & Geb- hardi (1994) on dynamic querying, Langer (1994) on re- verse querying, and Barg (1994), Light (1994), Light et al. (1993) and Kilbury et al. (1994) on automatic ac- quisition. And there are at least a dozen different DATR implementations available, on various platforms and pro- gramming languages. Sin fact, [TAG commonly distinguishes two sets of features at each node (top and bottota), but for simpli- city we shall assume just one set in this paper. fully specified tree with a compatible root label may be attached) 6. In representing such a tree in DATR, we do two things. First, in keeping with the radically lexica- list character of LTAG, we describe the tree structure from its (lexical) anchor upwards 7, using a variant of Kilbury's (1990) bottom-up encoding of trees. In this encoding, a tree is described relative to a parti- cular distinguished leaf node (here the anchor node), using binary relations paxent, left and right, re- lating the node to the subtrees associated with its parent, and immediate-left and -right sisters, enco- ded in the same way. 
Second, we embed the resulting tree structure (i.e., the node relations and type in- formation) in the feature structure, so that the tree relations (left, right and parent) become features. The obvious analogy here is the use of first/rest features to encode subcategorisation lists in frame- works like HPSG. Thus the syntactic feature information directly as- sociated with the entry for give relates to the label for the v node (for example, the value of its cat fea- ture is v, the value of type is emchor), while speci- fications of subfeatures of parent relate to the label of the vP node. A simple bottom-up DATR represen- tation for the whole tree (apart from the node type information) follows: Give: <cat> -- v <parent cat> = vp <parent left cat> =np <parent parent cat> = s <right cat> =np <right right cat> = p <right right parent cat> = pp <right right right cat> =np. This says that Give is a verb, with vp as its pa- rent, an s as its grandparent and an NP to the left of its parent. It also has an NP to its right, and a tree rooted in a P to the right of that, with a PP parent and NP right sister. The implied bottom-up tree structure is shown graphically in figure 2. Here the nodes are laid out just as in figure 1, but rela- ted via parent, left and right links, rather than the more usual (implicitly ordered) daughter links. Notice in particular that the right link from the object noun-phrase node points to the preposition node, not its phrasal parent - this whole subtree is itself encoded bottom-up. Nevertheless, the full tree structure is completely and accurately represented by this encoding. s LTAG's other tree-building operation is adjunetion, which allows a tree-fragment to be spliced into the body of a tree. However, we only need to concern ourselves here with the representation of the trees involved, not with the substitution/adjunction distinction. rThe tree in figure 1 has more than one anchor - in such cases it is generally easy to decide which anchor is the most appropriate root for the tree (here, the verb anchor). 78 np ° s arent vp l e f t / parent " np right ~ right k P PP arent np right Figure 2: Bottom-up encoding for Give Once we adopt this representational strategy, wri- ting an LTAG lexicon in DATR becomes similar to writing any other type of lexicalist grammar's le- xicon in an inheritance-based LKRL. In HPSG, for example, the subcategorisation frames are coded as lists of categories, whilst in LTAG they are coded as trees. But, in both cases, the problem is one of con- cisely describing feature structures associated with lexical entries and relationships between lexical ent- ries. The same kinds of generalization arise and the same techniques are applicable. Of course, the pre- sence of complete trees and the fully lexicalized ap- proach provide scope for capturing generalizations lexically that are not available to approaches that only identify parent and sibling nodes, say, in the lexical entries. 3 Encoding lexical entries Following conventional models of lexicon organisa- tion, we would expect Give to have a minimal syn- tactic specification itself, since syntactically it is a completely regular ditransitive verb. In fact none of the information introduced so far is specific to Give. So rather than providing a completely expli- cit DATR definition for Give, as we did above, a more plausible account uses an inheritance hierarchy defi- ning abstract intransitive, transitive and ditransitive verbs to support Give (among others), as shown in figure 3. 
This basic organisational structure can be expres- sed as the following DATR fragmentS: 8To gain the intuitive sense of this fragment, read a line such as <> --= VERB as "inherit everything from the definition of VERB", and a line such as <parent> == PPTREE:<> as "inherit the parent subtree from the de- finition of PPTREE'. Inheritance in DATR is always by default - locally defined feature specifications take prio- rity over inherited ones. VERB Die VERB+NP Eat VEKB+NP+PP VERB+NP+NP Give Spare Figure 3: The principal lexical hierarchy VERB: <> -- TREENODE <cat> == v <type> == anchor <parent> =s VPTREE:<>. VERB+NP: <> == VERB <right> == NPCOMP:<>. VERB+NP+PP: <> -= VERB+NP <right right> == PTKEE:<> <right right root> == to. VERB+NP+NP: <> == VEBB+NP <right right> == NPCOMP:<>. Die: <> == VERB <root> == die. Eat: <> == VEKB+NP <root> == eat. Give: <> == VERB+NP+PP <root> == give. Spare: <> == VERB+NP+NP <root> == spare. Ignoring for the moment the references to TREENODE, VPTREE, NPCOMP and PTREE (which we shall define shortly), we see that VERB defines basic features for all verb entries (and can be used directly for intransitives such as Die), VERB+NP inherits ~om VERB butadds an NP complement to the right of the verb (for transitives), VEKB+NP+PP inherits ~om VERB+NP but adds a further PP complement and so 79 on. Entries for regular verb lexemes are then mi- nimal - syntactically they just inherit everything from the abstract definitions. This DATR fragment is incomplete, because it neg- lects to define the internal structure of the TREEtlODE and the various subtree nodes in the lexical hierar- chy. Each such node is a description of an LTAG tree at some degree of abstraction 9. The following DATR statements complete the fragment, by providing de- finitions for this internal structure: TREENODE : <> == under <type> == internal. STREE: <> == TREENODE <cat> == s. VPTREE: <> == TREENODE <cat> ==vp <parent> == STREE:<> <left> == NPCOMP:<>. NPCOMP: <> == TREENODE <cat> -- np <type> == substitution. PPTREE: <> == TREENODE <cat> == pp. PTREE: <> == TREENODE <cat> I= p <type> == anchor <parent> == PPTREE:<> Here, TREENODE represents an abstract node in an LTAG tree and provides a (default) type of internal. Notice that VERB is itself a TREENODE (but with the nondefault type anchor), and the other definitions here define the remaining tree nodes that arise in our small lexicon: VPTREE is the node for VERB's pa- rent, STREE for VEKB's grandparent, NPCOMP defines the structure needed for NP complement substitution nodes, etc. 1° Taken together, these definitions provide a speci- fication for Give just as we had it before, but with the addition of type and root features. They also support some other verbs too, and it should be clear that the basic technique extends readily to a wide range of other verbs and other parts of speech. Also, although the trees we have described are all initial 9Even the lexeme nodes are abstract - individual word forms might be represented by further more specific nodes attached below the lexemes in the hierarchy. 1°Our example makes much use'of multiple inheritance (thus, for example, VPTREE inherits from TREENODE, STREE and NPCOMP) but a/l such multiple inheritance is orthogonal in DATR: no path can inherit from more than one node. trees (in LTAG terminology), we can describe auxi- liary trees, which include a leaf node of type foot just as easily. 
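Operationally, these definitions behave like the following minimal Python evaluator for default inheritance, in which each node maps path prefixes either to atomic values or to another node to inherit from, and the longest locally defined prefix wins. The encoding and the evaluator are our own simplification: DATR's quoted (global) paths, evaluable paths and other mechanisms are omitted.

LEX = {
    "TREENODE":   {("type",): "internal"},
    "STREE":      {(): ("TREENODE", ()), ("cat",): "s"},
    "VPTREE":     {(): ("TREENODE", ()), ("cat",): "vp",
                   ("parent",): ("STREE", ()), ("left",): ("NPCOMP", ())},
    "NPCOMP":     {(): ("TREENODE", ()), ("cat",): "np", ("type",): "substitution"},
    "PPTREE":     {(): ("TREENODE", ()), ("cat",): "pp"},
    "PTREE":      {(): ("TREENODE", ()), ("cat",): "p", ("type",): "anchor",
                   ("parent",): ("PPTREE", ())},
    "VERB":       {(): ("TREENODE", ()), ("cat",): "v", ("type",): "anchor",
                   ("parent",): ("VPTREE", ())},
    "VERB+NP":    {(): ("VERB", ()), ("right",): ("NPCOMP", ())},
    "VERB+NP+PP": {(): ("VERB+NP", ()), ("right", "right"): ("PTREE", ()),
                   ("right", "right", "root"): "to"},
    "Give":       {(): ("VERB+NP+PP", ()), ("root",): "give"},
}

def query(lex, node, path):
    defs = lex[node]
    for i in range(len(path), -1, -1):     # longest locally defined prefix wins
        if path[:i] in defs:
            val = defs[path[:i]]
            rest = path[i:]
            if isinstance(val, tuple):     # inherit: (other node, new path prefix)
                other, prefix = val
                return query(lex, other, prefix + rest)
            return val                     # atomic value
    return None

print(query(LEX, "Give", ("cat",)))                      # v
print(query(LEX, "Give", ("parent", "parent", "cat")))   # s
print(query(LEX, "Give", ("right", "right", "cat")))     # p
print(query(LEX, "Give", ("right", "right", "root")))    # to
print(query(LEX, "Give", ("right", "type")))             # substitution
print(query(LEX, "Give", ("parent", "type")))            # internal, by default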
A simple example is provided by the following definition for auxiliary verbs: AUXVERB : <> == TREENODE <cat> --= V <type> == anchor <parent cat> == vp <right cut> == vp <right type> == foot. 4 Lexical rules Having established a basic structure for our LTAG lexicon, we now turn our attention towards captu- ring other kinds of relationship among trees. We noted above that lexical entries are actually associa- ted with tree families, and that these group to- gether trees that are related to each other. Thus in the same family as a standard ditransitive verb, we might find the full passive, the agentless passive, the dative alternation, the various relative clauses, and so forth. It is clear that these families correspond closely to the outputs of transformations or metaru- les in other frameworks, but the XTAG system cur- rently has no formal component for describing the relationships among families nor mechanisms for ge- nerating them. And so far we have said nothing about them either - we have only characterized sin- gle trees. However, LTAG's large domain of locality means that all such relationships can be viewed as directly lexical, and ~hus expressible by lexical rules. In fact we can go further than this: because we have em- bedded the domain of these lexical rules, namely the LTAG tree structures, within the feature structures, we can view such lexical rules as covariation cons- traints within feature structures, in much the same way that the covariation of, say, syntactic and mor- phological form is treated. In particular, we can use the mechanisms that DATR already provides for fea- ture covariation, rather than having to invoke in ad- dition some special purpose lexical rule machinery. We consider six construction types found in the XTAG grammar: passive, dative, subject-auxiliary inversion, wh-questions, relative clauses and topica- lisation. Our basic approach to each of these is the same. Lexical rules are specified by defining a deri- ved output tree structure in terms of an input tree structure, where each of these structures is a set of feature specifications of the sort defined above. Each lexical rule has a name, and the input and output tree structures for rule foo are referenced by pre- fixing feature paths of the sort given above with <input foo . .> or <output foo . .>. So for ex- ample, the category of the parent tree node of the output of the passive rule might be referenced as <output passive parent cat>. We define a very general default, stating that the output is the same 80 as the input, so that lexical relationships need only concern themselves with components they modify. This approach to formulating lexical rules in DAIR is quite general and in no way restricted to/TAG: it can be readily adapted for application in the context of any feature-based lexicalist grammar formalism. Using this approach, the dative lexical rule can be given a minimalist implementation by the addition of the following single line to VERB+NP+PP, defined above. VERB+NP+PP : <output dative right right> == NPCOMP:<>. This causes the second complement to a ditran- sitive verb in the dative alternation to be an NP, rather than a PP as in the unmodified case. Subject- auxiliary inversion can be achieved similarly by just specifying the output tree structure without refe- rence to the input structure (note the addition here of a form feature specifying verb form): AUXVERB : <output auxinv form> == finite-inv <output auxinv parent cat> == s <output auxinv right cat> == s. 
Passive is slightly more complex, in that it has to modify the given input tree structure rather than simply overwriting part of it. The definitions for pas- sive occur at the VERB+NP node, since by default, any transitive or subclass of transitive has a passive form. Individual transitive verbs, or whole subclasses, can override this default, leaving their passive tree struc- ture undefined if required. For agentless passives, the necessary additions to the VERB+NP node are as followsn: VERB+NP : <output passive form> == passive <output passive right> == "<input passive right right>". Here, the first line stipulates the form of the verb in the output tree to be passive, while the second line redefines the complement structure: the output of passive has as its first complement the second com- plement of its input, thereby discarding the first complement of its input. Since complements are daisy-chained, all the others move up too. Wh-questions, relative clauses and topicalisation are slightly different, in that the application of the lexical rule causes structure to be added to the top of the tree (above the s node). Although these con- structions involve unbounded dependencies, the un- boundedness is taken care of by the [TAG adjunction mechanism: for lexical purposes the dependency is local. Since the relevant lexical rules can apply to sentences that contain any kind of verb, they need to be stated at the VERB node. Thus, for exam- ple, topicalisation and wh-questions can be defined as follows: 11Oversimplifying slightly, the double quotes in "<input passive right right>" mean that that DATR path will not be evaluated locally (i.e., at the VERB+NP node), but rather at the relevant lexeme node (e.g., Eat or Give). VERB : <output topic parent parent parent cat> <output topic parent "parent left cat> ==np <output topic parent parent left form> == normal <output whq> == "<output topic>" <output whq parent parent left form> == vh. Here an additional NP and s are attached above the original s node to create a topicalised struc- ture. The wh-rule inherits from the topicalisation rule, changing just one thing: the form of the new NP is marked as wh, rather than as normal. In the full fragment 12, the NP added by these rules is also syntactically cross-referenced to a specific NP mar- ked as null in the input tree. However, space does not permit presentation or discussion of the DATR code that achieves this here. 5 Applying lexical rules As explained above, each lexical rule is defined to operate on its own notion of an input and produce its own output. In order for the rules to have an ef- fect, the various input and output paths have to be linked together using inheritance, creating a chain of inheritances between the base, that is, the canonical definitions we introduced in section 3, and surface tree structures of the lexical entry. For example, to 'apply' the dative rule to our Give definition, we could construct a definition such as this: Give-dat : <> ffi= Give <input dative> == <> <surface> == <output dative>. Values for paths prefixed with surface inherit from the output of the dative rule. The input of the dative rule inherits from the base (unprefixed) case, which inherits from Give. The dative rule de- finition (just the oneline introduced above, plus the default that output inherits from input) thus media- tes between qive and the surface of Give-dat. This chain can be extended by inserting additional in- heritance specifications (such as passive). 
Note that surface defaults to the base case, so all entries have a surface defined. However, in our full fragment, additional support is provided to achieve and constrain this rule chai- ning. Word definitions include boolean features in- dicating which rules to apply, and the presence of these features trigger inheritance between appro- priate input and output paths and the base and surface specifications at the ends of the chain. For example, Wordl is an alternative way of specifying the dative alternant of Give, but results in inhe- ritance linking equivalent to that found in Give-dat above: 12The full version of this DAIR fragment includes all the components discussed above in a single coherent, but slightly more complex account. It is available on request from the authors. 81 Wordl : <> == Give <alt dative> == true. More interestingly, Nord2 properly describes a wh- question based on the agentless passive of the dative of Give. Word2 : <> == Give <alt whq> == true <alt dative> == true <alt passive> == true. <parent left form> =- null Notice here the final line of Nord2 which specifies the location of the 'extracted' NP (the subject, in this case), by marking it as null. As noted above, the full version of the whq lexical rule uses this to specify a cross-reference relationship between the wh-NP and the null NP. We can, if we wish, encode constraints on the app- licability of rules in the mapping from boolean flags to actual inheritance specifications. Thus, for exam- ple, whq, tel, and topic are mutually exclusive. If such constraints are violated, then no value for surface gets defined. Thus Word3 improperly att- empts topicalisation in addition to wh-question for- mation, and, as a result, will fail to define a surface tree structure at all: Word3 : <> == Give <alt whq> m= true <alt topic> == true <alt dative> -~, true <alt passive> -= true <parent left form> == null. This approach to lexical rules allows them to be specified at the appropriate point in the lexicM hier- archy, but overridden or modified in subclasses or lexemes as appropriate. It also allows default gene- ralisation over the lexical rules themselves, and con- trol over their application. The last section showed how the whq lexical rule could be built by a single mi- nor addition to that for topicalisation. However, it is worth noting that, in common with other DATR spe- cifications, the lexical rules presented here are rule instances which can only be applied once to any given lexeme - multiple application could be sup- ported, by making multiple instances inherit from some common rule specification, but in our current treatment such instances would require different rule names. 6 Comparison with related work As noted above, Vijay-Shanker & Schabes (1992) have also proposed an inheritance-based approach to this problem. They use monotonic inheritance to build up partial descriptions of trees: each descrip- tion is a finite set of dominance, immediate domi- nance and linear precedence statements about tree nodes in a tree description language developed by Rogers & Vijay-Shanker (1992), and category infor- mation is located in the node labels. This differs from our approach in a number of ways. First, our use of nonmonotonic inheritance allows us to manipulate total instead of partial de- scriptions of trees. 
The abstract verb class in the Vijay-Shanker & Schabes account subsumes both in- transitive and transitive verb classes but is not iden- tical to either - a minimal-satisfying-model step is required to map partial tree descriptions into actual trees. In our analysis, VERB is the intransitive verb class, with complements specifically marked as un- defined: thus VERB : <right> == under is inherited from TREENODE and VERB+NP just overrides this com- plement specification to add an NP complement. Se- cond, we describe trees using only local tree relations (between adjacent nodes in the tree), while Vijay- Shanker &5 Schabes also use a nonlocal dominance relation. Both these properties are crucial to our embed- ding of the tree structure in the feature structure. We want the category information at each tree node to be partial in the conventional sense, so that in actual use such categories can be extended (by uni- fication or whatever). So the feature structures that we associate with lexical entries must be viewed as partial. But we do not want the tree structure to be extendible in the same way: we do not want an intransitive verb to be applicable in a transitive con- text, by unifying in a complement NP. So the tree structures we define must be total descriptions 13. And of course, our use of only local relations al- lows a direct mapping from tree structure to feature path, which would not be possible at all if nonlocal relations were present. So while these differences may seem small, they al- low us to take this significant representational step - significant because it is the tree structure embedding that allows us to view lexical rules as feature cova- riation constraints. The result is that while Vijay- Shanker & Schabes use a tree description language, a category description language and a further for- malism for lexical rules, we can capture everything in one framework all of whose components (non- monotonicity, covariation constraint handling, etc.) have already been independently motivated for other aspects of lexical description 14. Becket's recent work (1993; 1994) is also directed at exactly the problem we address in the present paper. Like him, we have employed an inheritance hierarchy. And, like him, we have employed a set of lexical rules (corresponding to his metarules). The key differences between our account and his are (i) 13Note that simplified fragment presented here does not get this right. It makes all feature specifications total descriptions. To correct this we would need to change TREENODE so that only the values of <right>, <left> and <parent> default to under. 14As in the work cited in footnote 3, above. 82 that we have been able to use an existing lexical knowledge representation language, rather than de- signing a formal system that is specific to [TAG, and (ii) that we have expressed our lexical rules in ex- actly the same language as that we have used to define the hierarchy, rather than invoking two quite different formal systems. Becket's sharp distinction between his metarules and his hierarchy gives rise to some problems that our approach avoids. Firstly, he notes that his meta- rules are subject to lexical exceptions and proposes to deal with these by stating "for each entry in the (syntactic) lexicon .. which metarules are applica- ble for this entry" (1993,126). We have no need to carry over this use of (recta)rule features since, in our account, lexical rules are not distinct from any other kind of property in the inheritance hierarchy. 
They can be stated at the most inclusive relevant node and can then be overridden at the exceptional descendant nodes. Nothing specific needs to be said about the nonexceptional nodes.

Secondly, his metarules may themselves be more or less similar to each other and he suggests (1994,11) that these similarities could be captured if the metarules were also to be organized in a hierarchy. However, our approach allows us to deal with any such similarities in the main lexical hierarchy itself 15 rather than by setting up a separate hierarchical component just for metarules (which appears to be what Becker has in mind).

Thirdly, as he himself notes (1993,128), because his metarules map from elementary trees that are in the inheritance hierarchy to elementary trees that are outside it, most of the elementary trees actually used are not directly connected to the hierarchy (although their derived status with respect to it can be reconstructed). Our approach keeps all elementary trees, whether or not they have been partly defined by a lexical rule, entirely within the lexical hierarchy.

In fact, Becker himself considers the possibility of capturing all the significant generalizations by using just one of the two mechanisms that he proposes: "one might want to reconsider the usage of one mechanism for phenomena in both dimensions" (1993,135). But, as he goes on to point out, his existing type of inheritance network is not up to taking on the task performed by his metarules because the former is monotonic whilst his metarules are not. However, he does suggest a way in which the hierarchy could be completely replaced by metarules but argues against adopting it (1993,136).

As will be apparent from the earlier sections of this paper, we believe that Becker's insights about the organization of an LTAG lexicon can be better expressed if the metarule component is replaced by an encoding of (largely equivalent) lexical rules that are an integral part of a nonmonotonic inheritance hierarchy that stands as a description of all the elementary trees.

15 As illustrated by the way in which the whq lexical rule inherits from that for topicalisation in the example given above.

Acknowledgements

A precursor of this paper was presented at the September 1994 TAG+ Workshop in Paris. We thank the referees for that event and the ACL-95 referees for a number of helpful comments. We are also grateful to Aravind Joshi, Bill Keller, Owen Rambow, K. Vijay-Shanker and The XTAG Group. This research was partly supported by grants to Evans from SERC/EPSRC (UK) and to Gazdar from ESRC (UK).

References

Anne Abeillé, Kathleen Bishop, Sharon Cote, & Yves Schabes. 1990. A lexicalized tree adjoining grammar for English. Technical Report MS-CIS-90-24, Department of Computer & Information Science, Univ. of Pennsylvania.
Francois Andry, Norman Fraser, Scott McGlashan, Simon Thornton, & Nick Youd. 1992. Making DATR work for speech: lexicon compilation in SUNDIAL. Comput. Ling., 18(3):245-267.
Petra Barg. 1994. Automatic acquisition of DATR theories from observations. Theorie des Lexikons: Arbeiten des Sonderforschungsbereichs 282, Heinrich-Heine Univ. of Duesseldorf, Duesseldorf.
Tilman Becker. 1993. HyTAG: A new type of Tree Adjoining Grammar for hybrid syntactic representation of free word order languages. Ph.D. thesis, Univ. des Saarlandes.
Tilman Becker. 1994. Patterns in metarules. In Proceedings of the Third International Workshop on Tree Adjoining Grammars, 9-11.
Doris Bleiching. 1992.
Prosodisches Wissen im Lexikon. In G. Goerz, ed., KONVENS-92, 59-68. Springer-Verlag.
Doris Bleiching. 1994. Integration von Morphophonologie und Prosodie in ein hierarchisches Lexikon. In H. Trost, ed., Proceedings of KONVENS-94, 32-41.
Ted Briscoe, Valeria de Paiva, & Ann Copestake. 1993. Inheritance, Defaults, & the Lexicon. CUP.
Dunstan Brown & Andrew Hippisley. 1994. Conflict in Russian genitive plural assignment: A solution represented in DATR. J. of Slavic Linguistics, 2(1):48-76.
Lynne Cahill & Roger Evans. 1990. An application of DATR: the TIC lexicon. In ECAI-90, 120-125.
Lynne Cahill. 1990. Syllable-based morphology. In COLING-90, volume 3, 48-53.
Lynne Cahill. 1993. Morphonology in the lexicon. In EACL-93, 37-96.
Greville Corbett & Norman Fraser. 1993. Network morphology: a DATR account of Russian nominal inflection. J. of Linguistics, 29:113-142.
Walter Daelemans & Gerald Gazdar, eds. 1992. Special issues on inheritance. Comput. Ling., 18(2 & 3).
Christy Doran, Dania Egedi, Beth Ann Hockey, & B. Srinivas. 1994a. Status of the XTAG system. In Proceedings of the Third International Workshop on Tree Adjoining Grammars, 20-23.
Christy Doran, Dania Egedi, Beth Ann Hockey, B. Srinivas, & Martin Zaidel. 1994b. XTAG system - a wide coverage grammar for English. In COLING-94, 922-928.
Markus Duda & Gunter Gebhardi. 1994. DUTR - a DATR-PATR interface formalism. In H. Trost, ed., Proceedings of KONVENS-94, 411-414.
Roger Evans & Gerald Gazdar. 1989a. Inference in DATR. In EACL-89, 66-71.
Roger Evans & Gerald Gazdar. 1989b. The semantics of DATR. In AISB-89, 79-87.
Daniel P. Flickinger. 1987. Lexical Rules in the Hierarchical Lexicon. Ph.D. thesis, Stanford Univ.
Norman Fraser & Greville Corbett. in press. Gender, animacy, & declensional class assignment: a unified account for Russian. In Geert Booij & Jaap van Marle, eds., Yearbook of Morphology 1994. Kluwer, Dordrecht.
Dafydd Gibbon. 1992. ILEX: a linguistic approach to computational lexica. In Ursula Klenk, ed., Computatio Linguae: Aufsätze zur algorithmischen und quantitativen Analyse der Sprache (Zeitschrift für Dialektologie und Linguistik, Beiheft 73), 32-53. Franz Steiner Verlag, Stuttgart.
A. K. Joshi, L. S. Levy, & M. Takahashi. 1975. Tree adjunct grammars. J. Comput. Syst. Sci., 10(1):136-163.
James Kilbury, Petra [Barg] Naerger, & Ingrid Renz. 1991. DATR as a lexical component for PATR. In EACL-91, 137-142.
James Kilbury, Petra Barg, & Ingrid Renz. 1994. Simulation lexikalischen Erwerbs. In Christopher Habel, Gert Rickheit & Sascha W. Felix, eds., Kognitive Linguistik: Repraesentation und Prozesse, 251-271. Westdeutscher Verlag, Opladen.
James Kilbury. 1990. Encoding constituent structure in feature structures. Unpublished manuscript, Univ. of Duesseldorf, Duesseldorf.
Adam Kilgarriff & Gerald Gazdar. 1995. Polysemous relations. In Frank Palmer, ed., Grammar & meaning: essays in honour of Sir John Lyons, 1-25. CUP.
Adam Kilgarriff. 1993. Inheriting verb alternations. In EACL-93, 213-221.
Hagen Langer. 1994. Reverse queries in DATR. In COLING-94, 1089-1095.
Marc Light, Sabine Reinhard, & Marie Boyle-Hinrichs. 1993. INSYST: an automatic inserter system for hierarchical lexica. In EACL-93, page 471.
Marc Light. 1994. Classification in feature-based default inheritance hierarchies. In H. Trost, ed., Proceedings of KONVENS-94, 220-229.
Sabine Reinhard & Dafydd Gibbon. 1991. Prosodic inheritance & morphological generalisations. In EACL-91, 131-136.
James Rogers & K. Vijay-Shanker. 1992.
Reasoning with descriptions of trees. In ACL-92, 72-80.
K. Vijay-Shanker & Yves Schabes. 1992. Structure sharing in lexicalized tree-adjoining grammar. In COLING-92, 205-211.
The XTAG Research Group. 1995. A lexicalized tree adjoining grammar for English. Technical Report IRCS 95-03, The Institute for Research in Cognitive Science, Univ. of Pennsylvania.
Compiling HPSG type constraints into definite clause programs Thilo G~tz and Walt Detmar Meurers* SFB 340, Universit£t Tfibingcn Kleine Wilhelmstrat~e 113 72074 Tfibingen Germany ~tg, dm}©sf s. nphil, uni-tuebingen, de Abstract We present a new approach to HPSG pro- cessing: compiling HPSG grammars ex- pressed as type constraints into definite clause programs. This provides a clear and computationally useful correspondence between linguistic theories and their im- plementation. The compiler performs off- line constraint inheritance and code opti- mization. As a result, we are able to effi- ciently process with HPSG grammars with- out haviog to hand-translate them into def- inite clause or phrase structure based sys- tems. 1 Introduction The HPSG architecture as defined in (Pollard and Sag, 1994) (henceforth HPSGII) is being used by an increasing number of linguists, since the formally well-defined framework allows for a rigid and ex- plicit formalization of a linguistic theory. At the same time, the feature logics which provide the for- mal foundation of HPSGII have been used as basis for several NLP systems, such as ALE (Carpenter, 1993), CUF (DSrre and Dorna, 1993), Troll (Gerde- mann and King, 1993) or TFS (Emele and Zajac, 1990). These systems are - at least partly - intended as computational environments for the implementa- tion of HPSG grammars. HPSG linguists use the description language of the logic to express their theories in the form of im- plicative constraints. On the other hand, most of the computational setups only allow feature descriptions as extra constraints with a phrase structure or defi- nite clause based language. 1 From a computational point of view the latter setup has several advantages. It provides access to the pool of work done in the *The authors are listed alphabetically. 1One exception is the TFS system. However, the pos- sibility to express recursive relations on the level of the description language leads to serious control problems in that system. area of natural language processing, e.g., to efficient control strategies for the definite clause level based on tabelling methods like Earley deduction, or differ- ent parsing strategies in the phrase structure setup. The result is a gap between the description lan- guage theories of HPSG linguists and the definite clause or phrase structure based NLP systems pro- vided to implement these theories. Most grammars currently implemented therefore have no clear corre- spondence to the linguistic theories they originated from. To be able to use implemented grammars to provide feedback for a rigid and complete formal- ization of linguistic theories, a clear and computa- tionMly useful correspondence has to be established. This link is also needed to stimulate further devel- opment of the computational systems. Finally, an HPSGII style setup is also interesting to model from a software engineering point of view, since it permits a modular development and testing of the grammar. The purpose of this paper is to provide the de- sired link, i.e., to show how a HPSG theory formu- lated as implicative constraints can be modelled on the level of the relational extension of the constraint language. More specifically, we define a compilation procedure which translates the type constraints of the linguistic theory into definite clauses runnable in systems such as Troll, ALE, or CUF. Thus, we per- form constraint inheritance and code optimization off-line. 
This results in a considerable efficiency gain over a direct on-line treatment of type constraints as, e.g., in TFS. The structure of the paper is as follows: A short discussion of the logical setup for HPSGII provides the necessary formal background and terminology. Then the two possibilities for expressing a theory - using the description language as in HPSGII or the relational level as in the computational architectures - are introduced. The third section provides a simple picture of how HPSGII theories can be modelled on the relational level. This simple picture is then refined in the fourth section, where the compilation procedure and its implementation is discussed. A small example grammar is provided in the appendix. 85 2 Background 2.1 The HPSGII architecture A HPSG grammar consists of two components: the declaration of the structure of the domain of linguis- tic objects in a signature (consisting of the type hi- erarchy and the appropriateness conditions) and the formulation of constraints on that domain. The sig- nature introduces the structures the linguist wants to talk about. The theory the linguist proposes dis- tinguishes between those objects in a domain which are part of the natural language described, and those which are not. HPSGII gives a closed world interpretation to the type hierarchy: every object is of exactly one min- imal (most specific) type. This implies that every object in the denotation of a non-minimal type is also described by at least one of its subtypes. Our compilation procedure will adhere to this interpre- tation. 2.2 The theories of HPSGII: Directly constraining the domain A HPSGII theory consists of a set of descriptions which are interpreted as being true or false of an object in the domain. An object is admissible with respect to a certain theory iff it satisfies each of the descriptions in the theory and so does each of its substructures. The descriptions which make up the theory are also called constraints, since these de- scriptions constrain the set of objects which are ad- missible with respect to the theory. Figure 1 shows an example of a constraint, the head-feature principle of HPSGII. Throughout the paper we will be using HPSG style AVM notation for descriptions. phrase -..* DTRS headed-strut SYNSEM]LOC[CAT[HEAD DTRSIH AD DTRISYNSE I' OClC l" ' Figure 1: The Head-Feature Principle of HPSGII The intended interpretation of this constraint is that every object which is being described by type phrase and by [DTI~S h~aded-str~c] also has to be described by the consequent, i.e. have its head value shared with that of its head-daughter. In the HPSG II architecture any description can be used as antecedent of an implicative constraint. As shown in (Meurers, 1994), a complex description can be expressed as a type by modifying the signature and/or adding theory statements. In the following, we therefore only deal with implicative constraints with type antecedents, the type definitions. 2.3 Theories in constraint logic programming: expressing definite clause relations As mentioned in the introduction, in most computa- tional systems for the implementation of HPSG the- ories a grammar is expressed using a relational ex- tension of the description language 2 such as definite clauses or phrase structure rules. Figure 2 schemat- ically shows the embedding of HPSG II descriptions in the definition of a relation. relo (D1 ..... D~) :- tell(E1,..., Ej), re/n(Fl .... , Fh). 
Figure 2: Defining relation relo The HPSG description language is only used to specify the arguments of the relations, in the exam- ple noted as D, E, and F. The organization of the descriptions, i.e. their use as constraints to narrow down the set of described objects, is taken over by the relational level. This way of organizing descrip- tions in definite clauses allows efficient processing techniques of logic programming to be used. The question we are concerned with in the follow- ing is how a HPSG II theory can be modelled in such a setup. 3 Modelling HPSGII theories on a relational level: a simple picture There are three characteristics of HPSGII theories which we need to model on the relational level: one needs to be able to 1. express constraints on any kind of object, 2. use the hierarchical structure of the type hier- archy to organize the constraints, and 3. check any structure for consistency with the theory. A straightforward encoding is achieved by express- ing each of these three aspects in a set of relations. Let us illustrate this idea with a simple example. As- sume the signature given in figure 3 and the HPSGII 2 For the logical foundations of relational extensions of arbitrary constraint languages see (HShfeld and Smolka, 1988). 86 style theory of figure 4. T /-= b c Figure 3: An example signature o _ b --. [Q°I Figure 4: An example theory in a HPSGII setup First, we define a relation to express the con- straints immediately specified for a type on the ar- gument of the relation: • a o,, ) :- T,vp,G). • b b :- • c°on,(c). For every type, the relation specifies its only argu- ment to bear the type information and the conse- quents of the type definition for that type. Note that the simple type assignment [G a] leads to a call to the relation atvp~ imposing all constraints for type a, which is defined below. Second, a relation is needed to capture the hier- archical organization of constraints: • ; .... • ahi,~(~):- a,o,,,([~]), ( bh,,~(~); chi,r([~) ). • bhi,r(]~]):- bco,,,(~). Each hierarchy relation of a type references the con- straint relation and makes sure that the constraints below one of the subtypes are obeyed. Finally, a relation is defined to collect all con- straints on a type: • atyp~(~) :- This,-( ri-1 a ). • bt,p~(E~ ]) :- Thief( [-i~b ). * ctvpe([~]) :- Thier( r-~c ). aA disjunction of the immediate subtypes of T. Compared to the hierarchy relation of a type which collects all constraints on the type and its subtypes, the last kind of relation additionally references those constraints which are inherited from a supertype. Thus, this is the relation that needs to be queried to check for grammaticality. Even though the simple picture with its tripartite definition for each type yields perspicuous code, it falls short in several respects. The last two kinds of relations (reltype and relhier) just perform inheri- tance of constraints. Doing this at run-time is slow, and additionally there are problems with multiple inheritance. A further problem of the encoding is that the value of an appropriate feature which is not mentioned in any type definition may nonetheless be implicitly constrained, since the type of its value is constrained. Consider for example the standard HPSG encoding of list structures. This usually involves a type he_list with appropriate features HD and TL, where under HD we encode an element of the list, and under TL the tail of the list. Normally, there will be no extra constraints on ne_list. 
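For concreteness, the three kinds of relations just introduced can be pictured as a small Prolog program. The sketch below is only illustrative: it flattens typed feature structures into fs(Type, FeatureList) terms, hard-wires a toy signature in which T subsumes a, and a subsumes the minimal types b and c, and invents the constraints (a requires the atomic value plus under f; b additionally requires a g value satisfying all constraints on a). None of this is the actual encoding used in this paper.

:- use_module(library(lists)).

% Constraint relations: one clause per type, stating the consequent of
% that type's definition. The top type and c have no definition of their
% own, so their clauses are trivially true.
t_cons(_).
a_cons(fs(_, Fs)) :- member(f:plus, Fs).
a_cons_marker_removed. % (no extra clause intended; see note below)
b_cons(fs(_, Fs)) :- member(g:V, Fs), a_type(V).
c_cons(_).

% Hierarchy relations: a type's own constraints plus those of one of its
% immediate subtypes (closed world: every object is of some minimal type).
t_hier(X) :- t_cons(X), a_hier(X).
a_hier(X) :- a_cons(X), ( b_hier(X) ; c_hier(X) ).
b_hier(X) :- b_cons(X).
c_hier(X) :- c_cons(X).

% Type relations: all constraints on a type, including those inherited
% from its supertypes, obtained by entering the hierarchy at the top.
subtype_of(X, X).
subtype_of(a, t).  subtype_of(b, t).  subtype_of(c, t).
subtype_of(b, a).  subtype_of(c, a).

a_type(fs(Ty, Fs)) :- subtype_of(Ty, a), t_hier(fs(Ty, Fs)).
b_type(fs(Ty, Fs)) :- subtype_of(Ty, b), t_hier(fs(Ty, Fs)).

% A b object must satisfy both its own constraint and the one inherited
% from a:
% ?- b_type(fs(b, [f:plus, g:fs(c, [f:plus])])).
% true.

Note that in this sketch a type such as c, for which nothing is stated, gets only a trivial constraint clause, just as one would at first expect for ne_list above.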
But in our setup we clearly need a definite clause he_list ne_listcon,( HD ) :- Ttvp~([~), listtyp¢(~]). .TL since the value of the feature HD may be of a type which is constrained by the grammar. Consequently, since he_list is a subtype of list, the value of TL needs to be constrained as well. 4 Compiling HPSG type constraints into definite clauses After this intuitive introduction to the problem, we will now show how to automatically generate definite clause programs from a set of type definitions, in a way that avoids the problems mentioned for the simple picture. 4.1 The algorithm Before we can look at the actual compilation proce- dure, we need some terminology. Definition (type interaction) Two types interact if they have a common subtype. Note that every type interacts with itself. Definition (defined type) A defined type is a type that occurs as antecedent of an implicational constraint in the grammar. Definition (constrained type) A constrained type is a type that interacts with a defined type. 87 Whenever we encounter a structure of a constrained type, we need to check that the structure conforms to the constraint on that type. As mentioned in section 2.1, due to the closed world interpretation of type hierarchies, we know that every object in the denotation of a non-minimal type t also has to obey the constraints on one of the minimal subtypes of t. Thus, if a type t has a subtype t' in common with a defined type d, then t ~ is a constrained type (by virtue of being a subtype of d) and t is a constrained type (because it subsumes t'). Definition (hiding type) The set of hiding types is the smallest set s.t. if t is not a constrained type and subsumes a type to that has a feature f appropriate s.t. approp(to,f) is a con- strained type or a hiding type, then t is a hiding type. The type ne_list that we saw above is a hiding type. Definition (hiding feature) If t is a constrained or hiding type, then f is a hiding feature on t iff approp(t,f) is a constrained or hiding type. Definition (simple type) A simple type is a type that is neither a constrained nor a hiding type. When we see a structure of a simple type, we don't need to apply any constraints, neither on the top node nor on any substructure. Partitioning the types in this manner helps us to construct definite clause programs for type con- straint grammars. For each type, we compute a unary relation that we just give the same name as the type. Since we assume a closed world interpre- tation of the type hierarchy, we really only need to compute proper definitions for minimal types. The body of a definition for a non-minimal type is just a disjunction of the relations defining the minimal subtypes of the non-minimal type. When we want to compute the defining clause for a minimal type, we first of all check what sort of type it is. For each simple type, we just introduce a unit clause whose argument is just the type. For a constrained type t, first of all we have to perform constraint inheritance from all types that subsume t. Then we transform that constraint to some internal representation, usually a feature structure (FS). We now have a schematic defining clause of the form t(FS) :- ?. Next, we compute the missing right-hand side (RHS) with the following algorithm. 1. Compute HF, the set of hiding features on the type of the current node, then insert these fea- tures with appropriate types in the structure P':<.} /ARG2 list I e_list /HD T / /ARG3 iist~ I.TL ,i,tJ LGOALS list.] (FS) if they're not already there. 
For each node under a feature in HF, apply step 2. 2. Let t be the type on the current node and X its tag (a variable). (a) If t is a constrained type, enter t(X) into RHS (if it's not already there). (b) Elseif t is a hiding type, then check if its hiding features and the hiding features of all its hiding subtypes are identical. If they are identical, then proceed as in step 1. If not, enter t(X) into RHS. (c) Else (t is a simple type) do nothing at all. For hiding types, we do exactly the same thing, ex- cept that we don't have any structure to begin with. But this is no problem, since the hiding features get introduced anyway. 4.2 An example A formal proof of correctness of this compiler is given in (GStz, 1995) - here, we will try to show by ex- ample how it works. Our example is an encodin~ of a definite relation in a type constraint setup2 append_c appends an arbitrary list onto a list of con- stants. T constant Figure 5: The signature for the append_c example We will stick to an AVM style notation for our ex- amples, the actual program uses a standard feature term syntax. List are abbreviated in the standard HPSG manner, using angled brackets. append_c -* [A O1 ARG 2 ARG3 GOALS e_listJ "ARG 1 ARG2 ARG3 V GOALS 15q oo.,,..,i 5q ¢ [] IE]I[EI ARG 1 [~] ARG2 ARG3 Figure 6: A constraint on append_c Note that the set of constrained types is {append_c, 4This sort of encoding was pioneered by (Ait-Kaci, 1984), but see also (King, 1989) and (Carpenter, 1992). 88 T} and the set of hiding types is {list, ne_list}. Con- verting the first disjunct of append_c into a feature structure to start our compilation, we get something like 'append_c I ARG1 v--a[]e-list] append_c( ARG2 121 list ARG3 .GOALS e_list.I :-?. Since the values of the features of append_c are of type list, a hiding type, those features are hiding features and need to be considered. Yet looking at node [-i7, the algorithm finds e_list, a simple type, and does nothing. Similarly with node [~]. On node ~], we find the hiding type list. Its one hiding sub- type, ne_list, has different hiding features (list has no features appropriate at all). Therefore, we have to enter this node into the RHS. Since the same node appears under both ARG1 and ARG2, we're done and have [ 1 append_c ARG1 e_list append_c( I ARG3ARG2 ~__lisq):-]Jst(~). LGOALS e_list j which is exactly what we want. It means that a structure of type append_c is well-formed if it unifies with the argument of the head of the above clause and whatever is under ARG2 (and AR.G3) is a well- formed list. Now for the recursive disjunct, we start out with append_el "append_c rne_list ARGI E] l [] constant [] .st ARG2 [~] list rne-list t] he.list -append_c ] GOALS[] HD ~] ARG2 L.~J| [] 4,: mJ :-?. Node E] bears a hiding type with no subtypes. Therefore we don't enter that node in the RHS, but proceed to look at its features. Node [] bears a sim- ple type and we do nothing, but node [] is again a list and needs to be entered into the RHS. Similarly with nodes [] and ['~. append_c on node [] is a con- strained type and [] also has to go onto the RHS. The final result then is append_c( "append_c me-list constant] ARG2 [~] list me-list t] rne_list / rapP:-d_c "1 / IARG1 r31 | ._ list(~), list([~]), list([~]), append_c(~]). This is almost what we want, but not quite. Con- sider node ~]. Clearly it needs to be checked, but what about nodes ~], [] and E]? They are all em- bedded under node [] which is being checked any- way, so listing them here in the RHS is entirely re- dundant. 
In general, if a node is listed in the RHS, then no other node below it needs to be there as well. Thus, our result should really be append_c( "append_c rne-list constant] ARG2 r~1 list me-list t] rne-list I r append-e 1 IHD GOALS I I | LAFtG3 16~J LTL e_list :_ appendoc([~]). Our implementation of the compiler does in fact perform this pruning as an integrated part of the compilation, not as an additional step. It should be pointed out that this compilation re- sult is quite a dramatic improvement on more naive on-line approaches to ttPSG processing. By reason- ing with the different kinds of types, we can dras- tically reduce the number of goals that need to be checked on-line. Another way of viewing this would be to see the actual compilation step as being much simpler (just check every possible feature) and to subsequently apply program transformation tech- niques (some sophisticated form of partial evalua- tion). We believe that this view would not simplify the overall picture, however. 89 4.3 Implementation and Extensions The compiler as described in the last section has been fully implemented under Quintus Prolog. Our interpreter at the moment is a simple left to right backtracking interpreter. The only extension is to keep a list of all the nodes that have already been visited to keep the same computation from being repeated. This is necessary since although we avoid redundancies as shown in the last example, there are still cases where the same node gets checked more than once. This simple extension also allows us to process cyclic queries. The following query is allowed by our system. me_list ~] Query> [~] [THD Figure 7: A permitted cyclic query An interpreter without the above-mentioned exten- sion would not terminate on this query. The computationally oriented reader will now wonder how we expect to deal with non-termination anyway. At the moment, we allow the user to specify minimal control information. • The user can specify an ordering on type expan- sion. E.g., if the type hierarchy contains a type sign with subtypes word and phrase, the user may specify that word should always be tried before phrase. • The user can specify an ordering on feature ex- pansion. E.g., HD should always be expanded before TL in a given structure. Since this information is local to any given structure, the interpreter does not need to know about it, and the control information is interpreted as compiler directives. 5 Conclusion and Outlook We have presented a compiler that can encode HPSG type definitions as a definite clause program. This for the first time offers the possibility to ex- press linguistic theories the way they are formulated by linguists in a number of already existing compu- tational systems. The compiler finds out exactly which nodes of a structure have to be examined and which don't. In doing this off-line, we minimize the need for on-line inferences. The same is true for the control informa- tion, which is also dealt with off-line. This is not to say that the interpreter wouldn't profit by a more sophisticated selection function or tabulation tech- niques (see, e.g., (DSrre, 1993)). We plan to apply Earley deduction to our scheme in the near future and experiment with program transformation tech- niques and bottom-up interpretation. Our work addresses a similar problem as Carpen- ter's work on resolved feature structures (Carpen- ter, 1992, ch. 15). However, there are two major differences, both deriving form the fact that Car- penter uses an open world interpretation. 
Firstly, our approach can be extended to handle arbitrar- ily complex antecedents of implications (i.e., arbi- trary negation), which is not possible using an open world approach. Secondly, solutions in our approach have the so-called subsumption monotonicity or per- sistence property. That means that any structure subsumed by a solution is also a solution (as in Pro- log, for example). Quite the opposite is the case in Carpenter's approach, where solutions are not guar- anteed to have more specific extensions. This is un- satisfactory at least from an HPSG point of view, since HPSG feature structures are supposed to be maximally specific. Acknowledgments The research reported here was carried out in the context of SFB 340, project B4, funded by the Deutsche Forschungsgemeinschaft. We would like to thank Dale Gerdemann, Paul John King and two anonymous referees for helpful discussion and com- ments. References Hassan Ait-Kaci. 1984. A lattice theoretic approach to computation based on a calculus of partially or- dered type structures. Ph.D. thesis, University of Pennsylvania. Bob Carpenter. 1992. The logic of typed feature s~ructures, volume 32 of Cambridge Tracts in The- oretical Computer Science. Cambridge University Press. Bob Carpenter. 1993. ALE - the attribute logic engine, user's guide, May. Laboratory for Computational Linguistics, Philosophy Depart- ment, Carnegie Mellon University, Pittsburgh, PA 15213. Jochen DSrre and Michael Dorna. 1993. CUF - a formalism for linguistic knowledge representa- tion. In Jochen DSrre, editor, Computational as- pects of constraint based linguistic descriptions I, pages 1-22. DYANA-2 Deliverable R1.2.A, Uni- versit~t Stuttgart, August. Jochen DSrre. 1993. Generalizing earley deduction for constraint-based grammars. In Jochen DSrre, editor, Computational aspects of constraint based linguistic descriptions I, pages 25-41. DYANA- 2 Deliverable R1.2.A, Universit~t Stuttgart, Au- gust. Martin C. Emele and R~mi Zajac. 1990. Typed unification grammars. In Proceedings of the 13 'h 90 International Conference on Computational Lin- guistics. Dale Gerdemann and Paul John King. 1993. Typed feature structures for expressing and com- putationally implementing feature cooccurrence restrictions. In Proceedings of 4. Fachtagung der Sektion Computerlinguistik der Deutschen Gesellschafl fffr Sprachwissenschaft, pages 33-39. Thilo GStz. 1995. Compiling HPSG constraint grammars into logic programs. In Proceedings of the joint ELSNET/COMPULOG-NET/EAGLES workshop on computational logic for natural lan- guage processing. M. HShfeld and Gert Smolka. 1988. Definite rela- tions over constraint languages. LILOG technical report, number 53, IBM Deutschland GmbH. Paul John King. 1989. A logical formalism for head. driven phrase structure grammar. Ph.D. thesis, University of Manchester. W. Detmar Meurers. 1994. On implementing an HPSG theory - Aspects of the logical archi- tecture, the formalization, and the implementa- tion of head-driven phrase structure grammars. In: Erhard W. Hinrichs, W. Detmar Meurers, and Tsuneko Nakazawa: Partial- VP and Split-NP Topicalization in German - An HPSG Analysis and its Implementation. Arbeitspapiere des SFB 340 Nr. 58, Universit£t Tfibingen. Carl Pollard and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. Chicago: University of Chicago Press and Stanford: CSLI Publica- tions. Appendix A. A small grammar The following small example grammar, together with a definition of an append type, generates sen- tences like "John thinks cats run". 
It is a modified version of an example from (Carpenter, 1992). The constraint on phrase is a disjunction of two descriptions. The first licenses sentences: a structure with CAT s whose first daughter (DTR1) is a nominal constituent and whose second daughter (DTR2) is a verbal one, the two daughters sharing their AGR value, with an append structure (ARG1, ARG2, ARG3) under GOALS relating the daughters' PHON values to the PHON of the whole phrase. The second licenses verb phrases with CAT vp, combining a verbal first daughter with its complement as second daughter, again with PHON concatenation via an append goal. The constraint on word is a disjunction of lexical entries: john and mary (CAT np, AGR singular), cats and dogs (CAT np, AGR plural), runs and jumps (AGR singular), run and jump (AGR plural), and knows and thinks (AGR singular).

Here's an example query. Note that the feature GOALS has been suppressed in the result.

Query> [PHON ( john, runs )]

Result> a phrase with CAT s and PHON ( john, runs ), whose DTR1 is the word john (CAT np, AGR singular, PHON ( john )) and whose DTR2 is the word runs.

For the next query we get exactly the same result.

Query> [DTR2 [PHON ( runs )]]
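To give a flavour of what the compiled definite clause program for such a grammar computes, here is a hand-written Prolog approximation. It is not the compiler's actual output: the cat(Cat, Agr, Phon) terms flatten the feature structures, phrase_/1 and word/1 merely stand in for the compiled type relations (the underscore avoids a clash with the built-in DCG predicate phrase/2), and Prolog's append/3 replaces the grammar's append type.

:- use_module(library(lists)).

word(cat(np, singular, [john])).    word(cat(np, singular, [mary])).
word(cat(np, plural,   [cats])).    word(cat(np, plural,   [dogs])).
word(cat(vp, singular, [runs])).    word(cat(vp, singular, [jumps])).
word(cat(vp, plural,   [run])).     word(cat(vp, plural,   [jump])).
word(cat(v,  singular, [knows])).   word(cat(v,  singular, [thinks])).

% A sentence combines an np and a vp daughter that agree in AGR; the PHON
% values are concatenated, mirroring the grammar's append goal.
phrase_(cat(s, Agr, Phon)) :-
    append(P1, P2, Phon),
    constituent(cat(np, Agr, P1)),
    constituent(cat(vp, Agr, P2)).

% A verb like "thinks" combines with a sentential complement to give a vp.
phrase_(cat(vp, Agr, Phon)) :-
    append(P1, P2, Phon),
    word(cat(v, Agr, P1)),
    constituent(cat(s, _, P2)).

constituent(C) :- word(C).
constituent(C) :- phrase_(C).

% ?- phrase_(cat(s, _, [john, runs])).
% true.
% ?- phrase_(cat(s, _, [john, thinks, cats, run])).
% true.

An ungrammatical string such as [cats, runs] fails finitely on the shared AGR variable, just as the corresponding agreement constraint rules it out in the feature-structure encoding.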
Compilation of HPSG to TAG* Robert Kasper Dept. of Linguistics Ohio State University 222 Oxley Hall Columbus, OH 43210 U.S.A. kasper~ling.ohio-state.edu Bernd Kiefer Klaus Netter Deutsches Forschungszentrum ffir Kiinstliche Intelligenz, GmbH Stuhlsatzenhausweg 3 66123 Saarbrficken Germany (kieferlnetter}Qdfki.uni-sb.de K. Vijay-Shanker CIS Dept. University of Delaware Newark, DE 19716 U.S.A [email protected] Abstract We present an implemented compilation algorithm that translates HPSG into lex- icalized feature-based TAG, relating con- cepts of the two theories. While HPSG has a more elaborated principle-based theory of possible phrase structures, TAG pro- vides the means to represent lexicalized structures more explicitly. Our objectives are met by giving clear definitions that de- termine the projection of structures from the lexicon, and identify "maximal" pro- jections, auxiliary trees and foot nodes. 1 Introduction Head Driven Phrase Structure Grammar (HPSG) and Tree Adjoining Grammar (TAG) are two frame- works which so far have been largely pursued in par- allel, taking little or no account of each other. In this paper we will describe an algorithm which will com- pile HPSG grammars, obeying certain constraints, into TAGs. However, we are not only interested in mapping one formalism into another, but also in ex- ploring the relationship between concepts employed in the two frameworks. HPSG is a feature-based grammatical framework which is characterized by a modular specification of linguistic generalizations through extensive use of principles and lexicalization of grammatical informa- tion. Traditional grammar rules are generalized to schemata providing an abstract definition of gram- matical relations, such as head-of, complement-of, subject-of, adjunct-of, etc. Principles, such as the *We would like to thank A. Abeill6, D. Flickinger, A. Joshi, T. Kroch, O. Rambow, I. Sag and H. Uszko- reit for valuable comments and discussions. The reseaxch underlying the paper was supported by research grants from the German Bundesministerium fiir Bildung, Wis- senschaft, Forschung und Technologie (BMBF) to the DFKI projects DIsco, FKZ ITW 9002 0, PARADICE, FKZ ITW 9403 and the VERBMOB1L project, FKZ 01 IV 101 K/l, and by the Center for Cognitive Science at Ohio State University. Head-Feature-, Valence-, Non-Local- or Semantics- Principle, determine the projection of information from the lexicon and recursively define the flow of information in a global structure. Through this modular design, grammatical descriptions are bro- ken down into minimal structural units referring to local trees of depth one, jointly constraining the set of well-formed sentences. In HPSG, based on the concept of "head- domains", local relations (such as complement-of, adjunct-of) are defined as those that are realized within the domain defined by the syntactic head. This domain is usually the maximal projection of the head, but it may be further extended in some cas- es, such as raising constructions. In contrast, filler- gap relations are considered non-local. This local vs. non-local distinction in HPSG cuts across the relations that are localized in TAG via the domains defined by elementary trees. Each elementary tree typically represents all of the arguments that are dependent on a lexical functor. For example, the complement-of and filler-gap relations are localized in TAG, whereas the adjunct-of relation is not. 
Thus, there is a fundamental distinction between the different notions of localization that have been assumed in the two frameworks. If, at first sight, these frameworks seem to involve a radically differ- ent organization of grammatical relations, it is nat- ural to question whether it is possible to compile one into the other in a manner faithful to both, and more importantly, why this compilation is being ex- plored at all. We believe that by combining the two approaches both frameworks will profit. From the HPSG perspective, this compilation of- fers the potential to improve processing efficiency. HPSG is a "lexicalist" framework, in the sense that the lexicon contains the information that determines which specific categories can be combined. Howev- er, most HPSG grammars are not lexicalized in the stronger sense defined by Schabes et.al. (SAJ88), where lexicaiization means that each elementary structure in the grammar is anchored by some lex- ical item. For example, HPSG typically assumes a rule schema which combines a subject phrase (e.g. 92 NP) with a head phrase (e.g. VP), neither of which is a lexical item. Consider a sentence involving a transitive verb which is derived by applying two rule schemata, reducing first the object and then the sub- ject. In a standard HPSG derivation, once the head verb has been retrieved, it must be computed that these two rules (and no other rules) are applicable, and then information about the complement and subject constituents is projected from the lexicon according to the constraints on each rule schema. On the other hand, in a lexicalized TAG derivation, a tree structure corresponding to the combined in- stantiation of these two rule schemata is directly retrieved along with the lexical item for the verb. Therefore, a procedure that compiles HPSG to TAG can be seen as performing significant portions of an HPSG derivation at compile-time, so that the struc- tures projected from lexical items do not need to be derived at run-time. The compilation to TAG provides a way of producing a strongly lexicalized grammar which is equivalent to the original HPSG, and we expect this lexicalization to yield a compu- tational benefit in parsing (cf. (S J90)). This compilation strategy also raises several is- sues of theoretical interest. While TAG belongs to a class of mildly context-sensitive grammar formalisms (JVW91), the generative capacity of the formal- ism underlying HPSG (viz., recursive constraints over typed feature structures) is unconstrained, al- lowing any recursively enumerable language to be described. In HPSG the constraints necessary to characterize the class of natural languages are stat- ed within a very expressive formalism, rather than built into the definition of a more restrictive for- malism, such as TAG. Given the greater expressive power of the HPSG formalism, it will not be pos- sible to compile an aribitrary HPSG grammar into a TAG grammar. However, our compilation algo- rithm shows that particular HPSG grammars may contain constraints which have the effect of limiting the generative capacity to that of a mildly context- sensitive language.1 Additionally, our work provides a new perspective on the different types of con- stituent combination in HPSG, enabling a classifi- cation of schemata and principles in terms of more abstract functor-argument relations. 
From a TAG perspective, using concepts em- ployed in the HPSG framework, we provide an ex- plicit method of determining the content of the el- ementary trees (e.g., what to project from lexical items and when to stop the projection) from an HPSG source specification. This also provides a method for deriving the distinctions between initial and auxiliary trees, including the identification of 1We are only considering a syntactic fragment of HPSG here. It is not clear whether the semantic com- ponents of HPSG can also be compiled into a more con- strained formalism. foot nodes in auxiliary trees. Our answers, while consistent with basic tenets of traditional TAG anal- yses, are general enough to allow an alternate lin- guistic theory, such as HPSG, to be used as a basis for deriving a TAG. In this manner, our work also serves to investigate the utility of the TAG frame- work itself as a means of expressing different linguis- tic theories and intuitions. In the following we will first briefly describe the basic constraints we assume for the HPSG input grammar and the resulting form of TAG. Next we describe the essential algorithm that determines the projection of trees from the lexicon, and give formal definitions of auxiliary tree and foot node. We then show how the computation of "sub-maximal" projec- tions can be triggered and carried out in a two-phase compilation. 2 Background As the target of our translation we assume a Lexi- calized Tree-Adjoining Grammar (LTAG), in which every elementary tree is anchored by a lexical item (SAJ88). We do not assume atomic labelling of nodes, un- like traditional TAG, where the root and foot nodes of an auxiliary tree are assumed to be labelled iden- tically. Such trees are said to factor out recursion. However, this identity itself isn't sufficient to identi- fy foot nodes, as more than one frontier node may be labelled the same as the root. Without such atomic labels in HPSG, we are forced to address this issue, and present a solution that is still consistent with the notion of factoring recursion. Our translation process yields a lexicalized feature-based TAG (VSJ88) in which feature struc- tures are associated with nodes in the frontier of trees and two feature structures (top and bottom) with nodes in the interior. Following (VS92), the relationships between such top and bottom fea- ture structures represent underspecified domination links. Two nodes standing in this domination rela- tion could become the same, but they are necessarily distinct if adjoining takes place. Adjoining separates them by introducing the path from the root to the foot node of an auxiliary tree as a further specifica- tion of the underspecified domination link. For illustration of our compilation, we consid- er an extended HPSG following the specifications in (PS94)[404ff]. The rule schemata include rules for complementation (including head-subject and head- complement relations), head-adjunct, and filler-head relations. The following rule schemata cover the combina- tion of heads with subjects and other complements respectively as well as the adjunct constructions. 
2 We abstract from quite a number of properties and use the following abbreviations for feature names: S = SYNSEM, L = LOCAL, C = CAT, N-L = NON-LOCAL, D = DTRS.

Head-Subj-Schema: the mother's SUBJ and COMPS lists are empty; the HEAD-DTR has an empty COMPS list and a singleton SUBJ list whose element is identified with the SYNSEM value of the COMP-DTR.

Head-Comps-Schema: the mother shares its SUBJ value with the HEAD-DTR; the HEAD-DTR's COMPS value is the union of the mother's COMPS value and a list containing the SYNSEM value of the COMP-DTR.

Head-Adjunct-Schema: the mother shares its SUBJ and COMPS values with the HEAD-DTR; the ADJUNCT-DTR selects the HEAD-DTR by identifying its (head feature) MOD value with the HEAD-DTR's SYNSEM.

We assume a slightly modified and constrained treatment of non-local dependencies (SLASH), in which empty nodes are eliminated and a lexical rule is used instead. While SLASH introduction is based on the standard filler-head schema, SLASH percolation is essentially constrained to the HEAD spine.

Head-Filler-Schema: the HEAD-DTR is saturated (empty SUBJ and COMPS) and carries a singleton SLASH list whose element is identified with the LOCAL value of the FILLER-DTR; the mother's SLASH list is empty, i.e., the dependency is bound off.

SLASH termination is accounted for by a lexical rule, which removes an element from one of the valence lists (COMPS or SUBJ) and adds it to the SLASH list.

Lexical Slash-Termination-Rule: the output shares its properties with the LEX-DTR, except that one element of the LEX-DTR's valence specification is missing from the output's COMPS (or SUBJ) list and appears on the output's SLASH list instead; the LEX-DTR itself has an empty SLASH list.

The percolation of SLASH across head domains is lexically determined. Most lexical items will be specified as having an empty SLASH list. Bridge verbs (e.g., equi verbs such as want) or other heads allowing extraction out of a complement share their own SLASH value with the SLASH of the respective complement. 3

Equi and Bridge Verb: the verb's own N-L|SLASH value is identified with the N-L|SLASH value of the (VP or sentential) complement on its COMPS list.

3 We choose such a lexicalized approach, because it will allow us to maintain a restriction that every TAG tree resulting from the compilation must be rooted in a non-empty lexical item. The approach will account for extraction of complements out of complements, i.e., along paths corresponding to chains of government relations. As far as we can see, the only limitation arising from the percolation of SLASH only along head-projections is on extraction out of adjuncts, which may be desirable for some languages like English. On the other hand, these constructions would have to be treated by multi-component TAGs, which are not covered by the intended interpretation of the compilation algorithm anyway.

Finally, we assume that rule schemata and principles have been compiled together (automatically or manually) to yield more specific subtypes of the schemata. This does not involve a loss of generalization but simply means a further refinement of the type hierarchy. LP constraints could be compiled out beforehand or during the compilation of TAG structures, since the algorithm is lexicon driven.

3 Algorithm

3.1 Basic Idea

While in TAG all arguments related to a particular functor are represented in one elementary tree structure, the 'functional application' in HPSG is distributed over the phrasal schemata, each of which can be viewed as a partial description of a local tree. Therefore we have to identify which constituents in a phrasal schema count as functors and arguments. In TAG different functor-argument relations, such as head-complement, head-modifier etc., are represented in the same format as branches of a trunk projected from a lexical anchor. As mentioned, this anchor is not always equivalent to the HPSG notion of a head; in a tree projected from a modifier, for example, a non-head (ADJUNCT-DTR) counts as a functor. We therefore have to generalize over different types of daughters in HPSG and define a general notion of a functor.
We compute the functor-argument structure on the basis of a general selection relation. Following (Kas92) 4, we adopt the notion of a se- lector daughter (SD), which contains a selector fea- ture (SF) whose value constrains the argument (or non-selector) daughter (non-SD)) For example, in a head-complement structure, the SD is the HEAD-DTR, as it contains the list-valued feature coMPs (the SF) each of whose elements selects a C0m~-DTR, i.e., an el- ement of the CoMPs list is identified with the SYNSE~4 value of a COMP-DTR. We assume that a reduction takes place along with selection. Informally, this means that if F is the se- lector feature for some schema, then the value (or the element(s) in the list-value) of 1: that selects the non- SD(s) is not contained in the F value of the mother node. In case F is list-valued, we-assume that the rest of the elements in the list (those that did not select any daughter) are also contained in the F at the mother node. Thus we say that F has been re- duced by the schema in question. The compilation algorithm assumes that all HPSG schemata will satisfy the condition of si- multaneous selection and reduction, and that each schema reduces at least one SF. For the head- complement- and head-subject-schema, these con- ditions follow from the Valence Principle, and the SFs are coMPs and SUBJ, respectively. For the head- adjunct-schema, the ADJUNCT-DTR is the SD, because it selects the HEAD-DTR by its NOD feature. The NOD feature is reduced, because it is a head feature, whose value is inherited only from the HEAD-DTR and not from the ADJUNCT-DTR. Finally, for the filler-head- schema, the HEAD-DTR is the SD, as it selects the FILLER-DTR by its SLASH value, which is bound off, not inherited by the mother, and therefore reduced. We now give a general description of the compila- tion process. Essentially, we begin with a lexical de- 4The algorithm presented here extends and refines the approach described by (Kas92) by stating more precise criteria for the projection of features, for the termina- tion of the algorithm, and for the determination of those structures which should actually be used as elementary trees. 5Note that there might be mutual selection (as in the case of the specifier-head-relations proposed in (PS94)[44ff]). If there is mutual selection, we have to stipulate one of the daughters as the SD. The choice made would not effect the correctness of the compilation. scription and project phrases by using the schemata to reduce the selection information specified by the lexical type. Basic Algorithm Take a lexical type L and initial- ize by creating a node with this type. Add a node n dominating this node. For any schema S in which specified SFs of n are reduced, try to instantiate S with n corre- sponding to the SD of S. Add another node m dominating the root node of the instantiated schema. (The domination links are introduced to allow for the possibility of adjoining.) Re- peat this step (each time with n as the root node of the tree) until no further reduction is possible. We will fill in the details below in the following order: what information to raise across domination links (where adjoining may take place), how to de- termine auxiliary trees (and foot nodes), and when to terminate the projection. We note that the trees produced have a trunk leading from the lexical anchor (node for the given lexical type) to the root. 
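As a first approximation, the projection loop of the Basic Algorithm can be rendered as the following Prolog sketch. It deliberately ignores almost everything that matters in the real compiler — feature structures, domination links, schema instantiation, and the refinements of the following subsections — and keeps only the idea of repeatedly reducing selector features and recording the trunk with its selected daughters; the node(Subj, Comps) representation and the two schema clauses are our own toy choices.

% Two schemata, each reducing one selector feature of the node it takes
% as selector daughter and recording the selected (non-head) daughter.
apply_schema(node(Subj, [C|Cs]), node(Subj, Cs), comp_dtr(C)).   % head-comps
apply_schema(node([S], []),      node([], []),   subj_dtr(S)).   % head-subj

% project(+Node, +TrunkSoFar, -Trunk): climb from the lexical anchor,
% applying schemata until no specified selector feature can be reduced.
project(Node, Trunk, Trunk) :-
    \+ apply_schema(Node, _, _).
project(Node, Trunk0, Trunk) :-
    apply_schema(Node, Mother, Daughter),
    project(Mother, [step(Mother, Daughter) | Trunk0], Trunk).

% Projecting from a transitive verb (SUBJ <np>, COMPS <np>):
% ?- project(node([np], [np]), [], Trunk).
% Trunk = [step(node([], []), subj_dtr(np)),
%          step(node([np], []), comp_dtr(np))].

In the real algorithm, of course, each reduction step also inserts a domination link above the newly built node, which is what later allows auxiliary trees to adjoin along the trunk.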
The nodes that are siblings of nodes on the trunk, the selected daughters, are not elaborated further and serve either as foot nodes or substitution nodes.

3.2 Raising Features Across Domination Links

Quite obviously, we must raise the SFs across domination links, since they determine the applicability of a schema and licence the instantiation of an SD. If no SF were raised, we would lose all information about the saturation status of a functor, and the algorithm would terminate after the first iteration. There is a danger in raising more than the SFs. For example, the head-subject-schema in German would typically constrain a verbal head to be finite. Raising HEAD features would block its application to non-finite verbs and we would not produce the trees required for raising-verb adjunction. This is again because heads in HPSG are not equivalent to lexical anchors in TAG, and that other local properties of the top and bottom of a domination link could differ. Therefore HEAD features and other LOCAL features cannot, in general, be raised across domination links, and we assume for now that only the SFs are raised.

Raising all SFs produces only fully saturated elementary trees and would require the root and foot of any auxiliary tree to share all SFs, in order to be compatible with the SF values across any domination links where adjoining can take place. This is too strong a condition and will not allow the resulting TAG to generate all the trees derivable with the given HPSG (e.g., it would not allow unsaturated VP complements). In §3.5 we address this concern by using a multi-phase compilation. In the first phase, we raise all the SFs.

3.3 Detecting Auxiliary Trees and Foot Nodes

Traditionally, in TAG, auxiliary trees are said to be minimal recursive structures that have a foot node (at the frontier) labelled identical to the root. As such category labels (S, NP etc.) determine where an auxiliary tree can be adjoined, we can informally think of these labels as providing selection information corresponding to the SFs of HPSG. Factoring of recursion can then be viewed as saying that auxiliary trees define a path (called the spine) from the root to the foot where the nodes at extremities have the same selection information. However, a closer look at TAG shows that this is an oversimplification. If we take into account the adjoining constraints (or the top and bottom feature structures), then it appears that the root and foot share only some selection information. Although the encoding of selection information by SFs in HPSG is somewhat different than that traditionally employed in TAG, we also adopt the notion that the extremities of the spine in an auxiliary tree share some part (but not necessarily all) of the selection information. Thus, once we have produced a tree, we examine the root and the nodes in its frontier. A tree is an auxiliary tree if the root and some frontier node (which becomes the foot node) have some non-empty SF value in common. Initial trees are those that have no such frontier nodes.

T1: tree anchored by want (equi verb) [tree diagram not legible in the source text]

In the trees shown, nodes detected as foot nodes are marked with *. Because of the SUBJ and SLASH values, the HEAD-DTR is the foot of T2 below (anchored by an adverb) and COMP-DTR is the foot of T3 (anchored by a raising verb).
Note that in the tree T1 anchored by an equi-verb, the foot node is detected because the SLASH value is shared, although the SUBJ is not. As mentioned, we assume that bridge verbs, i.e., verbs which allow extraction out of their complements, share their SLASH value with their clausal complement.

3.4 Termination

Returning to the basic algorithm, we will now consider the issue of termination, i.e., how much do we need to reduce as we project a tree from a lexical item. Normally, we expect a SF with a specified value to be reduced fully to an empty list by a series of applications of rule schemata. However, note that the SLASH value is unspecified at the root of the trees T2 and T3. Of course, such nodes would still unify with the SD of the filler-head-schema (which reduces SLASH), but applying this schema could lead to an infinite recursion. Applying a reduction to an unspecified SF is also linguistically unmotivated as it would imply that a functor could be applied to an argument that it never explicitly selected.

However, simply blocking the reduction of a SF whenever its value is unspecified isn't sufficient. For example, the root of T2 specifies the SUBJ to be a non-empty list. Intuitively, it would not be appropriate to reduce it further, because the lexical anchor (adverb) doesn't semantically license the SUBJ argument itself. It merely constrains the modified head to have an unsaturated SUBJ.

T2: tree anchored by a VP-adverb [tree diagram not legible in the source text]

Raising Verb (and Infinitive Marker to) [lexical entry AVM not legible in the source text]

T3: tree anchored by a raising verb [tree diagram not legible in the source text]

To motivate our termination criterion, consider the adverb tree and the asterisked node (whose SLASH value is shared with SLASH at the root). Being a non-trunk node, it will either be a foot or a substitution node. In either case, it will eventually be unified with some node in another tree. If that other node has a reducible SLASH value, then we know that the reduction takes place in the other tree, because the SLASH value must have been raised across the domination link where adjoining takes place. As the same SLASH (and likewise SUBJ) value should not be reduced in both trees, we state our termination criteria as follows:

Termination Criterion  The value of an SF F at the root node of a tree is not reduced further if it is an empty list, or if it is shared with the value of F at some non-trunk node in the frontier.

Note that because of this termination criterion, the adverb tree projection will stop at this point. As the root shares some selector feature values (SLASH and SUBJ) with a frontier node, this node becomes the foot node. As observed above, adjoining this tree will preserve these values across any domination links where it might be adjoined; and if the values stated there are reducible then they will be reduced in the other tree. While auxiliary trees allow arguments selected at the root to be realized elsewhere, it is never the case for initial trees that an argument selected at the root can be realized elsewhere, because by our definition of initial trees the selection of arguments is not passed on to a node in the frontier. We also obtain from this criterion a notion of local completeness. A tree is locally complete as soon as all arguments which it licenses and which are not licensed elsewhere are realized.
Global completeness is guaranteed because the notion of "elsewhere" is only and always defined for auxiliary trees, which have to adjoin into an initial tree.

3.5 Additional Phases

Above, we noted that the preservation of some SFs along a path (realized as a path from the root to the foot of an auxiliary tree) does not imply that all SFs need to be preserved along that path. Tree T1 provides such an example, where a lexical item, an equi-verb, triggers the reduction of an SF by taking a complement that is unsaturated for SUBJ but never shares this value with one of its own SF values.

To allow for adjoining of auxiliary trees whose root and foot differ in their SFs, we could produce a number of different trees representing partial projections from each lexical anchor. Each partial projection could be produced by raising some subset of SFs across each domination link, instead of raising all SFs. However, instead of systematically raising all possible subsets of SFs across domination links, we can avoid producing a vast number of these partial projections by using auxiliary trees to provide guidance in determining when we need to raise only a particular subset of the SFs. Consider T1 whose root and foot differ in their SFs. From this we can infer that a SUBJ SF should not always be raised across domination links in the trees compiled from this grammar. However, it is only useful to produce a tree in which the SUBJ value is not raised when the bottom of a domination link has both a one element list as value for SUBJ and an empty COMPS list. Having an empty SUBJ list at the top of the domination link would then allow for adjunction by trees such as T1.

This leads to the following multi-phase compilation algorithm. In the first phase, all SFs are raised. It is determined which trees are auxiliary trees, and then the relationships between the SFs associated with the root and foot in these auxiliary trees are recorded. The second phase begins with lexical types and considers the application of sequences of rule schemata as before. However, immediately after applying a rule schema, the features at the bottom of a domination link are compared with the foot nodes of auxiliary trees that have differing SFs at foot and root. Whenever the features are compatible with such a foot node, the SFs are raised according to the relationship between the root and foot of the auxiliary tree in question. This process may need to be iterated based on any new auxiliary trees produced in the last phase.

3.6 Example Derivation

In the following we provide a sample derivation for the sentence (I know) what Kim wants to give to Sandy. Most of the relevant HPSG rule schemata and lexical entries necessary to derive this sentence were already given above. For the noun phrases what, Kim and Sandy, and the preposition to no special assumptions are made. We therefore only add the entry for the ditransitive verb give, which we take to subcategorize for a subject and two object complements.

Ditransitive Verb [lexical entry AVM for give not legible in the source text]

From this lexical entry, we can derive in the first phase a fully saturated initial tree by applying first the lexical slash-termination rule, and then the head-complement-, head-subject and filler-head-rule. Substitution at the nodes on the frontier would yield the string what Kim gives to Sandy.
T4: fully saturated initial tree for gives derived in the first phase [tree diagram not legible in the source text]

The derivations for the trees for the matrix verb want and for the infinitival marker to (equivalent to a raising verb) were given above in the examples T1 and T3. Note that the SUBJ feature is only reduced in the former, but not in the latter structure. In the second phase we derive from the entry for give another initial tree (T5) into which the auxiliary tree T1 for want can be adjoined at the topmost domination link. We also produce a second tree with similar properties for the infinitive marker to (T6).

T5: initial tree derived from give in the second phase [tree diagram not legible in the source text]

T6: corresponding tree for the infinitive marker to [tree diagram not legible in the source text]

By first adjoining the tree T6 at the topmost domination link of T5 we obtain a structure T7 corresponding to the substring what ... to give to Sandy. Adjunction involves the identification of the foot node with the bottom of the domination link and identification of the root with the top of the domination link. Since the domination link at the root of the adjoined tree mirrors the properties of the adjunction site in the initial tree, the properties of the domination link are preserved.

T7: result of adjoining T6 into T5, covering the substring what ... to give to Sandy [tree diagram not legible in the source text]

The final derivation step then involves the adjunction of the tree for the equi verb into this tree, again at the topmost domination link. This has the effect of inserting the substring Kim wants into what ... to give to Sandy.

4 Conclusion

We have described how HPSG specifications can be compiled into TAG, in a manner that is faithful to both frameworks. This algorithm has been implemented in Lisp and used to compile a significant fragment of a German HPSG. Work is in progress on compiling an English grammar developed at CSLI. This compilation strategy illustrates how linguistic theories other than those previously explored within the TAG formalism can be instantiated in TAG, allowing the association of structures with an enlarged domain of locality with lexical items. We have generalized the notion of factoring recursion in TAG, by defining auxiliary trees in a way that is not only adequate for our purposes, but also provides a uniform treatment of extraction from both clausal and non-clausal complements (e.g., VPs) that is not possible in traditional TAG.

It should be noted that the results of our compilation will not always conform to conventional linguistic assumptions often adopted in TAGs, as exemplified by the auxiliary trees produced for equi verbs. Also, as the algorithm does not currently include any downward expansion from complement nodes on the frontier, the resulting trees will sometimes be more fractioned than if they had been specified directly in a TAG.

We are currently exploring the possibility of compiling HPSG into an extension of the TAG formalism, such as D-tree grammars (RVW95) or the UVG-DL formalism (Ram94). These somewhat more powerful formalisms appear to be adequate for some phenomena, such as extraction out of adjuncts (recall §2) and certain kinds of scrambling, which our current method does not handle.
More flexible methods of combining trees with dominance links may also lead to a reduction in the number of trees that must be produced in the second phase of our compilation. There are also several techniques that we expect to lead to improved parsing efficiency of the resulting TAG. For instance, it is possible to declare specific non-SFs which can be raised, thereby reducing the number of useless trees produced during the multi-phase compilation. We have also developed a scheme to effectively organize the trees associated with lexical items.

References

Robert Kasper. On Compiling Head Driven Phrase Structure Grammar into Lexicalized Tree Adjoining Grammar. In Proceedings of the 2nd Workshop on TAGs, Philadelphia, 1992.

A. K. Joshi, K. Vijay-Shanker and D. Weir. The convergence of mildly context-sensitive grammatical formalisms. In P. Sells, S. Shieber, and T. Wasow, eds., Foundational Issues in Natural Language Processing. MIT Press, 1991.

Carl Pollard and Ivan Sag. Head Driven Phrase Structure Grammar. CSLI, Stanford & University of Chicago Press, 1994.

O. Rambow. Formal and Computational Aspects of Natural Language Syntax. Ph.D. thesis, Univ. of Philadelphia, Philadelphia, 1994.

O. Rambow, K. Vijay-Shanker and D. Weir. D-Tree Grammars. In ACL-95.

Y. Schabes, A. Abeille, and A. K. Joshi. Parsing Strategies with 'Lexicalized' Grammars: Application to Tree Adjoining Grammars. In COLING-88, pp. 578-583.

Y. Schabes and A. K. Joshi. Parsing with lexicalized tree adjoining grammar. In M. Tomita, ed., Current Issues in Parsing Technologies. Kluwer Academic Publishers, 1990.

K. Vijay-Shanker. Using Descriptions of Trees in a TAG. Computational Linguistics, 18(4):481-517, 1992.

K. Vijay-Shanker and A. K. Joshi. Feature Structure Based Tree Adjoining Grammars. In COLING-88.
Memoization of Coroutined Constraints

Mark Johnson
Cognitive and Linguistic Sciences, Box 1978
Brown University
Providence, RI 02912, USA
Mark_Johnson@Brown.edu

Jochen Dörre*
Institut für maschinelle Sprachverarbeitung
Universität Stuttgart
D-70174 Stuttgart, Germany
Jochen.Doerre@ims.uni-stuttgart.de

Abstract

Some linguistic constraints cannot be effectively resolved during parsing at the location in which they are most naturally introduced. This paper shows how constraints can be propagated in a memoizing parser (such as a chart parser) in much the same way that variable bindings are, providing a general treatment of constraint coroutining in memoization. Prolog code for a simple application of our technique to Bouma and van Noord's (1994) categorial grammar analysis of Dutch is provided.

1 Introduction

As the examples discussed below show, some linguistic constraints cannot be effectively resolved during parsing at the location in which they are most naturally introduced. In a backtracking parser, a natural way of dealing with such constraints is to coroutine them with the other parsing processes, reducing them only when the parse tree is sufficiently instantiated so that they can be deterministically resolved. Such parsers are particularly easy to implement in extended versions of Prolog (such as Prolog II, SICStus Prolog and Eclipse) which have such coroutining facilities built-in. Like all backtracking parsers, they can exhibit non-termination and exponential parse times in situations where memoizing parsers (such as chart parsers) can terminate in polynomial time. Unfortunately, the coroutining approach, which requires that constraints share variables in order to communicate, seems to be incompatible with standard memoization techniques, which require systematic variable-renaming (i.e., copying) in order to avoid spurious variable binding.

*This research was largely conducted at the Institut für maschinelle Sprachverarbeitung in Stuttgart. We would like to thank Andreas Eisele, Pascal van Hentenryck, Martin Kay, Fernando Pereira, Edward Stabler and our colleagues at the Institut für maschinelle Sprachverarbeitung for helpful comments and suggestions. All remaining errors are our own. The Prolog code presented in this paper is available via anonymous ftp from Ix.cog.brown.edu as /pub/lemma.tar.Z

For generality, conciseness and precision, we formalize our approach to memoization and constraints within Höhfeld and Smolka's (1988) general theory of Constraint Logic Programming (CLP), but we discuss how our method can be applied to more standard chart parsing as well. This paper extends our previous work reported in Dörre (1993) and Johnson (1993) by generalizing those methods to arbitrary constraint systems (including feature-structure constraints), even though for reasons of space such systems are not discussed here.

2 Lexical rules in Categorial Grammar

This section reviews Bouma and van Noord's (1994) (BN henceforth) constraint-based categorial grammar analysis of modification in Dutch, which we use as our primary example in this paper. However, the memoizing CLP interpreter presented below has also been applied to GB and HPSG parsing, both of which benefit from constraint coroutining in parsing.
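The two directives below are a minimal illustration, ours rather than BN's, of the built-in coroutining facilities mentioned in the introduction; when/2 and freeze/2 are the goal-suspension primitives of SICStus Prolog, and the suspended goals are taken from the grammar presented next (its operator declarations are assumed to be loaded).

% Run add_adjuncts/2 only once Cat has been instantiated to a non-variable term.
?- when(nonvar(Cat), add_adjuncts(s\np\np, Cat)).

% Roughly the same effect for a single variable: suspend division/2 until Cat is bound.
?- freeze(Cat, division(Cat0, Cat)).

The memoizing proof procedure of section 3 obtains the same delaying behaviour through its selection rule rather than through such primitives.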
BN can explain a number of puzzling scope phenomena by proposing that heads (specifically, verbs) subcategorize for adjuncts as well as arguments (rather than allowing adjuncts to subcategorize for the arguments they modify, as is standard in Categorial Grammar). For example, the first reading of the Dutch sentence

(1) Frits opzettelijk     Marie lijkt te ontwijken
          deliberately          seems    avoid
    'Fritz deliberately seems to avoid Marie'
    'Fritz seems to deliberately avoid Marie'

is obtained by the analysis depicted in Figure 1. The other reading of this sentence is produced by a derivation in which the adjunct addition rule 'A' adds an adjunct to lijkt te, and applies vacuously to ontwijken.

Figure 1: The BN analysis of (1). In this derivation 'VP1' abbreviates 'S\NP1', 'A' is a lexical rule which adds adjuncts to verbs, 'D' is a lexical 'division' rule which enables a control or raising verb to combine with arguments of higher arity, and '#' is a unary modal operator which diacritically marks infinitival verbs. [derivation diagram not legible in the source text]

It is easy to formalize this kind of grammar in pure Prolog. In order to simplify the presentation of the proof procedure interpreter below, we write clauses as 'H ::- B' where H is an atom (the head) and B is a list of atoms (the negative literals). The atom x(Cat, Left, Right) is true iff the substring between the two string positions Left and Right can be analyzed as belonging to category Cat. (As is standard, we use suffixes of the input string for string positions.) The modal operator '#' is used to diacritically mark untensed verbs (e.g., ontwijken), and prevent them from combining with their arguments. Thus untensed verbs must combine with other verbs which subcategorize for them (e.g., lijkt te), forcing all verbs to appear in a 'verb cluster' at the end of a clause. For simplicity we have not provided a semantics here, but it is easy to add a 'semantic interpretation' as a fourth argument in the usual manner. The forward and backward application rules are specified as clauses of x/3. Note that the application rules are left-recursive, so a top-down parser will in general fail to terminate with such a grammar.

:- op(990, xfx, ::- ).   % Clause operator
:- op(400, yfx, \ ).     % Backward combinator
:- op(300, fy, # ).      % Modal operator

x(X, Left, Right) ::-    % Forward application
   [ x(X/Y, Left, Mid), x(Y, Mid, Right) ].
x(X, Left, Right) ::-    % Backward application
   [ x(Y, Left, Mid), x(X\Y, Mid, Right) ].
x(X, [Word|Words], Words) ::- [ lex(Word, X) ].

Lexical entries are formalized using a two place relation lex(Word, Cat), which is true if Cat is a category that the lexicon assigns to Word.

lex('Frits', np) ::- [].
lex('Marie', np) ::- [].
lex(opzettelijk, adv) ::- [].
lex(ontwijken, #X) ::- [ add_adjuncts(s\np\np, X) ].
lex(lijkt_te, X / #Y) ::- [ add_adjuncts((s\np)/(s\np), X0), division(X0, X/Y) ].

The add_adjuncts/2 and division/2 predicates formalize the lexical rules 'A' (which adds adjuncts to verbs) and 'D' (the division rule).

add_adjuncts(s, s) ::- [].
add_adjuncts(X, Y\adv) ::- [ add_adjuncts(X, Y) ].
add_adjuncts(X\A, Y\A) ::- [ add_adjuncts(X, Y) ].
add_adjuncts(X/A, Y/A) ::- [ add_adjuncts(X, Y) ].

division(X, X) ::- [].
division(X0/Y0, (X\Z)/(Y\Z)) ::- [ division(X0/Y0, X/Y) ].
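As a usage illustration of the grammar just given (our own, not part of the original presentation): with the memoizing interpreter of section 5, a parse of example (1) could be requested by posing the whole sentence as an x/3 goal; the choice of s as the start category and the list spelling of the words are our assumptions about how the grammar is meant to be invoked.

% Hypothetical top-level call; prove/2 is the interpreter defined in section 5.
?- prove(x(s, ['Frits', opzettelijk, 'Marie', lijkt_te, ontwijken], []),
         Constraints).

Each solution binds Constraints to the (possibly empty) list of still unresolved constraints for one analysis; alternative analyses are enumerated through backtracking.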
Note that the definitions of add_adjuncts/2 and division/2 are recursive, and have an infinite number of solutions when only their first arguments are instantiated. This is necessary because the number of adjuncts that can be associated with any given verb is unbounded. Thus it is infeasible to enumerate all of the categories that could be associated with a verb when it is retrieved from the lexicon, so following BN, we treat the predicates add_adjuncts/2 and division/2 as coroutined constraints which are only resolved when their second arguments become sufficiently instantiated. As noted above, this kind of constraint coroutining is built into a number of Prolog implementations. Unfortunately, the left recursion inherent in the combinatory rules mentioned earlier dooms any standard backtracking top-down parser to non-termination, no matter how coroutining is applied to the lexical constraints. As is well-known, memoizing parsers do not suffer from this deficiency, and we present a memoizing interpreter below which does terminate.

3 The Lemma Table proof procedure

This section presents a coroutining, memoizing CLP proof procedure. The basic intuition behind our approach is quite natural in a CLP setting like the one of Höhfeld and Smolka, which we sketch now. A program is a set of definite clauses of the form

    p(X) ← q1(X1) ∧ ... ∧ qn(Xn) ∧ φ

where the Xi are vectors of variables, p(X) and qi(Xi) are relational atoms and φ is a basic constraint coming from a basic constraint language C. φ will typically refer to some (or all) of the variables mentioned. The language of basic constraints is closed under conjunction and comes with (computable) notions of consistency (of a constraint) and entailment (φ1 ⊨C φ2) which have to be invariant under variable renaming.1 Given a program P and a goal G, which is a conjunction of relational atoms and constraints, a P-answer of G is defined as a consistent basic constraint φ such that φ → G is valid in every model of P. SLD-resolution is generalized in this setting by performing resolution only on relational atoms and simplifying (conjunctions of) basic constraints thus collected in the goal list. When finally only a consistent basic constraint remains, this is an answer constraint φ. Observe that this use of basic constraints generalizes the use of substitutions in ordinary logic programming and the (simplification of a) conjunction of constraints generalizes unification. Actually, pure Prolog can be viewed as a syntactically sugared variant of such a CLP language with equality constraints as basic constraints, where a standard Prolog clause p(T) ← q1(T1), ..., qn(Tn) is seen as an abbreviation for a clause in which the equality constraints have been made explicit by means of new variables and new equalities

    p(X) ← X = T, X1 = T1, ..., Xn = Tn, q1(X1), ..., qn(Xn).

Here the Xi are vectors of variables and the Ti are vectors of terms.

Now consider a standard memoizing proof procedure such as Earley Deduction (Pereira and Warren 1983) or the memoizing procedures described by Tamaki and Sato (1986), Vieille (1989) or Warren (1992) from this perspective. Each memoized goal is associated with a set of bindings for its arguments; so in CLP terms each memoized goal is a conjunction of a single relational atom and zero or more equality constraints.

1This essentially means that basic constraints can be recast as first-order predicates.
A completed (i.e., atomic) clause p(T) with an instantiated argument T abbreviates the non-atomic clause p(X) ← X = T, where the equality constraint makes the instantiation specific. Such equality constraints are 'inherited' via resolution by any clause that resolves with the completed clause.

In the CLP perspective, variable-binding or equality constraints have no special status; informally, all constraints can be treated in the same way that pure Prolog treats equality constraints. This is the central insight behind the Lemma Table proof procedure: general constraints are permitted to propagate into and out of subcomputations in the same way that Earley Deduction propagates variable bindings. Thus the Lemma Table proof procedure generalizes Earley Deduction in the following ways:

1. Memoized goals are in general conjunctions of relational atoms and constraints. This allows constraints to be passed into a memoized subcomputation. We do not use this capability in the categorial grammar example (except to pass in variable bindings), but it is important in GB and HPSG parsing applications. For example, memoized goals in our GB parser consist of conjunctions of X' and ECP constraints. Because the X' phrase-structure rules freely permit empty categories, every string has infinitely many well-formed analyses that satisfy the X' constraints, but the conjoined ECP constraint rules out all but a very few of these empty nodes.

2. Completed clauses can contain arbitrary negative literals (rather than just equality constraints, as in Earley Deduction). This allows constraints to be passed out of a memoized subcomputation. In the categorial grammar example, the add_adjuncts/2 and division/2 associated with a lexical entry cannot be finitely resolved, as noted above, so e.g., a clause

   x(#X, [ontwijken], []) ::- [ add_adjuncts(s\np\np, X) ].

is classified as a completed clause; the add_adjuncts/2 constraint in its body is inherited by any clause which uses this lemma.

3. Subgoals can be selected in any order (Earley Deduction always selects goals in left-to-right order). This allows constraint coroutining within a memoized subcomputation. In the categorial grammar example, a category becomes more instantiated when it combines with arguments, allowing eventually the add_adjuncts/2 and division/2 to be deterministically resolved. Thus we use the flexibility
2 For example, because of the backward application rule and the left-to-right evaluation our parser uses, eventually it will search at every left string position for an uninstantiated category (the variable Y in the clause), we might as well abstract all memoized goals of the form x(C, L, R) to x(_, L, _), i.e., goals in which the category and right string position are uninstan- tinted. Making the equality constraints explicit, we see that the abstracted goal is obtained by merely selecting the underlined subset of these below: x(Xl,X2, X3),Xl = C, X2 = L, Xa = R. While our formal presentation does not discuss ab- straction (since it can be implemented in terms of constraint selection as just described), because our implementation uses the underlying Prolog's unifi- cation mechanism to solve equality constraints over terms, it provides an explicit abstraction operation. Now we turn to the specification of the algorithm itself, beginning with the basic computational enti- ties it uses. Definition 1 A (generalized) goal is a multiset of relational atoms and constraints. A (generalized) clause Ho 4-- Bo is an ordered pair of generalized goals, where /fro contains at least one relational atom. A relational interpretation .4 (see HShfeld and Smolka 1988 for definition) satisfies a goal G iff .A satisfies each element of G, and it satisfies a clause H0 *--- B0 iff either .A fails to satisfy some element of B0 or .A satisfies each element of H0. 2After this paper was accepted, we discovered that a more general formulation of abstraction is required for systems using a hierarchy of types, such as typed feature structure constraints (Carpenter 1992). In applications of the Lemma Table Proof Procedure to such systems it may be desirable to abstract from a 'strong' type cons- tralnt in the body of a clause to a logically 'weaker' type constraint in the memoized goal. Such a form of ab- straction cannot be implemented using the selection rule alone. This generalizes the standard notion of clause by allowing the head H0 to consist of more than one atom. The head H0 is interpreted conjunctively; i.e., if each element of B0 is true, then so is each element of H0. The standard definition of resolution extends unproblematically to such clauses. Definition 2 We say that a clause co - H0 ~ B0 resolves with a clause cl = Ht ~-- BI on a non-empty set ofliterals C C_ Bo iff there is a variant Cl ~ of el of the form C *--- BI' such that V(co)NV(Bx') C V(C) (i.e., the variables common to e0 and BI ~ also appear in C, so there is no accidental variable sharing). If Co resolves with Cl on C, then the clause H0 ~ (B0 - C) U Bx' is called a resolvent of co with C 1 On C. Now we define items, which are the basic computa- tional units that appear on the agenda and in the lemma tables, which record memoized subcomputa- tions. Definition 3 An item is a pair (t, c) where c is a clause and t is a tag, i.e., one of program, solution or table(B) for some goal B. A lemma table for a goal G is a pair (G, La) where La is a finite list of items. The algorithm manipulates a set T of lemma tables which has the property that the first components of any two distinct members of T are distinct. This justifies speaking of the (unique) lemma table in T for a goal G. Tags are associated with clauses by a user- specified control rule, as described below. The tag associated with a clause in an item identifies the ope- ration that should be performed on that clause. 
The solution tag labels 'completed' clauses, the program tag directs the proof procedure to perform a non- memoizing resolution of one of the clanse's negative literals with program clauses (the particular nega- tive literal is chosen by a user-specified selection rule, as in standard SLD resolution), and the table(B) tag indicates that a subcomputation with root goal B (which is always a subset of the clause's negative literals) should be started. Definition 4 A control rule is a function from clau- ses G *-- B to one of program, solution or table(C) for some goal C C B. A selection rule is a function from clauses G *-- B where B contains at least one rela- tional atom to relational atoms a, where a appears in B. Because program steps do not require memoization and given the constraints on the control rule just mentioned, the list LG associated with a lemma table (G, LG) will only contain items of the form (t, G ,-- B) where t is either solution or table(C) for some goal C C_ B. Definition 5 To add an item an item e = (t, H ~ B) to its table means to replace the table (H, L) in T with (H, JelL]). 103 Input A non-empty goal G, a program P, a selection rule S, and a control rule R. Output A set of goals G' for which RiG' ) = solution and P ~ G *-- G'. Global Data Structures A set T of lemma tables and a set A of items called the agenda. Algorithm Set T := {(G, 0)} and A := ((program, G *-- G)}. Until A is empty, do: Remove an item e = it, c) from A. Case t of program For each clause p E P such that c resolves with p on S(c), choose a corresponding resolvent e' and add iRic'), c') to A. table(B) Add e to its table, s If T contains a table (B', L) where B' is a variant of B then for each item (solution, d) E L such that c resolves with d on B choose a corresponding resolvent d' and add iR(c"), d') to A. Otherwise, add a new table i B, ¢) to T, and add (program, B ~-- B) to the agenda. solution Add e to its table. Let e = H ~ B. Then for each item of the form (tabh(H'), d) in any table in T where H' is a variant of H and c' resolves with c on H', choose a corresponding resolvent d' and add (R(d'), d') to A. Set r := {B: (solution, G *-- B) E L,/G, L) E T}. Figure 2: The Lemma Table algorithm The formal description of the Lemma Table proof procedure is given in Figure 2. We prove the so- undness and completeness of the proof procedure in DSrre and Johnson (in preparation). In fact, so- undness is easy to show, since all of the operations are resolution steps. Completeness follows from the fact that Lemma Table proofs can be 'unfolded' into standard SLD search trees (this unfolding is well- founded because the first step of every table-initiated subcomputation is required to be a program reso- lution), so completeness follows from HShfeld and Smolka's completeness theorem for SLD resolution in CLP. 4 A worked example Returning to the categorial grammar example above, the control rule and selection rule are specified by the Prolog code below, which can be informally described as follows. All x/3 literals are classi- fied as 'memo' literals, and add_adjuncts/2 and division/2 whose second arguments are not suf- ficiently instantiated are classified as 'delay' literals. If the clause contains a memo literal G, then the con- trol rule returns tablei[G]). 
Otherwise, if the clause contains any non-delay literals, then the control rule 3In order to handle the more general form of abstrac- tion discussed in footnote 2 which may be useful with ty- ped feature structure constraints, replace B with a(B) in this step, where a(B) is the result of applying the abstraction operation to B. The abstraction operation should have the property that a(B) is exactly the same as B, except that zero or more constraints in B are replaced with logically weaker constraints. returns program and the selection rule chooses the left-most such literal. If none of the above apply, the control rule returns solution. To simplify the in- terpreter code, the Prolog code for the selection rule and tableiG ) output of the control rule also return the remaining literals along with chosen goal. :- ensure_loaded(library(lists)). :- op(990, fx, [delay, memo]). delay division(_, X/Y) :- var(l), var(Y). delay add_adjuncts(_, X/Y) :- vat(X), vat(Y). memo x( ..... ). control(GsO, Control) :- memo(G), select(G, CeO, Gs) -> Control = table([G], Gs) ; member(G, GsO), \+ delay(G) -> Control = program ; Control = solution. selection(GsO, G, Gs) :- select(G1, GsO, Gel), \+ delay(Gl) -> G = Gl, Ca = Gel. Because we do not represent variable binding as ex- plicit constraints, we cannot implement 'abstraction' by means of the control rule and require an explicit abstraction operation. The abstraction operation here unbinds the first and third arguments of x/3 goals, as discussed above. abetraction([x(_,Left,_)], [x(_,Left,_)]). 104 0.1[o] e 0.211] T 0.311] T 0.411] P 0.514] s 0.6[2,5] W 1.716] P 1.817] T 1.917] T 1.1017] P 1.111101 S 0.1216,11] S 0.1312,12] W 2.14113] P 2.15114] W 2.161141 T 0.1713,12] T 1.1819,11] T 0.1913,5] T x(A, [l_t, o], B) ~-- x(A, [l_t, o], B). x(A, [l_t, o], B) ~-- x(A/C, [l_t, o], D), x(C, D, B). x(A, [l_t, o], B) ~ x(C, [l_t, o], D), x(A\C, D, B). x(A, [l_t, o], [o]) *-- lex(l_t, A). x(A/#B, [l_t, o], [o]) ~-- add(s\np/(s\np), C), div(C, A/B). x(A, [l_t, o], B) ~ add(s\np/(s\np), C), div(C, A/D), x(#D, [o], B). x(A, [o], B) ~ x(A, [o], S). x(A, [o], B) *-- x(A/C, [o], D), x(C, D, B). x(A, [o], B) ~-- x(C, [o], D), x(A\C, D, S). x(A, [o], 4) ~- lex(o, A). x(#A, [o], ~) ~- add(s\np\np, A). x(A, [l_t, o], 0) ~'- add(s\np\np, S), add(s\np/(s\np), C), div(C, A/B). x(A, [Lt, o], B) *-- add(s\np\np, C), add(s\np/(s\np), D), div(D, A/E/C), x(E, Q, B). x(A, 0, B) ~- x(A, 0, B). x(A, 0, B) ~- x(A/C, Q, D), x(C, D, B). x(h, 4, B) +-- x(C, 4, D), x(A\C, D, B). x(A, [l_t, o], B) ~-- add(s\np\np, C), add(s\np/(s\np), D), div(D, E/C), x(A\E, ~, B). x(A, [o], B) ~-- add(s\np\np, C), x(A\#C, ~, B). x(A, [l_t, o], B) ~ add(s\np/(s\np), C), div(C, D/E), x(A\(D/#E), [o], B). Figure 3: The items produced during the proof of x(¢, [lijkLte,on~wijkenJ ,=) using the control and selection rules specified in the text. The prefix t.n[a] T identifies the table t to which this item belongs, assigns this item a unique identifying number n, provides the number(s) of the item(s) a which caused this item to be created, and displays its tag T (P for 'program', T for 'table' and S for 'solution'). The selected literal(s) are shown underlined. To save space, 'add_adjuncts' is abbreviated by 'add', 'division' by 'div', 'lijkt_te' by 'It', and 'ontwijken' by 'o'. Figure 3 depicts the proof of a parse of the verb clu- ster in (1). Item 1 is generated by the initial goal; its sole negative literal is selected for program reso- lution, producing items 2-4 corresponding to three program clauses for x/3. 
Because items 2 and 3 con- tain 'memo' literals, the control rule tags them table; there already is a table for a variant of these goals (after abstraction). Item 4 is tagged program bec- ause it contains a negative literal that is not 'memo' or 'delay'; the resolution of this literal with the pro- gram clauses for lex/3 produces item 5 containing the constraint literals associated with lijkt re. Both of these are classified as 'delay' literals, so item 5 is tagged solution, and both are 'inherited' when item 5 resolves with the table-tagged items 2 and 3, produ- cing items 6 (corresponding to a right application analysis with lijkt te as functor) and item 19 (cor- responding to a left application analysis with ont. wijken as functor) respectively. Item 6 is tagged table, since it contains a x/3 literal; because this goal's second argument (i.e., the left string position) differs from that of the goal associated with table 0, a new table (table 1) is constructed, with item 7 as its first item. The three program clauses for x/3 are used to re- solve the selected literal in item 7, just as in item 1, yielding items 8-10. The lex/3 literal in item 10 is resolved with the appropriate program clause, pro- ducing item 11. Just as in item 5, the second argu- ment of the single literal in item 11 is not sufficiently instantiated, so item 11 is tagged solution, and the unresolved literal is 'inherited' by item 12. Item 12 contains the partially resolved analysis of the verb complex. Items 13-16 analyze the empty string; notice that there are no solution items for table 2. Items 17-19 represent partial alternative analyses of the verb cluster where the two verbs combine using other rules than forward application; again, these yield no solution items, so item 12 is the sole analy- sis of the verb cluster. 5 A simple interpreter This section describes an implementation of the Lemma Table proof procedure in Prolog, designed for simplicity rather than efficiency. Tables are stored in the Prolog database, and no explicit agenda is used. The dynamic predicate goal_Cable(G, I) records the initial goals G for each table subcompu- tation and that table's identifying index I (a number assigned to each table when it is created). The dy- namic predicate table_solution(I, S) records all of the solution items generated for table I so far, and table_paxent(I, T) records the table items T, called 'parent items' below, which are 'waiting' for additio- nal solution items from table I. The 'top level' goal is prove(G, Cs), where G is 105 a single atom (the goal to be proven), and Cs is a list of (unresolved) solution constraints (different solutions are enumerated through backtracking). prove/2 starts by retracting the tables associa- ted with previous computations, asserting the table entry associated with the initial goal, and then calls take_action/2 to perform a program resolution on the initial goal. After all succeeding steps are com- plete, prove/2 returns the solutions associated with table 0. prove(Goal, _Constraints) :- retractall (goal_gable(_, _) ), retractall (table_solut ion (_, _) ), retractall (gable_parent (_, _) ), regractall (counter (_)), assert(goal_gable( [Goal], O)), ¢ake_acgion(proEram , [Goal] : :-[Goal], O), fail. prove(Goal, Constraints) :- table_solution(O, [Goal] : :-Constraints). The predicate take_action(L, C, I) processes items. L is the item's label, C its clause and I is the in- dex of the table it belongs to. 
The first clause calls complete/2 to resolve the solution clause with any parent items the table may have, and the third clause constructs a parent item term (which enco- des both the clause, the tabled goal, and the in- dex of the table the item belongs to) and calls insert_into_table/2 to insert it into the appro- priate table. take_action(solution, Clause, Index) :- assert (Cable_solution(Index, Clause)), findall(P, gable_parent (Index, P), Paren¢Items), member (ParentIgem, ParenCItems), complete (ParentItem, Clause). take_acCion(proEram , Head: :-Goal, Index) :- selection(Goal, Selected, Bodyl), Selected : :- HodyO, append(BodyO, Bodyl, Body), control(Body, Action), take_action(Action, Head: :-Body, Index). take_action(table(Goal, Other), Head : : -_Body, Index) :- ins err_into_table (Goal, ¢ableItem(Head, Goal, Other, Index)). complete/2 takes an item labeled table and a clause, resolves the head of the clause with the item, and calls control/2 and take_action/3 to process the resulting item. complete(tableItem(Head, Goal, Body1, Index), Goal: :-BodyO) :- append(BodyO, Bodyl, Body), control (Body, Action), take_action(Action, Head: :-Body, Index). The first clause insert_into_table/2 checks to see if a table for the goal to be tabled has already been constructed (numbervars/3 is used to ground a copy of the term). If an appropriate table does not exist, the second clause calls create_table/3 to construct one. insert_into_table(Goal, ParentItem) :- copy_term(Goal, GoalCopy), numbervars (GoalCopy, O, _), goal_table (GoalCopy, Index), !, assert (table_parent (Index, ParentIgem) ), findall(Sol, table_solution(Index, Sol), Solutions), !, member(Solutlon, Solutions), complege(ParengItem, SQlugion). insert_into_table (GoalO, ParentICem) :- absgraction(GoalO, Goal), !, create_gable(Goal, ParengItem, Index), ¢ake_action(proEram, Goal: :-Goal, Index). create_table/3 performs the necessary database manipulations to construct a new table for the goal, assigning a new index for the table, and adding ap- propriate entries to the indices. create_table(Goal, ParentI¢~, Index) :- (retract(councer(IndexO)) -> true ; IndexO=O), Index is IndexO+l, assert (counter (Index)), assert(goal_table(Goal , Index)), as sert (table_parent (Index, ParentItem) ). 6 Conclusion This paper has presented a general framework which allows both constraint coroutining and memoizs- tion. To achieve maximum generality we stated the Lemma Table proof procedure in HShfeld and Smolka's (1988) CLP framework, but the basic idea--that arbitrary constraints can be allowed to propagate in essentially the same way that variable bindings do--can be applied in most approaches to complex feature based parsing. For example, the technique can be used in chart parsing: in such a system an edge consists not only of a dotted rule and associated variable bindings (i.e., instantiated feature terms), but also contains zero or more as yet unresolved constraints that are propagated (and simplified if sufficiently instantiated) during applica- tion of the fundamental rule. At a more abstract level, the identical propagation of both variable bindings and more general cons- traints leads us to question whether there is any principled difference between them. While still preli- minary, our research suggests that it is often possible 106 to reexpress complex feature based grammars more succinctly by using more general constraints. References G. Bouma and G. van Noord. Constraint-Based Ca- tegorial Grammar. 
In Proceedings of the 3Pnd An- nual Meeting of the ACL, New Mexico State Uni- versity, Las Cruces, New Mexico, 1994. B. Carpenter. The Logic of Typed Feature Structu- res. Cambridge Tracts in Theoretical Computer Science 32. Cambridge University Press. 1992. J. DSrre. Generalizing Earley deduction for constraint-based grammars. In J. D6rre (ed.), Computational Aspects of Constraint-Based Linguistic Description I, DYANA-2 deliverable RI.~.A. ESPRIT, Basic Research Project 6852, July 1993. J. DSrre and M. Johnson. Memoization and co- routined constraints, ms. Institut fiir maschinelle Sprachverarbeitung, Universit~it Stuttgart. M. HShfeld and G. Smolka. Definite Relations over Constraint Languages. LILOG Report 53, IWBS, IBM Deutschland, Postfach 80 08 80, 7000 Stutt- gart 80, W. Germany, October 1988. (available on-line by anonymous ftp from /duck.dfki.uni- sb.de:/pub/papers) M. Johnson. Memoization in Constraint Logic Programming. Presented at First Workshop on Principles and Practice of Constraint Program- ming, April P8-30 1993, Newport, Rhode Island. F. C. Pereira and D. H. Warren. Parsing as Deduc- tion. In Proceedings of the Plst Annual Meeting of the ACL, Massachusetts Institute of Technology, pp. 137-144, Cambridge, Mass., 1983. S. M. Shieber. Using Restriction to Extend Par- sing Algorithms for Complex-Feature-Based For- malisms. In Proceedings of the 23rd Annual Mee- ting of the Association for Computational Lingui- stics, pp. 145-152, 1985. Tamaki, H. and T. Sato. "OLDT resolution with tabulation", in Proceedings of Third Internatio- nal Conference on Logic Programming, Springer- Verlag, Berlin, pages 84-98. 1986. Vieille, L. "Recursive query processing: the power of logic", Theoretical Computer Science 69, pages 1- 53. 1989. Warren, D. S. "Memoing for logic programs", in Communications of the ACM 35:3, pages 94-111. 1992. 107 | 1995 | 14 |
Combining Multiple Knowledge Sources for Discourse Segmentation Diane J. Litman AT&T Bell Laboratories 600 Mountain Avenue Murray Hill, NJ 07974 [email protected] Rebecca J. Passonneau* Bellcore 445 South Street Morristown, NJ 07960 beck~bellcore.com Abstract We predict discourse segment boundaries from linguistic features of utterances, using a corpus of spoken narratives as data. We present two methods for developing seg- mentation algorithms from training data: hand tuning and machine learning. When multiple types of features are used, results approach human performance on an inde- pendent test set (both methods), and using cross-validation (machine learning). 1 Introduction Many have argued that discourse has a global struc- ture above the level of individual utterances, and that linguistic phenomena like prosody, cue phra- ses, and nominal reference are partly conditioned by and reflect this structure (cf. (Grosz and Hirschberg, 1992; Grosz and Sidner, 1986; Hirschberg and Grosz, 1992; Hirschberg and Litman, 1993; Hirschberg and Pierrehumbert, 1986; Hobbs, 1979; Lascarides and Oberlander, 1992; Linde, 1979; Mann and Thomp- son, 1988; Polanyi, 1988; Reichman, 1985; Webber, 1991)). However, an obstacle to exploiting the rela- tion between global structure and linguistic devices in natural language systems is that there is too little data about how they constrain one another. We have been engaged in a study addressing this gap. In previous work (Passonneau and Litman, 1993), we reported on a method for empirically validating global discourse units, and on our evaluation of algo- rithms to identify these units. We found significant agreement among naive subjects on a discourse seg- mentation task, which suggests that global discourse units have some objective reality. However, we also found poor correlation of three untuned algorithms (based on features of referential noun phrases, cue words, and pauses, respectively) with the subjects' segmentations. In this paper, we discuss two methods for develo- ping segmentation algorithms using multiple know- *Bellcore did not support the second author's work. ledge sources. In section 2, we give a brief overview of related work and summarize our previous results. In section 3, we discuss how linguistic features are coded and describe our evaluation. In section 4, we present our analysis of the errors made by the best performing untuned algorithm, and a new algorithm that relies on enriched input features and multiple knowledge sources. In section 5, we discuss our use of machine learning tools to automatically construct decision trees for segmentation from a large set of input features. Both the hand tuned and automa- tically derived algorithms improve over our previ- ous algorithms. The primary benefit of the hand tuning is to identify new input features for impro- ving performance. Machine learning tools make it convenient to perform numerous experiments, to use large feature sets, and to evaluate results using cross- validation. We discuss the significance of our results and briefly compare the two methods in section 6. 2 Discourse Segmentation 2.1 Related Work Segmentation has played a significant role in much work on discourse. The linguistic structure of Grosz and Sidner's (1986) tri-partite discourse model con- sists of multi-utterance segments whose hierarchical relations are isomorphic with intentional structure. 
In other work (e.g., (Hobbs, 1979; Polanyi, 1988)), segmental structure is an artifact of coherence re- lations among utterances, and few if any specific claims are made regarding segmental structure per se. Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) is another tradition of defining re- lations among utterances, and informs much work in generation. In addition, recent work (Moore and Paris, 1993; Moore and Pollack, 1992) has addressed the integration of intentions and rhetorical relations. Although all of these approaches have involved de- tailed analyses of individual discourses or represen- tative corpora, we believe there is a need for more rigorous empirical studies. Researchers have begun to investigate the ability of humans to agree with one another on segmen- 108 tation, and to propose methodologies for quantify- ing their findings. Several studies have used expert coders to locally and globally structure spoken dis- course according to the model of Grosz and Sid- net (1986), including (Grosz and Hirschberg, 1992; Hirschberg and Grosz, 1992; Nakatani et al., 1995; Stifleman, 1995). Hearst (1994) asked subjects to place boundaries between paragraphs of exposi- tory texts, to indicate topic changes. Moser and Moore (1995) had an expert coder assign segments and various segment features and relations based on RST. To quantify their findings, these studies use notions of agreement (Gale et al., 1992; Mo- set and Moore, 1995) and/or reliability (Passonneau and Litman, 1993; Passonneau and Litman, to ap- pear; Isard and Carletta, 1995). By asking subjects to segment discourse using a non-linguistic criterion, the correlation of linguistic devices with independently derived segments can then be investigated in a way that avoids circularity. Together, (Grosz and Hirschberg, 1992; Hirschberg and Grosz, 1992; Nakatani et al., 1995) comprise an ongoing study using three corpora: professio- nally read AP news stories, spontaneous narrative, and read and spontaneous versions of task-oriented monologues. Discourse structures are derived from subjects' segmentations, then statistical measures are used to characterize these structures in terms of acoustic-prosodic features. Grosz and Hirschberg's work also used the classification and regression tree system CART (Breiman et al., 1984) to automati- cally construct and evaluate decision trees for classi- fying aspects of discourse structure from intonatio- nal feature values. Morris and Hirst (1991) structu- red a set of magazine texts using the theory of (Grosz and Sidner, 1986), developed a thesaurus-based le- xical cohesion algorithm to segment text, then qua- litatively compared their segmentations with the re- sults. Hearst (1994) presented two implemented seg- mentation algorithms based on term repetition, and compared the boundaries produced to the bounda- ries marked by at least 3 of 7 subjects, using in- formation retrieval metrics. Kozima (1993) had 16 subjects segment a simplified short story, developed an algorithm based on lexical cohesion, and qualita- tively compared the results. Reynar (1994) propo- sed an algorithm based on lexical cohesion in con- junction with a graphical technique, and used infor- mation retrieval metrics to evaluate the algorithm's performance in locating boundaries between conca- tenated news articles. 2.2 Our Previous Results We have been investigating a corpus of monologues collected and transcribed by Chafe (1980), known as the Pear stories. 
As reported in (Passonneau and Litman, 1993), we first investigated whether units of global structure consisting of sequences of utterances could be reliably identified by naive sub- jects. We analyzed linear segmentations of 20 nar- ratives performed by naive subjects (7 new subjects per narrative), where speaker intention was the seg- ment criterion. Subjects were given transcripts, as- ked to place a new segment boundary between li- nes (prosodic phrases) 1 wherever the speaker had a new communicative goal, and to briefly describe the completed segment. Subjects were free to as- sign any number of boundaries. The qualitative results were that segments varied in size from 1 to 49 phrases in length (Avg.-5.9), and the rate at which subjects assigned boundaries ranged from 5.5% to 41.3%. Despite this variation, we found statistically significant agreement among subjects across all narratives on location of segment boun- daries (.114 z 10 -6 < p < .6 z 10-9). We then looked at the predictive power of lin- guistic cues for identifying the segment boundaries agreed upon by a significant number of subjects. We used three distinct algorithms based on the distri- bution of referential noun phrases, cue words, and pauses, respectively. Each algorithm (NP-A, CUE- A, PAUSE-A) was designed to replicate the subjects' segmentation task (break up a narrative into conti- guous segments, with segment breaks falling between prosodic phrases). NP-A used three features, while CUE-A and PAUSE-A each made use of a single fea- ture. The features are a subset of those described in section 3. To evaluate how well an algorithm predicted seg- mental structure, we used the information retrie- val (IR) metrics described in section 3. As repor- ted in (Passonneau and Litman, to appear), we also evaluated a simple additive method for combining algorithms in which a boundary is proposed if each separate algorithm proposes a boundary. We tested all pairwise combinations, and the combination of all three algorithms. No algorithm or combination of algorithms performed as well as humans. NP- A performed better than the other unimodal algo- rithms, and a combination of NP-A and PAUSE-A performed best. We felt that significant improve- ments could be gained by combining the input fea- tures in more complex ways rather than by simply combining the outputs of independent algorithms. 3 Methodology 3.1 Boundary Classification We represent each narrative in our corpus as a se- quence of potential boundary sites, which occur bet- ween prosodic phrases. We classify a potential boun- dary site as boundary if it was identified as such by at least 3 of the 7 subjects in our earlier study. Otherwise it is classified as non-boundary. Agree- ment among subjects on boundaries was significant at below the .02% level for values ofj ___ 3, where j is 1 We used Chafe's (1980) prosodic analysis. 109 ..Because he's looking at the girl. ]1 SUBJECT (non-boundary)[ [.75] Falls over, [ 5 SUBJECTS (boundary) l [1.35] uh there's no conversation in this movie. [0 SUBJECTS (non-boundary)[ [.6] There's sounds, [0 SUBJECTS (.on-boundary)] yOU know, I O SUBJECTS (non-boundary) l like the birds and stuff, 10 SUBJECTS (non-boundary)] but there., the humans beings in it don't say anything. 17 SUBJECTS (boundary)[ ll.01 He falls over, Figure h Excerpt from narr. 6, with boundaries. the number of subjects (1 to 7), on all 20 narratives. 2 Fig. 1 shows a typical segmentation of one of the narratives in our corpus. 
Each line corresponds to a prosodic phrase, and each space between the li- nes corresponds to a potential boundary site. The bracketed numbers will be explained below. The bo- xes in the figure show the subjects' responses at each potential boundary site, and the resulting boundary classification. Only 2 of the 7 possible boundary si- tes are classified as boundary. 3.2 Coding of Linguistic Features Given a narrative of n prosodic phrases, the n-1 po- tential boundary sites are between each pair of pros- odic phrases Pi and P/+I, i from 1 to n-1. Each potential boundary site in our corpus is coded using the set of linguistic features shown in Fig. 2. Values for the prosodic features are obtained by automatic analysis of the transcripts, whose con- ventions are defined in (Chafe, 1980) and illustra- ted in Fig. h .... and "?" indicate sentence- final intonational contours; "," indicates phrase-final but not sentence final intonation; "[X]" indicates a pause lasting X seconds; ".." indicates a break in timing too short to be measured. The featu- res before and after depend on the final punctua- tion of the phrases Pi and Pi+I, respectively. The value is '+sentence.final.contour' if "." or "?", '- sentence.final.contour' if ",". Pause is assigned 'true' if Pi+l begins with [X], 'false' otherwise. Duration is assigned X if pause is 'true', 0 otherwise. The cue phrase features are also obtained by au- tomatic analysis of the transcripts. Cue1 is assigned 'true' if the first lexical item in PI+I is a member of the set of cue words summarized in (Hirschberg and Litman, 1993). Word1 is assigned this lexical item if 2We previously used agreement by 4 subjects as the threshold for boundaries; for j > 4, agreement was signi- ficant at the .01~0 level. (Passonneau and Litman, 1993) • Prosodic Features - before:+sentence.final.contour,-sentence.flnal.contour - after: +sentence.final.contour,-sentence.flnal.contour. - pause: true, false. - duration: continuous. • Cue Phrase Features - cue1: true, false. - word1: also, and, anyway, basically, because, but, fi- nally, first, like, meanwhile, no, now, oh, okay, only, or, see, so, then, well, where, NA. -- cue2: true, false. - word2: and, anyway, because, boy, but, now, okay, or, right, so, still, then, NA. • Noun Phrase Features - coref: +coref,-corer, NA. - infer: +infer, -infer, NA. - global.pro: +global.pro, -global.pro, NA. • Combined Feature -- cue-prosody: complex, true, false. Figure 2: Features and their potential values. cuel is true, 'NA' (not applicable) otherwise, a Cue2 is assigned 'true' if cue, is true and the second lexi- cal item is also a cue word. Word2 is assigned the second lexical item if cue2 is true, 'NA' otherwise. Two of the noun phrase (NP) features are hand- coded, along with functionally independent clauses (FICs), following (Passonneau, 1994). The two aut- hors coded independently and merged their results. The third feature, global.pro, is computed from the hand coding. FICs are tensed clauses that are neit- her verb arguments nor restrictive relatives. If a new FIC (C/) begins in prosodic phrase Pi+I, then NPs in Cj are compared with NPs in previous clauses and the feature values assigned as follows4: 1. corer = '+coref' if Cj contains an NP that co- refers with an NP in Cj-1; else corer= '-cord' 2. infer= '+infer' ifCj contains an NP whose refe- rent can be inferred from an NP in Cj-1 on the basis of a pre-defined set of inference relations; else infer- '-infer' 3. 
global.pro = '+global.pro' if Cj contains a defi- nite pronoun whose referent is mentioned in a previous clause up to the last boundary assigned by the algorithm; else global.pro = '-global.pro' If a new FIC is not initiated in Pi+I, values for all three features are 'NA'. Cue-prosody, which encodes a combination of prosodic and cue word features, was motivated by an analysis of IR errors on our training data, as de- scribed in section 4. Cue-prosody is 'complex' if: aThe cue phrases that occur in the corpus &re shown as potential values in Fig. 2. 4The NP algorithm can assign multiple boundaries within one prosodic phrase if the phrase contains mul- tiple clauses; these very rare cases are normalized (Pas- sonneau and Litman, 1993). 110 ..Because hei's looking at the girl. [.75] (ZIBRO-PRONOUNi) Falls over, before after pause duration cue 1 word 1 cue~ word;~ coref infer E;lobal.pro cue-prosodic +s.f.c -s.f.c true .75 false NA fM~e NA + + true Figure 3: Example feature coding of a potential boundary site. 1. before = '+sentence.final.contour' 2. pause = 'true' 3. And either: (a) cuet = 'true', wordt ~ 'and' (b) cuet = 'true', word1 = 'and', cue2 = 'true', word2 ¢ 'and' Else, cue-prosody has the same values as pause. Fig. 3 illustrates how the first boundary site in Fig. 1 would be coded using the features in Fig. 2. The prosodic and cue phrase features were moti- vated by previous results in the literature. For ex- ample, phrases beginning discourse segments were correlated with preceding pause duration in (Grosz and Hirschberg, 1992; ttirschberg and Grosz, 1992). These and other studies (e.g.~ (iiirschberg and Lit- man, 1993)) also found it useful to distinguish bet- ween sentence and non-sentence final intonational contours. Initial phrase position was correlated with discourse signaling uses of cue words in (Hirschberg and Litman, 1993); a potential correlation between discourse signaling uses of cue words and adjacency patterns between cue words was also suggested. Fi- nally, (Litman, 1994) found that treating cue phra- ses individually rather than as a class enhanced the results of (iiirschberg and Litman, 1993). Passonneau (to appear) examined some of the few claims relating discourse anaphoric noun phrases to global discourse structure in the Pear corpus. Re- suits included an absence of correlation of segmental structure with centering (Grosz et al., 1983; Kamey- ama, 1986), and poor correlation with the contrast between full noun phrases and pronouns. As noted in (Passonneau and Litman, 1993), the NP features largely reflect Passonneau's hypotheses that adja- cent utterances are more likely to contain expres- sions that corefer, or that are inferentially linked, if they occur within the same segment; and that a definite pronoun is more likely than a full NP to re- fer to an entity that was mentioned in the current segment, if not in the previous utterance. 3.3 Evaluation The segmentation algorithms presented in the next two sections were developed by examining only a training set of narratives. The algorithms are then evaluated by examining their performance in pre- dicting segmentation on a separate test set. We cur- rently use 10 narratives for training and 5 narratives for testing. (The remaining 5 narratives are reser- ved for future research.) The 10 training narratives Traininl~ Set .63 .72 .06 .12 Test Set .64 .68 .07 .11 Table 1: Average human performance. range in length from 51 to 162 phrases (Avg.=101.4), or from 38 to 121 clauses (Avg.=76.8). 
The 5 test narratives range in length from 47 to 113 phrases (Avg.=87.4), or from 37 to 101 clauses (Avg.=69.0). The ratios of test to training data measured in narratives, prosodic phrases and clauses, respectively, are 50.0%, 43.1% and 44.9%. For the machine learning algorithm we also estimate performance using cross-validation (Weiss and Kulikowski, 1991), as detailed in Section 5.

To quantify algorithm performance, we use the information retrieval metrics shown in Fig. 4. Recall is the ratio of correctly hypothesized boundaries to target boundaries. Precision is the ratio of hypothesized boundaries that are correct to the total hypothesized boundaries. (Cf. Fig. 4 for fallout and error.) Ideal behavior would be to identify all and only the target boundaries: the values for b and c in Fig. 4 would thus both equal 0, representing no errors. The ideal values for recall, precision, fallout, and error are 1, 1, 0, and 0, while the worst values are 0, 0, 1, and 1. To get an intuitive summary of overall performance, we also sum the deviation of the observed value from the ideal value for each metric: (1-recall) + (1-precision) + fallout + error. The summed deviation for perfect performance is thus 0.

Finally, to interpret our quantitative results, we use the performance of our human subjects as a target goal for the performance of our algorithms (Gale et al., 1992). Table 1 shows the average human performance for both the training and test sets of narratives. Note that human performance is basically the same for both sets of narratives. However, two factors prevent this performance from being closer to ideal (e.g., recall and precision of 1). The first is the wide variation in the number of boundaries that subjects used, as discussed above. The second is the inherently fuzzy nature of boundary location. We discuss this second issue at length in (Passonneau and Litman, to appear), and present relaxed IR metrics that penalize near misses less heavily in (Litman and Passonneau, 1995).

                            Subjects
                     Boundary   Non-Boundary
Algorithm  Boundary      a            b
       Non-Boundary      c            d

Recall = a/(a+c)    Precision = a/(a+b)    Fallout = b/(b+d)    Error = (b+c)/(a+b+c+d)

Figure 4: Information retrieval metrics.

4 Hand Tuning

To improve performance, we analyzed the two types of IR errors made by the original NP algorithm (Passonneau and Litman, 1993) on the training data. Type "b" errors (cf. Fig. 4), mis-classification of non-boundaries, were reduced by changing the coding features pertaining to clauses and NPs. Most "b" errors correlated with two conditions used in the NP algorithm, identification of clauses and of inferential links. The revision led to fewer clauses (more assignments of 'NA' for the three NP features) and more inference relations. One example of a change to clause coding is that formulaic utterances having the structure of clauses, but which function like interjections, are no longer recognized as independent clauses. These include the phrases let's see, let me see, I don't know, you know when they occur with no verb phrase argument. Other changes pertained to sentence fragments, unexpected clausal arguments, and embedded speech.

Three types of inference relations linking successive clauses (Ci-1, Ci) were added (originally there were 5 types (Passonneau, 1994)). Now, a pronoun (e.g., it, that, this) in Ci referring to an action, event or fact inferrable from Ci-1 links the two clauses. So does an implicit argument, as in Fig.
5, where the missing argument of notice is inferred to be the event of the pears falling. The third case is where an NP in Ci is described as part of an event that results directly from an event mentioned in Ci-1. "C" type errors (cf. Fig. 4), mis-classification of boundaries, often occurred where prosodic and cue features conflicted with NP features. The origi- nal NP algorithm assigned boundaries wherever the three values '-coref', '-infer', '-global.pro' (defined in section 3) co-occurred, represented as the first con- ditional statement of Fig. 6. Experiments led to the hypothesis that the most improvement came by as- signing a boundary if the cue-prosody feature had the value 'complex', even if the algorithm would not otherwise assign a boundary, as shown in Fig. 6. CI. Phr. 6 3.01 7 8 3.02 [1.1 [.7] A-nd] he's not really., doesn't seem to be paying all that much attention [.557 because [.45]] you know the pears falli, and.. he doesn't really notice (Oi), Figure 5: Inferential link due to implicit argument. if (coref = -coref and infer = -infer and global.pro = -global.pro) then boundary else|f cue-prosody ---- complex then boundary else non-boundary Figure 6: Condition 2 algorithm. We refer to the original NP algorithm applied to the initial coding as Condition 1, and the tuned al- gorithm applied to the enriched coding as Condition 2. Table 2 presents the average IR scores across the narratives in the training set for both conditi- ons. Reduction of "b" type errors raises precision, and lowers fallout and error rate. Reduction of "c" type errors raises recall, and lowers fallout and error rate. All scores improve in Condition 2, with pre- cision and fallout showing the greatest relative im- provement. The major difference from human per- formance is relatively poorer precision. The standard deviations in Table 2 are often close to 1/4 or 1/3 of the reported averages. This indicates a large amount of variability in the data, reflecting wide differences across narratives (speakers) in the training set with respect to the distinctions recogni- zed by the algorithm. Although the high standard deviations show that the tuned algorithm is not well fitted to each narrative, it is likely that it is overspe- cialized to the training sample in the sense that test narratives are likely to exhibit further variation. Table 3 shows the results of the hand tuned al- gorithm on the 5 randomly selected test narratives on both Conditions 1 and 2. Condition 1 results, the untuned algorithm with the initial feature set, are very similar to the training set except for worse precision. Thus, despite the high standard devia- tions, 10 narratives seems to have been a sufficient sample size for evaluating the initial NP algorithm. Condition 2 results are better than condition 1 in Table 3, and condition 1 in Table 2. This is strong evidence that the tuned algorithm is a better pre- dictor of segment boundaries than the original NP algorithm. Nevertheless, the test results of condition 2 are much worse than the corresponding training re- sults, particularly for precision (.44 versus .62). This Averalse Recall Prec Fall Error SumDev Condition 1 .42 .40 .14 .22 1.54 Std. Dev. .17 .12 .06 .07 .34 Condition 2 .58 .62 .08 .14 1.02 Std. Dev. .14 .10 .04 .05 .18 Table 2: Performance on training set. Average Recall Prec Fall Error SumDev Condition 1 .44 .29 .16 .21 1.64 Std. Dev. .18 .17 .07 .05 .32 Condition 2 .50 .44 .11 .17 1.34 Std. Dev. .21 .06 .03 .04 .29 Table 3: Performance on test set. 
112 confirms that the tuned algorithm is over calibrated to the training set. 5 Machine Learning We use the machine learning program C4.5 (Quin- lan, 1993) to automatically develop segmentation al- gorithms from our corpus of coded narratives, where each potential boundary site has been classified and represented as a set of linguistic features. The first input to C4.5 specifies the names of the classes to be learned (boundary and non-boundary), and the names and potential values of a fixed set of coding features (Fig. 2). The second input is the training data, i.e., a set of examples for which the class and feature values (as in Fig. 3) are specified. Our trai- ning set of 10 narratives provides 1004 examples of potential boundary sites. The output of C4.5 is a classification algorithm expressed as a decision tree, which predicts the class of a potential boundary gi- ven its set of feature values. Because machine learning makes it convenient to induce decision trees under a wide variety of con- ditions, we have performed numerous experiments, varying the number of features used to code the trai- ning data, the definitions used for classifying a po- tential boundary site as boundary or non-boundary 5 and the options available for running the C4.5 pro- gram. Fig. 7 shows one of the highest-performing learned decision trees from our experiments. This decision tree was learned under the following condi- tions: all of the features shown in Fig. 2 were used to code the training data, boundaries were classified as discussed in section 3, and C4.5 was run using only the default options. The decision tree predicts the class of a potential boundary site based on the featu- res before, after, duration, cuel, wordl, corer, infer, and global.pro. Note that although not all available features are used in the tree, the included features represent 3 of the 4 general types of knowledge (pros- ody, cue phrases and noun phrases). Each level of the tree specifies a test on a single feature, with a branch for every possible outcome of the test. 6 A branch can either lead to the assignment of a class, or to another test. For example, the tree initially branches based on the value of the feature before. If the value is '-sentence.final.contour' then the first branch is taken and the potential boundary site is as- signed the class non-boundary. If the value of before is 'q-sentence.final.contour' then the second branch is taken and the feature corer is tested. The performance of this learned decision tree ave- raged over the 10 training narratives is shown in Table 4, on the line labeled "Learning 1". The line labeled "Learning 2" shows the results from another 5(Litman and Passonneau, 1995) varies the number of subjects used to determine boundaries. eThe actual tree branches on every value of worda; the figure merges these branches for clarity. if before = -sentence.final.contour then non.boundary elaeif before = +sentence.final.contour then ifcoref = NA then non-boundary elseif coref = +corer then if after ----. 
+sentence.final.contour then if duration <__ 1.3 then non-boundary elself duration > 1.3 then boundary elseif after = -sentence.final.contour then if word 1 E {also,basically, because,finally, first,like, meanwhile,no,oh,okay, only, aee,so,well,where,NA} then non-boundary else|f word 1 E {anyway, but,now,or,then} then boundary else|f word I = and then if duration < 0.6 then non-boundary elseifdurat~on > 0.6 then boundary elseif coref = -corer then if infer = +infer then non-boundary elself infer = NA then boundary elseifinfer = -infer then if after = -sentence.final.contour then boundary elself after = +sentence.final.contour then if cue 1 = true then if global.pro = NA then boundary elseif global.pro = -global.pro then boundary elself global.pro = +global.pro then if duration < 0.65 then non-boundary elseifdurat~'on > 0.65 then boundary elseifcue I = false then if duration > 0.5 then non.boundary elselfduration <: 0.5 then if duration < 0.35 then non-boundary eiseifdurat~on > 0.35 then boundary Figure 7: Learned decision tree for segmentation. machine learning experiment, in which one of the default C4.5 options used in "Learning 1" is over- ridden. The "Learning 2" tree (not shown due to space restrictions) is more complex than the tree of Fig. 7, but has slightly better performance. Note that "Learning 1" performance is comparable to hu- man performance (Table 1), while "Learning 2" is slightly better than humans. The results obtained via machine learning are also somewhat better than the results obtained using hand tuning--particularly with respect to precision ("Condition 2" in Table 2), and are a great improvement over the original NP results ("Condition 1" in Table 2). The performance of the learned decision trees ave- raged over the 5 test narratives is shown in Table 5. Comparison of Tables 4 and 5 shows that, as with the hand tuning results (and as expected), average per- formance is worse when applied to the testing rather than the training data particularly with respect to precision. However, performance is an improvement over our previous best results ("Condition 1" in Ta- ble 3), and is comparable to ("Learning 1") or very slightly better than ("Learning 2") the hand tuning results ("Condition 2" in Table 3). We also use the resampling method of cross- validation (Weiss and Kulikowski, 1991) to estimate performance, which averages results over multiple partitions of a sample into test versus training data. We performed 10 runs of the learning program, each using 9 of the 10 training narratives for that run's 113 Average Recall Prec Fall Error SumDev Learning 1 .54 .76 .04 .11 .85 Std. Dev. .18 .12 .02 .04 .28 Learning 2 .59 .78 .03 .10 .76" Std. Dev. .22 .12 .02 .04 .29 Table 4: Performance on training set. Average Recall Prec Fall Error SumDev Learning 1 .43 .48 .08 .16 1.34 Std. Dev. .21 .13 .03 .05 .36 Learning 2 .47 .50 .09 .16 1.27 Std. Dev. .18 .16 .04 .07 .42 Table 5: Performance on test set. Average Recall Prec Fall Error SumDev Learning 1 .43 .63 .05 .15 1.14' Std. Dev, .19 .16 .03 .03 .24 Learning 2 .46 .61 .07 .15 1.15 Std. Dev. .20 .14 .04 .03 .21 Table 6: Using 10-fold cross-validation. training set (for learning the tree) and the remaining narrative for testing. Note that for each iteration of the cross-validation, the learning process begins from scratch and thus each training and testing set are still disjoint. While this method does not make sense for humans, computers can truly ignore pre- vious iterations. 
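The cross-validation loop just described can be sketched as follows. scikit-learn's DecisionTreeClassifier is used here merely as a stand-in for C4.5, and the narratives are assumed to be already encoded as numeric feature matrices with boundary/non-boundary labels; this is an illustration of the evaluation procedure, not the system's actual implementation.

import numpy as np
from sklearn.tree import DecisionTreeClassifier  # stand-in for C4.5

def leave_one_narrative_out(narratives):
    """narratives: list of (X, y) pairs, one per narrative; X holds the
    encoded feature vectors of its potential boundary sites, y the
    boundary (1) / non-boundary (0) labels."""
    recalls, precisions = [], []
    for held_out in range(len(narratives)):
        train = [nar for i, nar in enumerate(narratives) if i != held_out]
        X_train = np.vstack([X for X, _ in train])
        y_train = np.concatenate([y for _, y in train])
        # The learner starts from scratch on every iteration, so the
        # training and test narratives of each run stay disjoint.
        tree = DecisionTreeClassifier().fit(X_train, y_train)
        X_test, y_test = narratives[held_out]
        pred = tree.predict(X_test)
        tp = np.sum((pred == 1) & (y_test == 1))
        recalls.append(tp / max(np.sum(y_test == 1), 1))
        precisions.append(tp / max(np.sum(pred == 1), 1))
    return float(np.mean(recalls)), float(np.mean(precisions))

Fallout, error rate, and the summed deviation could be accumulated in the same loop from the remaining cells of the contingency table of Figure 4.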
For sample sizes in the hundreds (our 10 narratives provide 1004 examples) 1O-fold cross-validation often provides a better performance estimate than the hold-out method (Weiss and Ku- likowski, 1991). Results using cross-validation are shown in Table 6, and are better than the estimates obtained using the hold-out method (Table 5), with the major improvement coming from precision. Bec- ause a different tree is learned on each iteration, the cross-validation evaluates the learning method, not a particular decision tree. 6 Conclusion We have presented two methods for developing seg- mentation hypotheses using multiple linguistic fea- tures. The first method hand tunes features and algorithms based on analysis of training errors. The second method, machine learning, automatically in- duces decision trees from coded corpora. Both me- thods rely on an enriched set of input features com- pared to our previous work. With each method, we have achieved marked improvements in performance compared to our previous work and are approaching human performance. Note that quantitatively, the machine learning results are slightly better than the hand tuning results. The main difference on average performance is the higher precision of the automated algorithm. Furthermore, note that the machine lear- ning algorithm used the changes to the coding fea- tures that resulted from the tuning methods. This suggests that hand tuning is a useful method for understanding how to best code the data, while ms- chine learning provides an effective (and automatic) way to produce an algorithm given a good feature representation. Our results lend further support to the hypothesis that linguistic devices correlate with discourse struc- ture (cf. section 2.1), which itself has practical im- port. Understanding systems could infer segments as a step towards producing summaries, while ge- neration systems could signal segments to increase comprehensibility/Our results also suggest that to best identify or convey segment boundaries, systems will need to exploit multiple signals simultaneously. We plan to continue our experiments by further merging the automated and analytic techniques, and evaluating new algorithms on our final test corpus. Because we have already used cross-validation, we do not anticipate significant degradation on new test narratives. An important area for future research is to develop principled methods for identifying di- stinct speaker strategies pertaining to how they si- gnal segments. Performance of individual speakers varies widely as shown by the high standard deviati- ons in our tables. The original NP, hand tuned, and machine learning algorithms all do relatively poorly on narrative 16 and relatively well on 11 (both in the test set) under all conditions. This lends sup- port to the hypothesis that there may be consistent differences among speakers regarding strategies for signaling shifts in global discourse structure. References Leo Breiman, Jerome Friedman, Richard Oishen, and C. Stone. 1984. Classification and Regression Trees. Wadsworth and Brooks, Monterey, CA. Wallace L. Chafe. 1980. The Pear Stories. Ablex Publishing Corporation, Norwood, NJ. William Gale, Ken W. Church, and David Yarow- sky. 1992. Estimating upper and lower bounds on the performance of word-sense disambiguation programs. In Proc. of the 30th ACL, pages 249- 256. Barbara Grosz and Julia Hirschberg. 1992. Some intonational characteristics of discourse structure. In Proc. 
of the International Conference on Spo- ken Language Processing. Barbara Grosz and Candace Sidner. 1986. Atten- tion, intentions and the structure of discourse. Computational Linguistics, 12:175-204. Barbara J. Grosz, Aaravind K. Joshi, and Scott Weinstein. 1983. Providing a unified account of definite noun phrases in discourse. In Proc. of the 21st ACL, pages 44-50. rCf. (Hirschberg a~d Pierrehumbert, 1986) who argue that comprehensibility improves if units are prosodically signaled. 114 Marti A. Hearst. 1994. Multi-paragraph segmenta- tion of expository text. In Proc, of the 32nd A CL. Julia Hirschberg and Barbara Grosz. 1992. Intona- tional features of local and global discourse struc- ture. In Proc. of the Darpa Workshop on Spoken Language. Julia Hirschberg and Diane Litman. 1993. Empiri- cal studies on the disambiguation of cue phrases. Computational Linguistics, 19(3):501-530. Julia Hirschberg and Janet Pierrehumbert. 1986. The intonational structuring of discourse. In Proc. of the 24th A CL. Jerry R. Hobbs. 1979. Coherence and coreference. Cognitive Science, 3(1):67-90. Amy Isard and Jean Carletta. 1995. Replicabi- lity of transaction and action coding in the map task corpus. In AAA1 1995 Spring Symposium Series: Empirical Methods in Discourse Interpre- tation and Generation, pages 60-66. Megumi Kameyama. 1986. A property-sharing constraint in centering. In Proc. of the 24th ACL, pages 200-206. H. Kozima. 1993. Text segmentation based on si- milarity between words. In Proc. of the 31st ACL (Student Session), pages 286-288. Alex Lascarides and Jon Oberlander. 1992. Tempo- ral coherence and defeasible knowledge. Theoreti- cal Linguistics. Charlotte Linde. 1979. Focus of attention and the choice of pronouns in discourse. In Talmy Givon, editor, Syntax and Semantics: Discourse and Syn- tax, pages 337-354. Academic Press, New York. Diane J. Litman and Rebecca J. Passonneau. 1995. Developing algorithms for discourse segmentation. In AAAI 1995 Spring Symposium Series: Empiri. cal Methods in Discourse Interpretation and Ge- neration, pages 85-91. Diane J. Litman. 1994. Classifying cue phrases in text and speech using machine learning. In Proc. of the 12th AAA1, pages 806-813. William C. Mann and Sandra Thompson. 1988. Rhetorical structure theory. TEXT, pages 243- 281. Johanna D. Moore and Cecile Paris. 1993. Planning text for advisory dialogues: Capturing intentional and rhetorical information. Computational Lin- guistics, 19:652-694. Johanna D. Moore and Martha E. Pollack. 1992. A problem for RST: The need for multi-level discourse analysis. Computational Linguistics, 18:537-544. Jane Morris and Graeme ttirst. 1991. Lexical co- hesion computed by thesaural relations as an in- dicator of the structure of text. Computational Linguistics, 17:21-48. Megan Moser and Julia D. Moore. 1995. Using dis- course analysis and automatic text generation to study discourse cue usage. In AAAI 1995 Spring Symposium Series: Empirical Methods in Dis- course Interpretation and Generation, pages 92- 98. Christine H. Nakatani, Julia Hirsehberg, and Bar- bara J. Grosz. 1995. Discourse structure in spo- ken language: Studies on speech corpora. In AAAI 1995 Spring Symposium Series: Empirical Methods in Discourse Interpretation and Genera- tion, pages 106-112. Rebecca J. Passonneau and Diane J. Litman. 1993. Intention-based segmentation: Human reliability and correlation with linguistic cues. In Proc. of the 31st ACL, pages 148-155. Rebecca J. Passonneau and D. Litman. to appear. 
Empirical analysis of three dimensions of spoken discourse. In E. Hovy and D. Scott, editors, Interdisciplinary Perspectives on Discourse. Springer Verlag, Berlin.

Rebecca J. Passonneau. 1994. Protocol for coding discourse referential noun phrases and their antecedents. Technical report, Columbia University.

Rebecca J. Passonneau. to appear. Interaction of the segmental structure of discourse with explicitness of discourse anaphora. In E. Prince, A. Joshi, and M. Walker, editors, Proc. of the Workshop on Centering Theory in Naturally Occurring Discourse. Oxford University Press.

Livia Polanyi. 1988. A formal model of discourse structure. Journal of Pragmatics, pages 601-638.

J. Ross Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, San Mateo, Calif.

Rachel Reichman. 1985. Getting Computers to Talk Like You and Me: Discourse Context, Focus, and Semantics. Bradford. MIT, Cambridge.

J. C. Reynar. 1994. An automatic method of finding topic boundaries. In Proc. of the 32nd ACL (Student Session), pages 331-333.

Lisa J. Stifleman. 1995. A discourse analysis approach to structured speech. In AAAI 1995 Spring Symposium Series: Empirical Methods in Discourse Interpretation and Generation, pages 162-167.

Bonnie L. Webber. 1991. Structure and ostension in the interpretation of discourse deixis. Language and Cognitive Processes, pages 107-135.

Sholom M. Weiss and Casimir Kulikowski. 1991. Computer Systems that Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems. Morgan Kaufmann.
Utilizing Statistical Dialogue Act Processing in Verbmobil Norbert Reithinger and Elisabeth Maier* DFKI GmbH Stuhlsatzenhausweg 3 D-66123 Saarbriicken Germany {re±thinger, maier}@dfki, uni- sb. de Abstract In this paper, we present a statistical ap- proach for dialogue act processing in the di- alogue component of the speech-to-speech translation system VERBMOBIL. Statistics in dialogue processing is used to predict follow-up dialogue acts. As an application example we show how it supports repair when unexpected dialogue states occur. 1 Introduction Extracting and processing communicative intentions behind natural language utterances plays an im- portant role in natural language systems (see e.g. (Cohen et al., 1990; Hinkelman and Spackman, 1994)). Within the speech-to-speech translation sys- tem VERBMOBIL (Wahlster, 1993; Kay et al., 1994), dialogue acts are used as the basis for the treatment of intentions in dialogues. The representation of in- tentions in the VERBMOBIL system serves two main purposes: • Utilizing the dialogue act of an utterance as an important knowledge source for transla- tion yields a faster and often qualitative better translation than a method that depends on sur- face expressions only. This is the case especially in the first application of VV.RBMOBIL, the on- demand translation of appointment scheduling dialogues. • Another use of dialogue act processing in VERB- MOBIL is the prediction of follow-up dialogue acts to narrow down the search space on the analysis side. For example, dialogue act pre- dictions are employed to allow for dynamically adaptable language models in word recognition. *This work was funded by the German Federal Min- istry for Education, Research and Technology (BMBF) in the framework of the Verbmohil Project under Grant 01IV101K/1. The responsibility for the contents of this study lies with the authors. Thanks to Jan Alexanders- son for valuable comments and suggestions on earlier drafts of this paper. Recent results (e.g. (Niedermair, 1992)) show a reduction of perplexity in the word recognizer between 19% and 60% when context dependent language models are used. DiMogue act determination in VERBMOBIL is done in two ways, depending on the system mode: using deep or shallow processing. These two modes depend on the fact that VERBMOBIL is only translating on demand, i.e. when the user's knowledge of English is not sufficient to participate in a dialogue. If the user of VERBMOBIL needs translation, she presses a button thereby activating deep processing. In depth processing of an utterance takes place in maximally 50% of the dialogue contributions, namely when the owner speaks German only. DiMogue act extraction from a DRS-based semantic representation (Bos et al., 1994) is only possible in this mode and is the task of the semantic evaluation component of VERB- MOBIL. In the other processing mode the diMogue com- ponent tries to process the English passages of the diMogue by using a keyword spotter that tracks the ongoing dialogue superficiMly. Since the keyword spotter only works reliably for a vocabulary of some ten words, it has to be provided with keywords which typically occur in utterances of the same diMogue act type; for every utterance the dialogue component supplies the keyword spotter with a prediction of the most likely follow-up dialogue act and the situation- dependent keywords. 
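The coupling between the dialogue act predictor and the keyword spotter described above can be pictured as a lookup from the predicted follow-up acts to their trained keyword sets, capped at the small vocabulary the spotter can handle reliably. The table below is purely illustrative: the German keywords are invented examples, and the real sets are induced from the annotated VERBMOBIL corpus.

# Purely illustrative act-to-keyword table; the real keyword sets are
# induced from the annotated corpus, and the words here are invented.
SPOTTER_KEYWORDS = {
    "SUGGEST": ["Montag", "Dienstag", "Termin", "Uhr"],
    "REJECT":  ["leider", "nicht", "schlecht"],
    "CONFIRM": ["gut", "einverstanden", "okay"],
}

def configure_spotter(predicted_acts, max_vocab=10):
    """Collect the keyword vocabulary for the predicted follow-up acts,
    capped at the roughly ten words the spotter can detect reliably."""
    vocab = []
    for act in predicted_acts:
        for word in SPOTTER_KEYWORDS.get(act, []):
            if word not in vocab:
                vocab.append(word)
    return vocab[:max_vocab]

print(configure_spotter(["REJECT", "CONFIRM"]))
# ['leider', 'nicht', 'schlecht', 'gut', 'einverstanden', 'okay']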
The dialogue component uses a combination of statistical and knowledge based approaches to pro- cess dialogue acts and to maintain and to provide contextual information for the other modules of VERBMOBIL (Maier and McGlashan, 1994). It in- cludes a robust dialogue plan recognizing module, which uses repair techniques to treat unexpected di- alogue steps. The information acquired during di- alogue processing is stored in a dialogue memory. This contextual information is decomposed into the intentional structure, the referential structure, and the temporal structure which refers to the dates mentioned in the dialogue. 116 An overview of the dialogue component is given in (Alexandersson et al., 1995). In this paper main emphasis is on statistical dialogue act prediction in VEFtBMOBIL, with an evaluation of the method, and an example of the interaction between plan recogni- tion and statistical dialogue act prediction. Main Wadoguo Gro~ Suggea Introduce Init I=lequoet_CornmQmt Requut Commont • Commont / Thank Su99eet Requeet_Comment Con(mrn PotonUol additions In any cllelogue Clarily_Amvo¢ °--.-= <, / I:)igam= V COa~y_Ou=ry I 1-1 Initial Stw 0 Final State • Nc~4iaal SUm [ Figure 1: A dialogue model for the description of appointment scheduling dialogs 2 The Dialogue Model and Predictions of Dialogue Acts Like previous approaches for modeling task-oriented dialogues we assume that a dialogue can be de- scribed by means of a limited but open set of di- alogue acts (see e.g. (Bilange, 1991), (Mast et al., 1992)). We selected the dialogue acts by examining the VERBMOBIL corpus, which consists of transliter- ated spoken dialogues (German and English) for ap- pointment scheduling. We examined this corpus for the occurrence of dialogue acts as proposed by e.g. (Austin, 1962; Searle, 1969) and for the necessity to introduce new, sometimes problem-oriented dialogue acts. We first defined 17 dialogue acts together with semi-formal rules for their assignment to utterances (Maier, 1994). After one year of experience with these acts, the users of dialogue acts in VERBMOBIL selected them as the domain independent "upper" concepts within a more elaborate hierarchy that be- comes more and more propositional and domain de- pendent towards its leaves (Jekat et al., 1995). Such a hierarchy is useful e.g. for translation purposes. Following the assignment rules, which also served as starting point for the automatic determination of dialogue acts within the semantic evaluation com- ponent, we hand-annotated over 200 dialogues with dialogue act information to make this information available for training and test purposes. Figure 1 shows the domain independent dialogue acts and the transition networks which define admis- sible sequences of dialogue acts. In addition to the dialogue acts in the main dialogue network, there are five dialogue acts, which we call deviations, that can occur at any point of the dialogue. They are repre- sented in an additional subnetwork which is shown at the bottom of figure 1. The networks serve as the basis for the implementation of a parser which determines whether an incoming dialogue act is com- patible with the dialogue model. As mentioned in the introduction, it is not only important to extract the dialogue act of the cur- rent utterance, but also to predict possible follow up dialogue acts. Predictions about what comes next are needed internally in the dialogue compo- nent and externally by other components in VERB- MOBIL. 
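Read as a finite-state model, the transition networks of Figure 1 support a simple admissibility check for an incoming dialogue act. The following sketch uses an abbreviated, hypothetical transition table rather than the full VERBMOBIL network; as in the subnetwork, deviation acts are admissible at any point of the dialogue.

# Abbreviated, hypothetical transition table over dialogue acts; the full
# network of Figure 1 contains more states and arcs.
TRANSITIONS = {
    "START":     {"INTRODUCE": "OPENED", "INIT": "INITIATED"},
    "OPENED":    {"INIT": "INITIATED"},
    "INITIATED": {"SUGGEST": "SUGGESTED"},
    "SUGGESTED": {"REJECT": "INITIATED",
                  "REQUEST_COMMENT": "SUGGESTED",
                  "CONFIRM": "CLOSING"},
}
# Acts from the deviation subnetwork may occur at any point of the dialogue.
DEVIATIONS = {"DELIBERATE", "CLARIFY_QUERY", "CLARIFY_ANSWER"}

def admissible(state, act):
    """Return (is_admissible, next_state) for an incoming dialogue act."""
    if act in DEVIATIONS:
        return True, state  # deviations leave the dialogue state unchanged
    next_state = TRANSITIONS.get(state, {}).get(act)
    return next_state is not None, next_state or state

print(admissible("INITIATED", "SUGGEST"))   # (True, 'SUGGESTED')
print(admissible("INITIATED", "REJECT"))    # (False, 'INITIATED')

A failed check of this kind corresponds to the unexpected dialogue states whose treatment is described in section 4.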
An example of the internal use, namely the treatment of unexpected input by the plan recognizer, is described in section 4. Outside the dialogue component, dialogue act predictions are used, e.g., by the abovementioned semantic evaluation component and the keyword spotter. The semantic evaluation component needs predictions when it determines the dialogue act of a new utterance to narrow down the set of possibilities. The keyword spotter can only detect a small number of keywords that are selected for each dialogue act from the VERBMOBIL corpus of annotated dialogues using the Keyword Classification Tree algorithm (Kuhn, 1993; Mast, 1995).

For the task of dialogue act prediction a knowledge source like the network model cannot be used since the average number of predictions in any state of the main network is five. This number increases when the five dialogue acts from the subnetwork which can occur everywhere are considered as well. In that case the average number of predictions goes up to 10. Because the prediction of 10 dialogue acts from a total number of 17 is not sufficiently restrictive, and because the dialogue network does not represent preference information for the various dialogue acts, we need a different model which is able to make reliable dialogue act predictions. Therefore we developed a statistical method which is described in detail in the next section.

3 The Statistical Prediction Method and its Evaluation

In order to compute weighted dialogue act predictions we evaluated two methods: The first method is to attribute probabilities to the arcs of our network by training it with annotated dialogues from our corpus. The second method adopted information theoretic methods from speech recognition. We implemented and tested both methods and currently favor the second one because it is insensitive to deviations from the dialogue structure as described by the dialogue model and generally yields better prediction rates. This second method and its evaluation will be described in detail in this section.

Currently, we use n-gram dialogue act probabilities to compute the most likely follow-up dialogue act. The method is adapted from speech recognition, where language models are commonly used to reduce the search space when determining a word that can match a part of the input signal (Jellinek, 1990). It was used for the task of dialogue act prediction by e.g. (Niedermair, 1992) and (Nagata and Morimoto, 1993). For our purpose, we consider a dialogue S as a sequence of utterances Si where each utterance has a corresponding dialogue act si. If P(S) is the statistical model of S, the probability can be approximated by the n-gram probabilities

P(S) = Π_{i=1..n} P(s_i | s_{i-N+1}, ..., s_{i-1})

Therefore, to predict the nth dialogue act s_n we can use the previously uttered dialogue acts and determine the most probable dialogue act by computing

s_n := max_s P(s | s_{n-1}, s_{n-2}, s_{n-3}, ...)

To approximate the conditional probability P(.|.) the standard smoothing technique known as deleted interpolation is used (Jellinek, 1990) with

P(s_n | s_{n-1}, s_{n-2}) = q_1 f(s_n) + q_2 f(s_n | s_{n-1}) + q_3 f(s_n | s_{n-1}, s_{n-2})

where f are the relative frequencies computed from a training corpus and q_i weighting factors with Σ_i q_i = 1.

To evaluate the statistical model, we made various experiments. Figure 2 shows the results for three representative experiments (TS1-TS3, see also (Reithinger, 1995)).

Pred. | TS1    | TS2    | TS3
  1   | 44.24% | 37.47% | 40.28%
  2   | 66.47% | 56.50% | 59.62%
  3   | 81.46% | 69.52% | 71.93%

Figure 2: Predictions and hit rates

In all experiments 41 German dialogues (with 2472 dialogue acts) from our corpus are used as training data, including deviations. TS1 and TS2 use the same 81 German dialogues as test data. The difference between the two experiments is that in TS1 only dialogue acts of the main dialogue network are processed during the test, i.e. the deviation acts of the test dialogues are not processed. As can be seen -- and as could be expected -- the prediction rate drops heavily when unforeseeable deviations occur. TS3 shows the prediction rates, when all currently available annotated dialogues (with 7197 dialogue acts) from the corpus are processed, including deviations.

[Plot omitted: per-dialogue hit rates; x-axis: dialogues, y-axis: hit rate for three predictions.]
Figure 3: Hit rates for 47 dialogues using 3 predictions

Compared to the data from (Nagata and Morimoto, 1993), who report prediction rates of 61.7%, 77.5% and 85.1% for one, two or three predictions respectively, the predictions are less reliable. However, their set of dialogue acts (or the equivalents, called illocutionary force types) does not include dialogue acts to handle deviations. Also, since the dialogues in our corpus are rather unrestricted, they have a big variation in their structure. Figure 3 shows the variation in prediction rates of three dialogue acts for 47 dialogues which were taken at random from our corpus. The x-axis represents the different dialogues, while the y-axis gives the hit rate for three predictions. Good examples for the differences in the dialogue structure are the dialogue pairs #15/#16 and #41/#42. The hit rate for dialogue #15 is about 54% while for #16 it is about 86%. Even more extreme is the second pair with hit rates of approximately 93% vs. 53%. While dialogue #41 fits very well in the statistical model acquired from the training corpus, dialogue #42 does not. This figure gives a rather good impression of the wide variety of material the dialogue component has to cope with.

4 Application of the Statistical Model: Treatment of Unexpected Input

The dialogue model specified in the networks models all dialogue act sequences that can usually be expected in an appointment scheduling dialogue. In case unexpected input occurs, repair techniques have to be provided to recover from such a state and to continue processing the dialogue in the best possible way. The treatment of these cases is the task of the dialogue plan recognizer of the dialogue component. The plan recognizer uses a hierarchical depth-first left-to-right technique for dialogue act processing (Vilain, 1990). Plan operators have been used to encode both the dialogue model and methods for recovery from erroneous dialogue states. Each plan operator represents a specific goal which it is able to fulfill in case specific constraints hold. These constraints mostly address the context, but they can also be used to check pragmatic features, like e.g. whether the dialogue participants know each other. Also, every plan operator can trigger follow-up actions. A typical action is, for example, the update of the dialogue memory. To be able to fulfill a goal a plan operator can define subgoals which have to be achieved in a pre-specified order (see e.g. (Maybury, 1991; Moore, 1994) for comparable approaches).
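A plan operator of the kind just described can be pictured as a record of a goal, its applicability constraints, the subgoals it expands to, and the follow-up actions it triggers. The sketch below is a schematic illustration of that structure, not the actual VERBMOBIL operator formalism; the example operator and its constraint are hypothetical.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class PlanOperator:
    """Schematic plan operator: a goal it can fulfill, constraints on the
    context (including pragmatic features), ordered subgoals, and
    follow-up actions such as updating the dialogue memory."""
    goal: str
    constraints: List[Callable[[dict], bool]] = field(default_factory=list)
    subgoals: List[str] = field(default_factory=list)
    actions: List[Callable[[dict], None]] = field(default_factory=list)

    def applicable(self, context: dict) -> bool:
        return all(check(context) for check in self.constraints)

    def apply(self, context: dict) -> List[str]:
        for action in self.actions:   # e.g. update the dialogue memory
            action(context)
        return self.subgoals          # to be achieved in the given order

# Hypothetical operator for a negotiation phase of the dialogue.
negotiate = PlanOperator(
    goal="negotiate_date",
    constraints=[lambda ctx: ctx.get("topic_initialized", False)],
    subgoals=["process_suggest", "process_reaction"],
    actions=[lambda ctx: ctx.setdefault("history", []).append("negotiate_date")],
)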
fmwl_2_01: der Termin den wir neulich abgesprochen haben am zehnten an dem Samstag (MOTIVATE) (the date we recently agreed upon, the lOth that Saturday) da kann ich doch nich' (REJECT) (then I can not) wit sollten einen anderen ausmachen (INIT) (we should make another one) mpsl_2_02: wean ich da so meinen Termin- Kalender anschaue, (DELIBERATE) (if I look at my diary) dan sieht schlecht aus (REJECT). (that looks bad) Figure 4: Part of an example dialogue Since the VERBMOBIL system is not actively par- ticipating in the appointment scheduling task but only mediating between two dialogue participants it has to be assumed that every utterance, even if it is not consistent with the dialogue model, is a legal dialogue step. The first strategy for error recovery therefore is based on the hypothesis that the attri- bution of a dialogue act to a given utterance has been incorrect or rather that an utterance has vari- ous facets, i.e. multiple dialogue act interpretations. Currently, only the most plausible dialogue act is provided by the semantic evaluation component. To find out whether there might be an additional inter- pretation the plan recognizer relies on information provided by the statistics module. If an incompat- ible dialogue act is encountered, an alternative dia- logue act is looked up in the statistical module which is most likely to come after the preceding dialogue act and which can be consistently followed by the current dialogue act, thereby gaining an admissible dialogue act sequence. To illustrate this principle we show a part of the processing of two turns (fmwl..2_01 and mpsl_2_02, see figure 4) from an example dialogue with the di- alogue act assignments as provided by the seman- tic evaluation component. The translations stick to the German words as close as possible and are not provided by VERBMOBIL. The trace of the dialogue component is given in figure 5, starting with pro- cessing of INIT. Planner: -- Processing INIT Planner: -- Processing DELIBERATE Warning -- Repairing... Planner: -- Processing REJECT Trying to find a dialogue act to bridge DELIBERATE and REJECT ... Possible insertions and their scores: ((SUGGEST 81326) (REQUEST_COMMENT 37576) (DELIBERATE20572)) Testing SUGGEST for compatibility with surrounding dialogue acts... The previomsdialogue act INIT has an additional reading of SUGGEST: INIT -> INIT SUGGEST ! Warning -- Repairing... Planner: -- Processing IiIT Planner: -- Processing SUGGEST , . . Figure 5: Example of statistical repair In this example the case for statistical repair oc- curs when a REJECT does not - as expected - follow a SUGGEST. Instead, it comes after the INIT of the topic to be negotiated and after a DELIBERATE. The latter dialogue act can occur at any point of the dialogue; it refers to utterances which do not con- tribute to the negotiation as such and which can be best seen as "thinking aloud". As first option, the plan recognizer tries to repair this state using sta- tistical information, finding a dialogue act which is able to connect INIT and REJECT 1. As can be seen in figure 5 the dialogue acts REQUEST_COMMENT, DE- LIBERATE, and SUGGEST can be inserted to achieve a consistent dialogue. The annotated scores are the product of the transition probabilities times 1000 be- tween the previous dialogue act, the potential inser- tion and the current dialogue act which are provided 1 Because DELIBERATE has only the function of "so- cial noise" it can be omitted from the following considerations. 119 by the statistic module. 
Ordered according to their scores, these candidates for insertion are tested for compatibility with either the previous or the current dialogue act. The notion of compatibility refers to dialogue acts which have closely related meanings or which can be easily realized in one utterance. To find out which dialogue acts can be combined we examined the corpus for cases where the repair mechanism proposes an additional reading. Looking at the sample dialogues we then checked which of the proposed dialogue acts could actually occur together in one utterance, thereby gaining a list of admissi- ble dialogue act combinations. In the VERBMOBIL corpus we found that dialogue act combinations like SUGGEST and REJECT can never be attributed to one utterance, while INIT can often also be interpreted as a SUQGEST therefore getting a typical follow-up reaction of either an acceptance or a rejection. The latter case can be found in our example: INIT gets an additional reading of SUGeEST. In cases where no statistical solution is possible plan-based repair is used. When an unexpected di- alogue act occurs a plan operator is activated which distinguishes various types of repair. Depending on the type of the incoming dialogue act specialized repair operators are used. The simplest case cov- ers dialogue acts which can appear at any point of the dialogue, as e.g. DELIBERATE and clarification dialogues (CLARIFY_QUERY and CLARIFY-ANSWER). We handle these dialogue acts by means of repair in order to make the planning process more efficient: since these dialogue acts can occur at any point in the dialogue the plan recognizer in the worst case has to test for every new utterance whether it is one of the dialogue acts which indicates a deviation. To prevent this, the occurrence of one of these dialogue acts is treated as an unforeseen event which triggers the repair operator. In figure 5, the plan recognizer issues a warning after processing the DELIBERATE di- alogue act, because this act was inserted by means of a repair operator into the dialogue structure. 5 Conclusion This paper presents the method for statistical dia- logue act prediction currently used in the dialogue component of VERBMOBIL. It presents plan repair as one example of its use. The analysis of the statistical method shows that the prediction algorithm shows satisfactory results when deviations from the main dialogue model are excluded. If dialogue acts for deviations are in- cluded, the prediction rate drops around 10%. The analysis of the hit rate shows also a large variation in the structure of the dialogues from the corpus. We currently integrate the speaker direction into the prediction process which results in a gain of up to 5 % in the prediction hit rate. Additionally, we in- vestigate methods to cluster training dialogues in classes with a similar structure. An important application of the statistical predic- tion is the repair mechanism of the dialogue plan rec- ognizer. The mechanism proposed here contributes to the robustness of the whole VERBMOBIL system insofar as it is able to recognize cases where dialogue act attribution has delivered incorrect or insufficient results. This is especially important because the in- put given to the dialogue component is unreliable when dialogue act information is computed via the keyword spotter. Additional dialogue act readings can be proposed and the dialogue history can be changed accordingly. Currently, the dialogue component processes more than 200 annotated dialogues from the VERBMOBIL corpus. 
For each of these dialogues, the plan rec- ognizer builds a dialogue tree structure, using the method presented in section 4, even if the dialogue structure is inconsistent with the dialogue model. Therefore, our model provides robust techniques for the processing of even highly unexpected dialogue contributions. In a next version of the system it is envisaged that the semantic evaluation component and the keyword spotter are able to attribute a set of dialogue acts with their respective probabilities to an utterance. Also, the plan operators will be augmented with sta- tistical information so that the selection of the best possible follow-up dialogue acts can be retrieved by using additional information from the plan recog- nizer itself. References Jan Alexandersson, Elisabeth Maier, and Norbert Reithinger. 1995. A Robust and Efficient Three-Layered Dialog Component for a Speech- to-Speech Translation System. In Proceedings of the 7th Conference of the European Chapter of the A CL (EA CL-95), Dublin, Ireland. John Austin. 1962. How to do things with words. Oxford: Clarendon Press. Eric Bilange. 1991. A task independent oral dia- logue model. In Proceedings of the Fifth Confer- ence of the European Chapter of the Association for Computational Linguistics (EACL-91), pages 83-88, Berlin, Germany. Johan Bos, Elsbeth Mastenbroek, Scott McGlashan, Sebastian Millies, and Manfred Pinkal. 1994. The Verbmobil Semantic Formalismus. Technical re- port, Computerlinguistik, Universit~it des Saar- landes, Saarbriicken. Philip R. Cohen, Jerry Morgan, and Martha E. Pol- lack, editors. 1990. Intentions in Communication. MIT Press, Cambridge, MA. Elizabeth A. Hinkelman and Stephen P. Spackman. 1994. Communicating with Multiple Agents. In 120 Proceedings of the 15th International Conference on Computational Linguistics (COLING 94), Au- gust 5-9, 1994, Kyoto, Japan, volume 2, pages 1191-1197. Susanne Jekat, Alexandra Klein, Elisabeth Maier, Ilona Maleck, Marion Mast, and J. Joachim Quantz. 1995. Dialogue Acts in Verbmobil. Verb- mobil Report Nr. 65, Universit~it Hamburg, DFKI Saarbriicken, Universit~it Erlangen, TU Berlin. Fred Jellinek. 1990. Self-Organized Language Mod- eling for Speech Recognition. In A. Waibel and K.-F. Lee, editors, Readings in Speech Recogni- tion, pages 450-506. Morgan Kaufmann. Martin Kay, Jean Mark Gawron, and Peter Norvig. 1994. Verbmobil. A Translation System for Face- to-Face Dialog. Chicago University Press. CSLI Lecture Notes, Vol. 33. Roland Kuhn. 1993. Keyword Classification Trees for Speech Understanding Systems. Ph.D. thesis, School of Computer Science, McGill University, Montreal. Elisabeth Maier and Scott McGlashan. 1994. Se- mantic and Dialogue Processing in the VERB- MOBIL Spoken Dialogue Translation System. In Heinrich Niemann, Renato de Mori, and Ger- hard Hanrieder, editors, Progress and Prospects of Speech Research and Technology, volume 1, pages 270-273, Miinchen. Elisabeth Maier. 1994. Dialogmodellierung in VERBMOBIL - Pestlegung der Sprechhandlun- gen fiir den Demonstrator. Technical Report Verbmobil Memo Nr. 31, DFKI Saarbriicken. Marion Mast, Ralf Kompe, Franz Kummert, Hein- rich Niemann, and Elmar NSth. 1992. The Di- alogue Modul of the Speech Recognition and Di- alog System EVAR. In Proceedings of Interna- tional Conference on Spoken Language Processing (ICSLP'92), volume 2, pages 1573-1576. Marion Mast. 1995. SchliisselwSrter zur Detek- tion yon Diskontinuit~iten und Sprechhandlun- gen. Technical Report Verbmobil Memo Nr. 
57, Friedrich-Alexander-Universität, Erlangen-Nürnberg.

Mark T. Maybury. 1991. Planning Multisentential English Text Using Communicative Acts. Ph.D. thesis, University of Cambridge, Cambridge, GB.

Johanna Moore. 1994. Participating in Explanatory Dialogues. The MIT Press.

Masaaki Nagata and Tsuyoshi Morimoto. 1993. An experimental statistical dialogue model to predict the Speech Act Type of the next utterance. In Proceedings of the International Symposium on Spoken Dialogue (ISSD-93), pages 83-86, Waseda University, Tokyo, Japan.

Gerhard Th. Niedermair. 1992. Linguistic Modelling in the Context of Oral Dialogue. In Proceedings of International Conference on Spoken Language Processing (ICSLP'92), volume 1, pages 635-638, Banff, Canada.

Norbert Reithinger. 1995. Some Experiments in Speech Act Prediction. In AAAI 95 Spring Symposium on Empirical Methods in Discourse Interpretation and Generation, Stanford University.

John R. Searle. 1969. Speech Acts. Cambridge: University Press.

Marc Vilain. 1990. Getting Serious about Parsing Plans: a Grammatical Analysis of Plan Recognition. In Proceedings of AAAI-90, pages 190-197.

Wolfgang Wahlster. 1993. Verbmobil: Translation of Face-to-Face Dialogs. Technical report, German Research Centre for Artificial Intelligence (DFKI). In Proceedings of MT Summit IV, Kobe, Japan.
Evaluating Automated and Manual Acquisition of Anaphora Resolution Strategies Chinatsu Aone and Scott William Bennett Systems Research and Applications Corporation (SRA) 2000 15th Street North Arlington, VA 22201 aonec~sra.corn, bennett~sra.com Abstract We describe one approach to build an au- tomatically trainable anaphora resolution system. In this approach, we use Japanese newspaper articles tagged with discourse information as training examples for a ma- chine learning algorithm which employs the C4.5 decision tree algorithm by Quin- lan (Quinlan, 1993). Then, we evaluate and compare the results of several variants of the machine learning-based approach with those of our existing anaphora resolu- tion system which uses manually-designed knowledge sources. Finally, we compare our algorithms with existing theories of anaphora, in particular, Japanese zero pro- nouns. 1 Introduction Anaphora resolution is an important but still diffi- cult problem for various large-scale natural language processing (NLP) applications, such as information extraction and machine tr~slation. Thus far, no theories of anaphora have been tested on an empir- ical basis, and therefore there is no answer to the "best" anaphora resolution algorithm. I Moreover, an anaphora resolution system within an NLP sys- tem for real applications must handle: • degraded or missing input (no NLP system has complete lexicons, grammars, or semantic knowledge and outputs perfect results), and • different anaphoric phenomena in different do- mains, languages, and applications. Thus, even if there exists a perfect theory, it might not work well with noisy input, or it would not cover all the anaphoric phenomena. 1Walker (Walker, 1989) compares Brennan, Friedman a~ad Pollard's centering approach (Brennan et al., 1987) with Hobbs' algorithm (Hohbs, 1976) on a theoretical basis. These requirements have motivated us to de- velop robust, extensible, and trainable anaphora resolution systems. Previously (Aone and Mc- Kee, 1993), we reported our data-driven multilin- gual anaphora resolution system, which is robust, exteusible, and manually trainable. It uses dis- course knowledge sources (KS's) which are manu- ally selected and ordered. (Henceforth, we call the system the Manually-Designed Resolver, or MDR.) We wanted to develop, however, truly automatically trainable systems, hoping to improve resolution per- formance and reduce the overhead of manually con- structing and arranging such discourse data. In this paper, we first describe one approach we are taking to build an automatically trainable anaphora resolution system. In this approach, we tag corpora with discourse information, and use them as training examples for a machine learning algorithm. (Henceforth, we call the system the Ma- chine Learning-based Resolver, or MLR.) Specifi- cally, we have tagged Japanese newspaper articles about joint ventures and used the C4.5 decision tree algorithm by Quinlan (Quinlan, 1993). Then, we evaluate and compare the results of the MLR with those produced by the MDR. Finally, we compare our algorithms with existing theories of anaphora, in particular, Japanese zero pronouns. 2 Applying a Machine Learning Technique to Anaphora Resolution In this section, we first discuss corpora which we created for training and testing. Then, we describe the learning approach chosen, and discuss training features and training methods that we employed for our current experiments. 
2.1 Training and Test Corpora In order to both train and evaluate an anaphora resolution system, we have been developing cor- pora which are tagged with discourse information. The tagging has been done using a GUI-based tool called the Discourse Tagging Tool (DTTool) ac- cording to "The Discourse Tagging Guidelines" we 122 have developed. 2 The tool allows a user to link an anaphor with its antecedent and specify the type of the anaphor (e.g. pronouns, definite NP's, etc.). The tagged result can be written out to an SGML- marked file, as shown in Figure 1. For our experiments, we have used a discourse- tagged corpus which consists of Japanese newspaper articles about joint ventures. The tool lets a user de- fine types of anaphora as necessary. The anaphoric types used to tag this corpus are shown in Table 1. NAME anaphora are tagged when proper names are used anaphorically. For example, in Figure 1, "Yamaichi (ID=3)" and "Sony-Prudential (ID=5)" referring back to "Yamaichi Shouken (ID=4)" (Ya- maichi Securities) and "Sony-Prudential Seimeiho- ken (ID=6)" (Sony-Prudential Life Insurance) re- spectively are NAME anaphora. NAME anaphora in Japanese are different from those in English in that any combination of characters in an antecedent can be NAME anaphora as long as the character or- der is preserved (e.g. "abe" can be an anaphor of "abcde"). Japanese definite NPs (i.e. DNP anaphora) are those prefixed by "dou" (literally meaning "the same"), "ryou" (literally meaning "the two"), and deictic determiners like "kono"(this) and "sono" (that). For example, "dou-sha" is equivalent to "the company", and "ryou-koku" to "the two countries". The DNP anaphora with "dou" and "ryou" pre- fixes are characteristic of written, but not spoken, Japanese texts. Unlike English, Japanese has so-called zero pro- nouns, which are not explicit in the text. In these cases, the DTTool lets the user insert a "Z" marker just before the main predicate of the zero pronoun to indicate the existence of the anaphor. We made dis- tinction between QZPRO and ZPRO when tagging zero pronouns. QZPRO ("quasi-zero pronoun") is chosen when a sentence has multiple clauses (sub- ordinate or coordinate), and the zero pronouns in these clauses refer back to the subject of the initial clause in the same sentence, as shown in Figure 2. The anaphoric types are sub-divided according to more semantic criteria such as organizations, people, locations, etc. This is because the current appli- cation of our multilingual NLP system is informa- tion extraction (Aone et al., 1993), i.e. extracting from texts information about which organizations are forming joint ventures with whom. Thus, resolv- ing certain anaphora (e.g. various ways to refer back to organizations) affects the task performance more than others, as we previously reported (Aone, 1994). Our goal is to customize and evaluate anaphora res- olution systems according to the types of anaphora when necessary. 2Our work on the DTTool and tagged corpora was reported in a recent paper (Aone and Bennett, 1994). 2.2 Learning Method While several inductive learning approaches could have been taken for construction of the trainable anaphoric resolution system, we found it useful to be able to observe the resulting classifier in the form of a decision tree. The tree and the features used could most easily be compared to existing theories. Therefore, our initial approach has been to employ Quinlan's C4.5 algorithm at the heart of our clas- sification approach. 
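To make the classification set-up concrete, the sketch below pairs an anaphor with each candidate antecedent, turns the pair into a small feature vector, and trains a decision-tree learner on the labelled pairs. It is only an illustration: Quinlan's C4.5 is the algorithm actually used in this work, and scikit-learn's DecisionTreeClassifier merely stands in for it here; the feature names and values are invented examples rather than the 66 features described in Section 2.3.

```python
# Illustrative sketch only: scikit-learn's CART-style tree stands in for C4.5,
# and the four features below are invented examples, not the actual feature set.
from sklearn.tree import DecisionTreeClassifier

# Each training instance describes one (anaphor, candidate antecedent) pair;
# label 1 = the pair is coreferential in the tagged corpus, 0 = it is not.
pairs = [
    # [same_semantic_class, candidate_topicalized, char_subsequence_match, sentence_distance]
    ([1, 1, 1, 0], 1),
    ([1, 0, 0, 2], 0),
    ([0, 0, 0, 1], 0),
    ([1, 1, 0, 1], 1),
]
X = [features for features, label in pairs]
y = [label for features, label in pairs]

tree = DecisionTreeClassifier()   # stand-in for C4.5
tree.fit(X, y)

# At resolution time, every possible antecedent of a new anaphor is paired with
# it and classified; candidates classified as coreferential are kept as hypotheses.
print(tree.predict([[1, 1, 1, 1]]))
```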
We discuss the features used for learning below and go on to discuss the training methods and how the resulting tree is used in our anaphora resolution algorithm. 2.3 Training Features In our current machine learning experiments, we have taken an approach where we train a decision tree by feeding feature vectors for pairs of an anaphor and its possible antecedent. Currently we use 66 features, and they include lezical (e.g. category), syntactic (e.g. grammatical role), semantic (e.g. se- mantic class), and positional (e.g. distance between anaphor and antecedent) features. Those features can be either unary features (i.e. features of either an anaphor or an antecedent such as syntactic number values) or binary features (i.e. features concerning relations between the pairs such as the positional re- lation between an anaphor and an antecedent.) We started with the features used by the MDR, gener- alized them, and added new features. The features that we employed are common across domains and languages though the feature values may change in different domains or languages. Example of training features are shown in Table 2. The feature values are obtained automatically by processing a set of texts with our NLP system, which performs lexical, syntactic and semantic analysis and then creates discourse markers (Kamp, 1981) for each NP and S. 3 Since discourse markers store the output of lexical, syntactic and semantic process- ing, the feature vectors are automatically calculated from them. Because the system output is not always perfect (especially given the complex newspaper ar- ticles), however, there is some noise in feature values. 2.4 Training Methods We have employed different training methods using three parameters: anaphoric chains, anaphoric type identification, and confidence factors. The anaphoric chain parameter is used in selecting training examples. When this parameter is on, we select a set of positive training examples and a set of negative training examples for each anaphor in a text in the following way: 3 Existence of zero pronouns in sentences is detected by the syntax module, and discourse maxkers are created for them. 123 <CORe: m='I"><COREF n~'4">ttl--lEff-</mR~:<u.~J- m='s'>y-'-- • ~')l,~Y:,,~,)t,¢.@~l~ (~P,-'ll~l~:.~t, :¢4t. lr)~) <CORE]: m='O" rcPE='~ RB:='i"></COR~>III@b~. ~)q~'~6<COR~ ZD='2e rVPE='ZPm-t~-" REFf'I"></COREF>~Ii 3"~. <CORe: ZD='~' WRf"NANE--OR6" RB:f'4">ttI--</COE~<COREF ~"8">q~,l,~ltC)~e't-"~.'3tt~ttll~:~'~'& </COR~<COR~ m='s" WR='tt~E-O~ REFf"#'>y-'---. ~')t,-~>-b,,v)l,</mR~{:~-, <COmF n)="¢' WPE='Dm" REF='8"> C r~ 5, ~-7" I, <,'CUT~ ~ <CORBF m='9" WR='ZT4~O-O~ 8EEf'5"> </OR~ ff -~ T <CO~ m=" ~o" TYR='~O-U~ RE~='5"> Figure 1: Text Tagged with Discourse Information using SGML Tags DNP DNP-F DNP-L DNP-ORG DNP-P DNP-T DNP-BOTH DNP-BOTH-ORG DNP-BOTH-L DNP-BOTH-P REFLEXIVE NAME NAME-F NAME-L NAME-ORG NAME-P DPRO LOCI TIMEI QZPRO QZPRO-ORG QZPRO-P ZPRO ZPRO-IMP ZPRO-ORG ZPRO-P Table 1: Summary of Anaphoric Types Meaning Definite NP Definite NP Definite NP Definite NP Definite NP Definite NP whose referent is a facility whose referent is a location whose referent is an organization whose referent is a person whose referent is time Definite NP whose referent is two entities Definite NP whose referent is two organization entities Definite NP whose referent is two location entities Definite NP whose referent is two person entities Reflexive expressions (e.$. 
"jisha ~) Proper name Proper name for facility Proper name for location Proper name for organization Proper name for person Deictic pronoun (this, these) Locational indexical (here, there) Time indexical (now, then, later) Quasi-zero pronoun Quasi-zero pronoun whose referent is an organization Quasi-zero pronoun whose referent is a person Zero pronoun Zero pronoun in an impersonal construction Zero pronoun whose referent is an organization Zero pronoun whose referent is a person JDEL Dou-ellipsis SONY-wa RCA-to teikeishi, VCR-wo QZPRO Sony-subj RCA-with joint venture VCR-obj (it) kaihatsusuru to QZPRO happyoushita develop that (it) announced "(SONY) announced that SONY will form a joint venture with RCA and (it) will develop VCR's." Figure 2: QZPRO Example Table 2: Examples of Training Features Unary feature Binaxy feature Lexical category matching-category Syntactic topicalized matching-topicalized Semantic semantic-class subsuming-semantic-class Positional antecedent-precedes-anaphor 124 Positive training examples are those anaphor- antecedent pairs whose anaphor is directly linked to its antecedent in the tagged corpus and also whose anaphor is paired with one of the antecedents on the anaphoric chain, i.e. the transitive closure between the anaphor and the first mention of the antecedent. For example, if B refers to A and C refers to B, C- A is a positive training example as well as B-A and C-B. Negative training examples are chosen by pairing an anaphor with all the possible antecedents in a text except for those on the transitive closure described above. Thus, if there are possible antecedents in the text which are not in the C-B-A transitive closure, say D, C-D and B-D are negative training examples. When the anaphoric chain parameter is off, only those anaphor-antecedent pairs whose anaphora are directly linked to their antecedents in the corpus are considered as positive examples. Because of the way in which the corpus was tagged (according to our tagging guidelines), an anaphor is linked to the most recent antecedent, except for a zero pronoun, which is linked to its most recent overt antecedent. In other words, a zero pronoun is never linked to another zero pronoun. The anaphoric type identification parameter is utilized in training decision trees. With this param- eter on, a decision tree is trained to answer "no" when a pair of an anaphor and a possible antecedent are not co-referential, or answer the anaphoric type when they are co-referential. If the parameter is off, a binary decision tree is trained to answer just "yes" or "no" and does not have to answer the types of anaphora. The confidence factor parameter (0-100) is used in pruning decision trees. With a higher confidence factor, less pruning of the tree is performed, and thus it tends to overfit the training examples. With a lower confidence factor, more pruning is performed, resulting in a smaller, more generalized tree. We used confidence factors of 25, 50, 75 and 100%. The anaphoric chain parameter described above was employed because an anaphor may have more than one "correct" antecedent, in which case there is no absolute answer as to whether one antecedent is better than the others. The decision tree approach we have taken may thus predict more than one an- tecedent to pair with a given anaphor. Currently, confidence values returned from the decision tree are employed when it is desired that a single antecedent be selected for a given anaphor. 
We are experiment- ing with techniques to break ties in confidence values from the tree. One approach is to use a particular bias, say, in preferring the antecedent closest to the anaphor among those with the highest confidence (as in the results reported here). Although use of the confidence values from the tree works well in prac- tice, these values were only intended as a heuristic for pruning in Quinlan's C4.5. We have plans to use cross-validation across the training set as a method of determining error-rates by which to prefer one predicted antecedent over another. Another approach is to use a hybrid method where a preference-trained decision tree is brought in to supplement the decision process. Preference-trained trees, like that discussed in Connolly et al. (Connolly et al., 1994), are trained by presenting the learn- ing algorithm with examples of when one anaphor- antecedent pair should be preferred over another. Despite the fact that such trees are learning prefer- ences, they may not produce sufficient preferences to permit selection of a single best anaphor-antecedent combination (see the "Related Work" section be- low). 3 Testing In this section, we first discuss how we configured and developed the MLRs and the MDR for testing. Next, we describe the scoring methods used, and then the testing results of the MLRs and the MDR. In this paper, we report the results of the four types of anaphora, namely NAME-ORG, QZPRO-ORG, DNP-ORG, and ZPRO-ORG, since they are the ma- jority of the anaphora appearing in the texts and most important for the current domain (i.e. joint ventures) and application (i.e. information extrac- tion). 3.1 Testing the MLRa To build MLRs, we first trained decision trees with 1971 anaphora 4 (of which 929 were NAME-ORG; 546 QZPRO-ORG; 87 DNP-ORG; 282 ZPRO-ORG) in 295 training texts. The six MLRs using decision trees with different parameter combinations are de- scribed in Table 3. Then, we trained decision trees in the MLR-2 configuration with varied numbers of training texts, namely 50, 100, 150,200 and 250 texts. This is done to find out the minimum number of training texts to achieve the optimal performance. 3.2 Testing the MDR The same training texts used by the MLRs served as development data for the MDR. Because the NLP system is used for extracting information about joint ventures, the MDR was configured to handle only the crucial subset of anaphoric types for this ex- periment, namely all the name anaphora and zero pronouns and the definite NPs referring to organi- zations (i.e. DNP-ORG). The MDR applies different sets of generators, filters and orderers to resolve dif- ferent anaphoric types (Aone and McKee, 1993). A generator generates a set of possible antecedent hy- potheses for each anaphor, while a filter eliminates *In both training and testing, we did not in- clude anaphora which refer to multiple discontinuous antecedents. 125 MLR-1 MLR-2 MLR-3 MLR-4 MLR-5 MLR-6 Table 3: Six Configurations of MLRs yes no yes no yes no yes no yes yes no no confidence factor lOO% 75% ' 50% " 25% 75% 75% unlikely hypotheses from the set. An orderer ranks hypotheses in a preference order if there is more than one hypothesis left in the set after applying all the applicable filters. Table 4 shows KS's employed for the four anaphoric types. 3.3 Scoring Method We used recall and precision metrics, as shown in Table 5, to evaluate the performance of anaphora resolution. 
It is important to use both measures because one can build a high recall-low precision system or a low recall-high precision system, neither of which may be appropriate in certain situations. The NLP system sometimes fails to create discourse markers exactly corresponding to anaphora in texts due to failures of hxical or syntactic processing. In order to evaluate the performance of the anaphora resolution systems themselves, we only considered anaphora whose discourse markers were identified by the NLP system in our evaluation. Thus, the system performance evaluated against all the anaphora in texts could be different. Table 5: Recall and Precision Metrics for Evaluation Recall = Nc/I, Precision = Nc/Nn I Number of system-identified anaphora in input N~ Number of correct resolutions Nh Number of resolutions attempted 3.4 Testing Results The testing was done using 1359 anaphora (of which 1271 were one of the four anaphoric types) in 200 blind test texts for both the MLRs and the MDR. It should be noted that both the training and testing texts are newspaper articles about joint ventures, and that each article always talks about more than one organization. Thus, finding antecedents of orga- nizational anaphora is not straightforward. Table 6 shows the results of six different MLRs and the MDR for the four types of anaphora, while Table 7 shows the results of the MLR-2 with different sizes of train- ing examples, 4 Evaluation 4.1 The MLRs vs. the MDR Using F-measures 5 as an indicator for overall perfor- mance, the MLRs with the chain parameters turned on and type identification turned off (i.e. MLR-1, 2, 3, and 4) performed the best. MLR-1, 2, 3, 4, and 5 all exceeded the MDR in overall performance based on F-measure. Both the MLRs and the MDR used the char- acter subsequence, the proper noun category, and the semantic class feature values for NAME-ORG anaphora (in MLR-5, using anaphoric type identifi- cation). It is interesting to see that the MLR addi- tionally uses the topicalization feature before testing the semantic class feature. This indicates that, infor- mation theoretically, if the topicalization feature is present, the semantic class feature is not needed for the classification. The performance of NAME-ORG is better than other anaphoric phenomena because the character subsequence feature has very high an- tecedent predictive power. 4.1.1 Evaluation of the MLIts Changing the three parameters in the MLRs caused changes in anaphora resolution performance. As Table 6 shows, using anaphoric chains without anaphoric type identification helped improve the MLRs. Our experiments with the confidence fac- tor parameter indicates the trade off between recall and precision. With 100% confidence factor, which means no pruning of the tree, the tree overfits the examples, and leads to spurious uses of features such as the number of sentences between an anaphor and an antecedent near the leaves of the generated tree. This causes the system to attempt more anaphor resolutions albeit with lower precision. Conversely, too much pruning can also yield poorer results. MLR-5 illustrates that when anaphoric type iden- tification is turned on the MLR's performance drops SF-measure is calculated by: F= (~2+1.0) × P x R #2 x P+R where P is precision, R is recall, and /3 is the relative importance given to recall over precision. In this case, = 1.0. 
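Written out, the metrics are Recall = Nc / I and Precision = Nc / Nh (Table 5: I is the number of system-identified anaphora in the input, Nh the number of resolutions attempted, Nc the number of correct resolutions), and the F-measure of footnote 5 is F = (beta^2 + 1) * P * R / (beta^2 * P + R) with beta = 1. A minimal sketch of the scoring, checked against one row of Table 6:

```python
# Scoring from Table 5 and footnote 5; beta = 1 weighs recall and precision equally.
def recall(n_correct, n_identified):
    return n_correct / n_identified      # Nc / I

def precision(n_correct, n_attempted):
    return n_correct / n_attempted       # Nc / Nh

def f_measure(p, r, beta=1.0):
    return (beta ** 2 + 1.0) * p * r / (beta ** 2 * p + r)

# Sanity check against the MLR-2 average row of Table 6: R = 69.73, P = 86.73.
print(round(f_measure(86.73, 69.73), 2))   # approximately 77.3, as reported
```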
126 NAME-ORG DNP-ORG QZPRO-ORG ZPRO-ORG Generators Table 4: KS's used by the MDR Filters current-text current-text current-paragraph current-paragraph syntactic-category-propn nam~chax-subsequence semantic-class-org semantic-dass-org semantic-amount-singular not-in-the-same-dc semantic-dass-from-pred not-in-the-same-dc sere antic-dass-from-pred Orderers reverse-recency topica]ization subject-np recency topica]ization subject-np category-np recency topicalization subject-np category-np recency # exmpls MLR-1 MLR-2 MLR-3 MLR-4 MLR-5 MLR-6 MDR Table 6: Recall and Precision of the MLRs and the MDR NAME-ORG 631 R P 84.79 92.24 84.79 93.04 83.20 94.09 83.84 94.30 85.74 92.80 68.30 91.70 76.39 90.09 DNP-ORG 54 R P 44.44 50.00 44.44 52.17 37.04 58.82 38.89 60.00 44.44 55.81 29.63 64.00 35.19 50.00 383 R P 65.62 80.25 64.84 84.69 63.02 84.91 64.06 85.12 56.51 89.67 54.17 90.83 67.19 67.19 ZPRO-ORG 203 R P 4O.78 64.62 39.32 73.64 35.92 73.27 37.86 76.47 15.53 78.05 13.11 75.00 43.20 43.20 Average 1271 R P 70.20 83.49 69.73 86.73 67.53 88.04 68.55 88.55 63.84 89.55 53.49 89.74 66.51 72.91 F-measure 1271 F 76.27 77.30 76.43 77.28 74.54 67.03 69.57 texts 50 I00 150 2OO 25O 295 MDR Table 7:MLR-2 Configuration with Varied Training Data Sizes NAME-ORG DNP-ORG QZPRO-ORG ZPRO-ORG R P R P R 81.30 91.94 35.19 48.72 59.38 82.09 92.01 38.89 53.85 63.02 82.57 91.89 48.15 60.47 55.73 83.99 91.70 46.30 60.98 63.02 84.79 93.21 44.44 53.33 65.10 84.79 93.04 44.44 52.17 64.84 76.39 90.09 35.19 50.00 67.19 Average F-measure P R P R P F 76.77 29.13 56.07 64.31 81.92 72.06 85.82 28.64 62.77 65.88 85.89 74.57 85.60 20.39 70.00 62.98 87.28 73.17 82.88 36.41 65.22 68.39 84.99 75.79 83.89 40.78 73.04 70.04 86.53 77.42 84.69 39.32 73.64 69.73 86.73 77.30 67.19 43.20 43.20 66.51 72.91 69.57 127 but still exceeds that of the MDR. MLR-6 shows the effect of not training on anaphoric chains. It results in poorer performance than the MLR-1, 2, 3, 4, and 5 configurations and the MDR. One of the advantages of the MLRs is that due to the number of different anaphoric types present in the training data, they also learned classifiers for several additional anaphoric types beyond what the MDR could handle. While additional coding would have been required for each of these types in the MDR, the MLRs picked them up without ad- ditional work. The additional anaphoric types in- cluded DPRO, REFLEXIVE, and TIMEI (cf. Ta- ble 1). Another advantage is that, unlike the MDR, whose features are hand picked, the MLRs automat- ically select and use necessary features. We suspect that the poorer performance of ZPRO- OR(; and DNP-ORG may be due to the following deficiency of the current MLR algorithms: Because anaphora resolution is performed in a "batch mode" for the MLRs, there is currently no way to perco- late the information on an anaphor-antecedent link found by a system after each resolution. For exam- ple, if a zero pronoun (Z-2) refers to another zero pronoun (Z-l), which in turn refers to an overt NP, knowing which is the antecedent of Z-1 may be im- portant for Z-2 to resolve its antecedent correctly. However, such information is not available to the MLRs when resolving Z-2. 4.1.2 Evaluation of the MDR One advantage of the MDR is that a tagged train- ing corpus is not required for hand-coding the reso- lution algorithms. Of course, such a tagged corpus is necessary to evaluate system performance quan- titatively and is also useful to consult with during algorithm construction. 
However, the MLR results seem to indicate the limitation of the MDR in the way it uses orderer KS's. Currently, the MDR uses an ordered list of multiple orderer KS's for each anaphoric type (cf. Table 4), where the first applicable orderer KS in the list is used to pick the best antecedent when there is more than one possibility. Such selection ignores the fact that even anaphora of the same type may use different orderers (i.e. have different preferences), de- pending on the types of possible antecedents and on the context in which the particular anaphor was used in the text. 4.2 Training Data Size vs. Performance Table 7 indicates that with even 50 training texts, the MLR achieves better performance than the MDR. Performance seems to reach a plateau at about 250 training examples with a F-measure of around 77.4. 5 Related Work Anaphora resolution systems for English texts based on various machine learning algorithms, including a decision tree algorithm, are reported in Connolly et al. (Connolly et al., 1994). Our approach is different from theirs in that their decision tree identifies which of the two possible antecedents for a given anaphor is "better". The assumption seems to be that the closest antecedent is the "correct" antecedent. How- ever, they note a problem with their decision tree in that it is not guaranteed to return consistent clas- sifications given that the "preference" relationship between two possible antecedents is not transitive. Soderland and Lehnert's machine learning-based information extraction system (Soderland and Lehn- ert, 1994) is used specifically for filling particular templates from text input. Although a part of its task is to merge multiple referents when they corefer (i.e. anaphora resolution), it is hard to evaluate how their anaphora resolution capability compares with ours, since it is not a separate module. The only evaluation result provided is their extraction result. Our anaphora resolution system is modular, and can be used for other NLP-based applications such as machine translation. Soderland and Lehnert's ap- proach relies on a large set of filled templates used for training. Domain-specific features from those tem- plates are employed for the learning. Consequently, the learned classifiers are very domain-specific, and thus the approach relies on the availability of new filled template sets for porting to other domains. While some such template sets exist, such as those assembled for the Message Understanding Confer- ences, collecting such large amounts of training data for each new domain may be impractical. Zero pronoun resolution for machine translation reported by Nakaiwa and Ikehara (Nakaiwa and Ike- hara, 1992) used only semantic attributes of verbs in a restricted domain. The small test results (102 sentences from 29 articles) had high success rate of 93%. However, the input was only the first paragraphs of newspaper articles which contained relatively short sentences. Our anaphora resolu- tion systems reported here have the advantages of domain-independence and full-text handling without the need for creating an extensive domain knowledge base. Various theories of Japanese zero pronouns have been proposed by computational linguists, for ex- ample, Kameyama (Kameyama, 1988) and Walker et aL (Walker et al., 1994). Although these the- ories are based on dialogue examples rather than texts, "features" used by these theories and those by the decision trees overlap interestingly. For ex- ample, Walker et ai. 
proposes the following ranking scheme to select antecedents of zero pronouns. (GRAMMATICAL or ZERO) TOPIC > EMPATHY > SUBJECT > OBJECT2 > OBJECT > OTHERS 128 In examining decision trees produced with anaphoric type identification turned on, the following features were used for QZPRO-ORG in this order: topical- ization, distance between an anaphor and an an- tecedent, semantic class of an anaphor and an an- tecedent, and subject NP. We plan to analyze further the features which the decision tree has used for zero pronouns and compare them with these theories. 6 Summary and Future Work This paper compared our automated and manual ac- quisition of anaphora resolution strategies, and re- ported optimistic results for the former. We plan to continue to improve machine learning-based sys- tem performance by introducing other relevant fea- tures. For example, discourse structure informa- tion (Passonneau and Litman, 1993; Hearst, 1994), if obtained reliably and automatically, will be an- other useful domain-independent feature. In addi- tion, we will explore the possibility of combining machine learning results with manual encoding of discourse knowledge. This can be accomplished by allowing the user to interact with the produced clas- sifters, tracing decisions back to particular examples and allowing users to edit features and to evaluate the efficacy of changes. References Chinatsu Aone and Scott W. Bennett. 1994. Dis- course Tagging Tool and Discourse-tagged Mul- tilingual Corpora. In Proceedings of Interna- tional Workshop on Sharable Natural Language Resources (SNLR). Chinatsu Aone and Douglas McKee. 1993. Language-Independent Anaphora Resolution Sys- tem for Understanding Multilingual Texts. In Proceedings of 31st Annual Meeting of the ACL. Chinatsu Aone, Sharon Flank, Paul Krause, and Doug McKee. 1993. SRA: Description of the SOLOMON System as Used for MUC-5. In Pro- ceedings of Fourth Message Understanding Con- ference (MUC-5). Chinatsu Aone. 1994. Customizing and Evaluating a Multilingual Discourse Module. In Proceedings of the 15th International Conference on Compu- tational Linguistics (COLING). Susan Brennan, Marilyn Friedman, and Carl Pol- lard. 1987. A Centering Approach to Pronouns. In Proceedings of 25th Annual Meeting of the ACL. Dennis Connolly, John D. Burger, and David S. Day. 1994. A Machine Learning Approach to Anaphoric Reference. In Proceedings of Interna- tional Conference on New Methods in Language Processing (NEMLAP). Marti A. Hearst. 1994. Multi-Paragraph Segmenta- tion of Expository Text. In Proceedings of 32nd Annual Meeting of the ACL. Jerry R. Hobbs. 1976. Pronoun Resolution. Tech- nical Report 76-1, Department of Computer Sci- ence, City College, City University of New York. Megumi Kameyama. 1988. Japanese Zero Pronom- inal Binding, where Syntax and Discourse Meet. In Papers from the Second International Worksho on Japanese Syntax. Hans Kamp. 1981. A Theory of Truth and Semantic Representation. In J. Groenendijk et al., editors, Formal Methods in the Study of Language. Math- ematical Centre, Amsterdam. Hiromi Nakaiwa and Satoru Ikehara. 1992. Zero Pronoun Resolution in a Japanese to English Ma- chine Translation Systemby using Verbal Seman- tic Attribute. In Proceedings of the Fourth Con- ference on Applied Natural Language Processing. Rebecca J. Passonneau and Diane J. Litman. 1993. Intention-Based Segmentation: Human Reliabil- ity and Correlation with Linguistic Cues. In Pro- ceedings of 31st Annual Meeting of the ACL. J. Ross quinlan. 1993. 
C~.5: Programs forMachine Learning. Morgan Kaufmann Publishers. Stephen Soderland and Wendy Lehnert. 1994. Corpus-driven Knowledge Acquisition for Dis- course Analysis. In Proceedings of AAAI. Marilyn Walker, Masayo Iida, and Sharon Cote. 1994. Japanese Discourse and the Process of Cen- tering. Computational Linguistics, 20(2). Marilyn A. Walker. 1989. Evaluating Discourse Pro- cessing Algorithms. In Proceedings of 27th Annual Meeting of the ACL. 129 | 1995 | 17 |
Investigating Cue Selection and Placement in Tutorial Discourse Megan Moser Learning Research g: Dev. Center, and Department of Linguistics University of Pittsburgh Pittsburgh, PA 15260 moser@isp, pitt. edu Johanna D. Moore Department of Computer Science, and Learning Research & Dev. Center University of Pittsburgh Pittsburgh, PA 15260 jmoore @ cs. pitt. edu Abstract Our goal is to identify the features that pre- dict cue selection and placement in order to devise strategies for automatic text gen- eration. Much previous work in this area has relied on ad hoc methods. Our coding scheme for the exhaustive analysis of dis- course allows a systematic evaluation and refinement of hypotheses concerning cues. We report two results based on this anal- ysis: a comparison of the distribution of Sn~CE and BECAUSE in our corpus, and the impact of embeddedness on cue selection. Discourse cues play a crucial role in many dis- course processing tasks, including plan recogni- tion (Litman and Allen, 1987), anaphora resolu- tion (Gross and Sidner, 1986), and generation of coherent multisentential texts (Elhadad and McK- eown, 1990; Roesner and Stede, 1992; Scott and de Souza, 1990; Zukerman, 1990). Cues are words or phrases such as BECAUSE, FIRST, ALTHOUGH and ALSO that mark structural and semantic relation- ships between discourse entities. While some specific issues concerning cue usage have been resolved (e.g., the disambiguation of discourse and sentential cues (Hirschberg and Litman, 1993)), our concern is to identify general strategies of cue selection and place- ment that can be implemented for automatic text generation. Relevant research in reading comprehen- sion presents a mixed picture (Goldman and Mur- ray, 1992; Lorch, 1989), suggesting that felicitous use of cues improves comprehension and recall, but that indiscriminate use of cues may have detrimental effects on recall (Millis et al., 1993) and that the benefit of cues may depend on the subjects' reading skill and level of domain knowledge (McNamara et al., In press). However, interpreting the research is problematic because the manipulation of cues both within and across studies has been very unsystem- atic (Lorch, 1989). While Knott and Dale (1994) use systematic manipulation to identify functional categories of cues, their method does not provide the description of those functions needed for text generation. For the study described here, we developed a cod- ing scheme that supports an exhaustive analysis of a discourse. Our coding scheme, which we call Re- lational Discouse Analysis (RDA), synthesizes two accounts of discourse structure (Gross and Sidner, 1986; Mann and Thompson, 1988) that have often been viewed as incompatible. We have applied RDA to our corpus of tutorial explanations, producing an exhaustive analysis of each explanation. By doing such an extensive analysis and representing the re- sults in a database, we are able to identify patterns of cue selection and placement in terms of multiple factors including segment structure and semantic re- lations. For each cue, we determine the best descrip- tion of its distribution in the corpus. Further, we are able to formulate and verify more general patterns about the distribution of types of cues in the corpus. The corpus study is part of a methodology for identifying the factors that influence effective cue selection and placement. Our analysis scheme is co- ordinated with a system for automatic generation of texts. 
Due to this coordination, the results of our analyses of "good texts" can be used as rules that are implemented in the generation system. In turn, texts produced by the generation system provide a means for evaluation and further refinement of our rules for cue selection and placement. Our ultimate goal is to provide a text generation component that can be used in a variety of application systems. In addition, the text generator will provide a tool for the systematic construction of materials for reading comprehension experiments. The study is part of a project to improve the explanation component of a computer system that trains avionics technicians to troubleshoot complex electronic circuitry. The tutoring system gives the student a troubleshooting problem to solve, allows the student to solve the problem with minima] tutor interaction, and then engages the student in a post- problem critiquing session. During this session, the system replays the student's solution step by step, pointing out good aspects of the solution as well as ways in which the solution could be improved. 130 To determine how to build an automated explana- tion component, we collected protocols of 3 human expert tutors providing explanations during the cri- tiquing session. Because the explanation component we are building interacts with users via text and menus, the student and human tutor were required to communicate in written form. In addition, in or- der to study effective explanation, we chose experts who were rated as excellent tutors by their peers, students, and superiors. 1 Relational Discourse Analysis Because the recognition of discourse coherence and structure is complex and dependent on many types of non-linguistic knowledge, determining the way in which cues and other linguistic markers aid that recognition is a difficult problem. The study of cues must begin with descriptive work using intuition and observation to identify the factors affecting cue us- age. Previous research (Hobbs, 1985; Grosz and Sidner, 1986; Schiffrin, 1987; Mann and Thomp- son, 1988; Elhadad and McKeown, 1990) suggests that these factors include structural features of the discourse, intentional and informational relations in that structure, givenness of information in the dis- course, and syntactic form of discourse constituents. In order to devise an algorithm for cue selection and placement, we must determine how cue usage is af- fected by combinations of these factors. The corpus study is intended to enable us to gather this infor- mation, and is therefore conducted directly in terms of the factors thought responsible for cue selection and placement. Because it is important to detect the contrast between occurrence and nonoccurrence of cues, the corpus study must be be exhaustive, i.e., it must include all of the factors thought to contribute to cue usage and all of the text must be analyzed. From this study, we are deriving a system of hypotheses about cues. In this section we describe our approach to the analysis of a single speaker's discourse, which we call Relational Discourse Analysis (RDA). Apply- ing RDA to a tutor's explanation is exhaustive, i.e., every word in the explanation belongs to exactly one element in the analysis. All elements of the analysis, from the largest constituents of an explanation to the minimal units, are determined by their function in the discourse. A tutor may offer an explanation in multiple segments, the topmost constituents of the explanation. 
Multiple segments arise when a tutor's explanation has several steps, e.g., he may enumerate several reasons why the student's action was inemcient, or he may point out the flaws in the student's step and then describe a better alterna- tive. Each segment originates with an intention of the speaker; segments are identified by looking for sets of clauses that taken together serve a purpose. Segments are internally structured and consist of a core, i.e., that element that most directly expresses the segment purpose, and any number of contrlb- utors, the remaining constituents in the segment each of which plays a role in serving the purpose expressed by the core. For each contributor in a segment, we analyze its relation to the core from an intentional perspective, i.e., how it is intended to support the core, and from an informational perspec- tive, i.e., how its content relates to that of the core. Each segmei,t constituent, both core and contribu- tors, may itself be a segment with a core:contributor structure, or may be a simpler functional element. There are three types of simpler functional elements: (1) units, which are descriptions of domain states and actions, (2) matrix elements, which express a mental attitude, a prescription or an evaluation by embedding another element, and (3) relation clus- ters, which are otherwise like segments except that they have no core:coatributor structure. This approach synthesizes ideas which were pre- viously thought incompatible from two theories of discourse structure, the theory proposed by Grosz and Sidner (1986) and Rhetorical Structure Theory (RST) proposed by Mann and Thompson (1988). The idea that the hierarchical segment structure of discourse originates with intentions of the speaker, and thus the defining feature of a segment is that there be a recognizable segment purpose, is due to Grosz and Sidner. The idea that discourse is hierarchically structured by palrwise relations in which one relatum (the nucleus) is more central to the speaker's purpose is due to Mann and Thomp- son. Work by Moore and Pollack (1992) modi- fied the RST assumption that these palrwise re- lations are unique, demonstrating that intentional and informational relations occur simultaneously. Moser and Moore (1993) point out the correspon- dence between the relation of dominance among intentions in Grosz and Sidner and the nucleus- satellite distinction in RST. Because our analysis realizes this relation/distinction in a form different from both intention dominance and nuclearity, we have chosen the new terms core and contributor. To illustrate the application of RDA, consider the partial tutor explanation in Figure i t. The purpose of this segment is to inform the student that she made the strategy error of testing inside paxt3 too soon. The constituent that expresses the purpose, in this case (B), is the core" of the segment. The other constituents help to achieve the segment purpose. We analyze the way in which each contributor relates to the core from two perspectives, intentional and in- formational, as illustrated below. Each constituent may itself be a segment with its own core:contributor structure. For example, (C) is a subsegment whose tin order to make the example more intelligible to the reader, we replaced references to parts of the circuit with the simple labels partl, part~ and part3. 131 purpose is to give a reason for testing part2 first, namely that part2 is more susceptible to damage and therefore a more likely source of the circuit fault. 
The core of this subsegment is (C.2) because it most directly expresses this purpose. The contributor in (C.1) provides a reason for this susceptibility, i.e., that part2 is moved frequently. ALTHO A. you know that part1 is good, B. you should eliminate part2 before troubleshooting in part3. THIS IS BECAUSE C. 1. part2 is moved frequently AND THUS 2. is more susceptible to damage. Figure 1: An example tutor explanation Due to space limitations, we can provide only a brief description of core:contributor relations, and omit altogether the analysis of the example into the minimal RDA units of state and action units, matrix expressions and clusters. A contributor is analyzed for both its intentional and informational relations to its core. Intentional relations describe how a contributor may affect the heater's adoption of the core. For example, (A) in Figure 1 acknowl- edges a fact that might have led the student to make the mistake. Such a concession contributes to the hearer's adoption of the core in (B) by acknowledg- ing something that might otherwise interfere with this intended effect. Another kind of intentional re- lation is evidence, in which the contributors are intended to increase the hearer's belief in the core. For example, (C) stands in the evidence relation to (B). The set of intentional relations in RDA is a modification of the presentational relations of RST. Each core:contributor pair is also analyzed for its informational relation. These relations describe how the situations referred to by the core and contributor are related in the domain. The RDA analysis of the example in Figure 1 is shown schematically in Figure 2. As a convention, the core appears as the mother of all the relations it participates in. Each relation is labeled with both its intentional and informational relation, with the order of relata in the label indicating the linear order in the cliscourse. Each relation node has up to two daughters: the cue, if any, and the contributor, in the order they appear in the discourse. 2 Reliability of RDA application To assess inter-coder reliability of RDA analyses, we compared two independent analyses of the same data. Because the results reported in this paper de- pend only on the structural aspects of the analysis, our reliability assessment is confined to these. The conce$$ton:core step :prev-result ALTHO A B. you should eliminate part2 before troubleshooting in part3 core:eride~ce gcfion:regsozt THIS IS C.2 BECAUSE I evidence:core c=uae:e.~ect C.1 AND THUS Figure 2: The RDA analysis of the example in Fig- ure 1 categorization of core:contributor relations will not be assessed here. The reliability coder coded one quarter of the cur- rently analyzed corpus, consisting of 132 clauses, 51 segments, and 70 relations. Here we report the per- centage of instances for which the reliability coder agreed with the main coder on the various aspects of coding. There are several kinds of judgements made in an RDA analysis, and all of them are possible sources of disagreement. First, the two coders could analyze a contributor as supporting different cores. This oc- curred 7 times (90% agreement). Second, the coders could disagree on the core of a segment. This oc- curred 2 times (97% agreement). Third, the coders could disagree on which relation a cue was associ- ated with. This occurred 1 time (98% agreement). The final source of disagreement reflects more of a theoretical question than a question of reliable anal- ysis. 
The coders could disagree on whether a rela- turn should be further analyzed into an embedded core:contributor structure. This occurred 8 times (91% agreement). These rates of agreement cannot be sensibly com- pared to those found in studies of (nonembedded) segmentation agreement (Grosz and Hirschberg, 1992; Passonneau and Litman, 1993; Hearst, 1994) because our assessment of RDA reliability differs from this work in several key ways. First, the RDA coding task is more complex than identifying lo- cations of segment boundaries. Second, our sub- jects/coders are not naive about their task; they are trained. Finally, the data is not spoken as in these other studies. Future work will include a more extensive relia- bility study, one that includes the intentional and informational relations. 132 3 Initial results and their application For each tutor explanation in our corpus, each coder analyzes the text as described above, and then en- ters this analysis into a database. The technique of representing an analysis in a database and then using database queries to test hypotheses is similar to work using RST analyses to investigate the form of purpose clauses (Vander Linden et al., 1992). Be- cause our analysis is exhaustive, information about both occurrence and nonoccurrence of cues can be retrieved from the database in order to test and mod- ify hypotheses about cue usage. That is, both cue- based and factor-based retrievals are possible. In cue-based retrievals, we use an occurrence of the cue under investigation as the criterion for retrieving the value of its hypothesized descriptive factors. Factor- based retrievals provide information about cues that is unique to this study. In factor-based retrieval, the occurrence of a combination of descriptive factor values is the criteria for retrieving the accompanying cues. In this section, we report two results, one from each perspective: a comparison of the distribution of sn~cE and BECAUSE in our corpus, and the impact of embeddedness on cue selection. These results are based on the portion of our cor- pus that is analyzed and entered into the database, approximately 528 clauses. These clauses comprise 216 segments in which 287 relations were analyzed. Accompanying these relations were 165 cue occur- rences, resulting from 39 distinct cues. 3.1 Choice of"Since ~' or "Because" SINCE and BECAUSE were two of the most fre- quently used cues in our corpus, occurring 23 and 13 times, respectively. To investigate their distribution, we began with the proposal of Elhadad and McKeown (1990). As with our study, their work aims to define each cue in terms of fea- tures of the propositions it connects for the pur- pose of cue selection during text generation. Their work relies on the literature and intuitions to identify these features, and thus provides an important back- ground for a corpus study by suggesting features to include in the corpus analysis and initial hypotheses to investigate. Quirk et al. (1972) note several distributional dif- ferences between the two cues: (i) since is used when the contributor precedes the core, whereas BECAUSE typically occurs when the core precedes the contribu- tor, (ii) BECAUSE can be used to directly answer a ~#hy question, whereas SINCE cannot, and (iii) BECAUSE can be in the focus position of an it-cleft, whereas SINCE cannot. These distributional differences are reflected in our corpus, and the ordering difference (i) is of particular interest. SINCE and BECAUSE are al- ways placed with a contributor. 
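Counts like the ones reported next come from exactly this kind of retrieval. A toy sketch of the two retrieval directions over in-memory records follows; the field names and the three records are invented for illustration, since the actual database schema is not given here.

```python
# Toy illustration of cue-based versus factor-based retrieval; the field names
# and records are invented, not the actual database contents.
relations = [
    {"cue": "since",    "order": "contributor:core", "embedded_in": None},
    {"cue": "because",  "order": "core:contributor", "embedded_in": None},
    {"cue": "and thus", "order": "core:contributor", "embedded_in": "because"},
]

# Cue-based retrieval: start from a cue and collect the factor values it occurs with.
since_orders = [r["order"] for r in relations if r["cue"] == "since"]

# Factor-based retrieval: start from factor values and collect the accompanying
# cues, which also exposes relations that carry no cue at all.
core_first_cues = [r["cue"] for r in relations if r["order"] == "core:contributor"]

print(since_orders)      # ['contributor:core']
print(core_first_cues)   # ['because', 'and thus']
```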
All but one (22/23) occurrences of Sn~CE accompanied relations in con- tributor:core order, while all (13/13) occurrences of BECAUSE accompanied relations in core:contributor order 2. The crucial factor in distinguishing between S~CE and BECAUSE is the relative order of core and contrib- utor. Elhadad and McKeown (1990) claim that the two cues differ with respect to what Ducrot (1983) calls "polyphony", i.e., whether the subordinate re- latum is attributed to the hearer or to the speaker. The idea is that SINCE is used when a relatum has its informational source with the hearer (e.g., by being previously said or otherwise conveyed by the hearer). BECAUSE is monophonous, i.e., its relata originate from a single utterer, while sINCE can be polyphonous. According to Elhadad and McKeown, polyphony is a kind of given-new distinction and thus the ordering difference between the two cues reduces to the well-known tendency for given to pre- cede new. Unfortunately, this characterization of the distinction between s~cg and BECAUSE is not supported by our corpus study. As shown in Figure 3, whether or not contribu- tors could be attributed to the hearer did not corre- late with the choice of SINCE or BECAUSE. To judge whether a contributor is attributable to the student, mention of ~n action or result of a test that the student previously performed (e.g., you tested 30 to 9round earlier) was counted as 'yes', while informa- tion available by observation (e.g., partl a~d part2 are co~r~ected b~l wires), specialized circuit knowl- edge (e.g., part1 is used bll this test step) and gen- eral knowledge (e.g., part~ is more prone to damage ) were counted as 'no'. Is contributor Cue choice attributable sINCE BECAUSE to student? yes 13 no 10 Figure 3: Polyphony does not underlie the choice between SINCE and BECAUSE. This result shows that the choice between since and BECAUSE is determined by something other than the attributability of contributor to hearer. In fu- ture work, we will consider other factors that may determine ordering as possible alternative accounts for this choice. Another factor to be considered in distinguishing the two cues is the embeddedness dis- cussed in the next section. Furthermore, this result demonstrates the need to move beyond small num- bers of constructed examples and intuitions formed ~This included answers that begin with BECAUSE. In these cases, we took the core to be the presupposition to the question. 133 from unsystematic analyses of naturally occurring data. Only by an exhaustive analysis such as ours can hypotheses such as the one discussed here be systematically evaluated. 3.2 Effect of Segment Embeddedness on Cue Selection The second question we report on here concerns whether segment embeddedness affects cue selection. Much of the work on cue usage, e.g., (Elhadad and McKeown, 1990; Millis etal., 1993; Schiffrin, 1987; Zukerman, 1990) has focused on pairs of text spans, and this has led to the development of heuristics for cue selection that take into account the relation between the spans and other local features of the two relata (e.g., relative ordering of core and contributor, complexity of each span). However, analysis of our corpus led us to hypothesize that the hierarchical context in which a relation occurs, i.e., what seg- ment(s) the relation is embedded in, is a factor in cue usage. For example, recall that the relation between C.1 and C.2 in Figure 2 was expressed as part~ is moved frequently, AND THUS it is more susceptible to dam- age. 
Now, the relation between C.1 and C.2 could have been expressed, BECAUSE part2 is muted fre- quently, it is more musceptible to damage. However, this relation is embedded in the contributor of the relation between B and C, which is cued by THIS IS BECAUSE. Intuitively, we expect that, when a rela- tion is embedded in another relation already marked by BECAUSE, a speaker will select an alternative to BECAUSE to mark the embedded relation. That is, two relations, one embedded in the other, should be signaled by different cues. Because RDA analyses capture the hierarchical structure of texts, we were able to explore the effect of embedding on cue selec- tion. We hypothesized that cue selection for one rela- tion constrains the cue selection for relations em- bedded in it to be a different cue. To test this hy- pothesis, we paired each cue occurrence with all the other cue occurrences in the same turn. Then, for each pair of cues in the same turn, it was catego- rized in two ways: (1) the embeddedness of the rela- tions associated with the two cues, and (2) whether the two cues are the same, alternatives or different. Two cues are alternatives when their use with a re- lation would contribute (approximately) the same semantic content s . The sets of alternatives in our data are {ALSO,AND}, {BUT,ALTHOUGH,HOWEVER) and SBecause it is based on a test of intersubstitutability, the taxonomy proposed by Knott and Dale (1994) does not establish the sets of alternatives that are of inter- est here. Two cues may be intersubstitutable in some contexts but not semantic alternatives (e.g., AND and BECAUSE), or they may be semantic alternatives but not intersubstitutable because they are placed in different positions in a relation (e.g., so and BECAUSE). {BECAUSE,SINCE,SO,THUS,THEREFOI:tE}. The question is whether the choice between the same and an al- ternate cue correlates with the embeddedness of the two relations. As shown in Figure 4, we can conclude that, when a relation is going to have a cue that is semantically similar to the cue of a relation it is embedded in, an alternative cue must be chosen. Other researchers in text generation recognized the need to avoid repeti- tion of cues within a single text and devised heuris- tics such as "avoid repeating the same connective as long as there are others available" (Roesner and Stede, 1992). Our results show that this heuristic is over constraining. The first column of Figure 4 shows that the same cue may occur within a single explanation as long as there is no embedding be- tween the two relations being cued. Based on these results, our text generation algorithm will use em- beddedness as a factor in cue selection. Are relat|ons II Cue choice embedded? Same I Alternate . .. yes 0 7 no 6 18 Figure 4: Embeddedness correlates with choice be- tween same and alternate cues. 4 Conclusions We have introduced Relational Discourse Analysis, a coding scheme for the exhaustive analysis of text or single speaker discourse. RDA is a synthesis of ideas from two theories of discourse structure (Grosz and Sidner, 1986; Mann and Thompson, 1988). It pro- vides a system for analyzing discourse and formulat- ing hypotheses about cue selection and placement. The corpus study results in rules for cue selection and placement that will then be exercised by our text generator. Evaluation of these automatically generated texts forms the basis for further explo- ration of the corpus and subsequent refinement of the rules for cue selection and placement. 
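As a purely hypothetical illustration of how the two findings might be operationalized in a generator, the sketch below chooses a causal cue from the relative order of core and contributor and from the cues already used on enclosing relations; the function, its inputs, and the fallback ranking are invented and do not describe the authors' implemented algorithm.

```python
# Hypothetical sketch only: the fallback ranking is arbitrary and the function
# does not correspond to the authors' implemented cue-selection rules.
def choose_causal_cue(order, enclosing_cues):
    """Pick a causal cue from relatum order and the cues on enclosing relations."""
    # Finding 1: SINCE occurs with contributor:core order, BECAUSE with core:contributor.
    ranked = (["since", "so", "thus", "therefore", "because"]
              if order == "contributor:core"
              else ["because", "so", "thus", "therefore", "since"])
    # Finding 2: within an embedded relation, avoid repeating a cue already used on a
    # relation it is embedded in; fall back to a semantic alternative from the same set.
    for cue in ranked:
        if cue not in enclosing_cues:
            return cue
    return ranked[0]

print(choose_causal_cue("core:contributor", []))            # because
print(choose_causal_cue("core:contributor", ["because"]))   # so (an alternative is chosen)
```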
Two initial results from the corpus study were reported. While the factor of core:contributor or- der accounted for the choice between s~ce and BE- CAUSE, this factor could not be explained in terms of whether the contributor can be attributed to the hearer. Alternative explanations for the ordering factor will be explored in future work, including other types given-new distinctions and larger con- textual factors such as focus. Second, the cue selec- tion for one relation was found to constrain the cue selection for embedded relations to be distinct cues. Both of these results are being implemented in our text generator. 134 Acknowledgments The research described in this paper was supported by the Office of Naval Research, Cognitive and Neu- ral Sciences Division (Grant Number: N00014-91-J- 1694), and a grant from the DoD FY93 Augmen- tation of Awards for Science and Engineering Re- search Training (ASSERT) Program (Grant Num- ber: N00014-93-I-0812). We are grateful to Erin Glendening for her patient and careful coding and database entry, and to Maria Gordin for her relia- bility coding. References O. Ducrot. 1983. Le seas commun. Le dire et le dit. Les editions de Minuit, Paris. Michael Elhadad and Kathleen McKeown. 1990. Generating connectives. In Proceedings of the Thirteenth International Conference on Compu- tational Linguistics, pages 97-101, Helsinki. Susan R. Goldman and John D. Murray. 1992. Knowledge of connectors as cohesion devices in text: A comparative study of native-english speakers. Journal of Educational Ps~lchology, 44(4):504-519. Barbara Grosz and Julia Hirschberg. 1992. Some intonational characteristics of discourse structure. In Proceedings of the International Conference on Spoken Language Processing. Barbara J. Grosz and Candace L. Sidner. 1986. At- tention, intention, and the structure of discourse. Computational Linguistics, 12(3):175-204. Marti Hearst. 1994. Multl-paragraph segmentation of expository discourse. In Proceedings of the 32nd Annual Meeting of the Association for Computa- tional Linguistics. Julia Hirschberg and Diane Litman. 1993. Empiri- cal studies on the disambiguation of cue phrases. Computational Linguistics, 19(3):501-530. Jerry R. Hobbs. 1985. On the coherence and struc- ture of discourse. Technical Report CSLI-85-37, Center for the Study of Language and Informa- tion, Leland Stanford Junior University, Stanford, California, October. Alistair Knott and Robert Dale. 1994. Using lin- guistic pheomena to motivate a set of coherence relations. Discourse Processes, 18(1):35-62. Diane J. Litman and James F. Allen. 1987. A plan recognition model for subdialogues in conversa- tions. Cognitive Science, 11:163-200. Robert Lorch. 1989. Text signaling devices and their effects on reading and memory processes. Educational Ps~/chology Review, 1:209-234. William C. Mann and Sandra A. Thompson. 1988. Rhetorical Structure Theory: Towards a func- tional theory of text organization. TEXT, 8(3):243-281. Danielle S. McNamara, Eileen Kintsch, Nancy But- ler Songer, and Walter Klatsch. In press. Are good texts always better? Interactions of text coherence, background knowledge, and levels of understanding in learning from text. Cognition and Instruction. Keith Millis, Arthur Gracsser, and Karl Haberlandt. 1993. The impact of connectives on the memory for expository text. Applied Cognitive PsT/ehology, 7:317-339. Johanna D. Moore and Martha E. Pollack. 1992. A problem for RST: The need for multi-level discourse analysis. Computational Linguistics, 18(4):537-544. 
Megan Moser and Johanna D. Moore. 1993. Inves- tigating discourse relations. In Proceedings of the A CL Workshop on Intentionalit!/and Stureture in Discourse Relations, pages 94-98. Rebecca Passonneau and Diane Litmus. 1993. Intention-based segmentation: Human reliability and correlation with linguistic cues. In Proceed- ings of the 81st Annual Meeting of the Association for Computational Linguistics. Randolph Quirk et al. 1972. A Grammar of Con. temporary English. Longman, London. Dietmar Roesner and Manfred Stede. 1992. Cus- tomizing RST for the automatic production of technical manuals. In R. Dale, E. Hovy, D. Rosner, and O. Stock, editors, Proceedings of the Sizth International Workshop on Natu- ral Language Generation, pages 199-215, Berlin. Springer-Verlag. Deborah Schiffrin. 1987. Discourse Markers. Cam- bridge University Press, New York. Donia Scott and Clarisse Sieckenius de Souza. 1990. Getting the message across in RST-based text generation. In R. Dale, C. Mellish, and M. Zock, editors, Current Research in Natural Language Generation, pages 47-73. Academic Press, New York. Keith Vander Linden, Susanna Cumming, and James Martin. 1992. Expressing local rhetorical relations in instructional text. Technical Report 92-43, University of Colorado. To appear in Com- putational Linguistics. Ingrid Zukerman. 1990. A predictive approach for the generation of rhetorical devices. Computa- tional Intelligence, 6(1):25-40. 135 | 1995 | 18 |
Response Generation in Collaborative Negotiation*

Jennifer Chu-Carroll and Sandra Carberry
Department of Computer and Information Sciences
University of Delaware
Newark, DE 19716, USA
E-mail: {jchu,carberry}@cis.udel.edu

*This material is based upon work supported by the National Science Foundation under Grant No. IRI-9122026.

Abstract

In collaborative planning activities, since the agents are autonomous and heterogeneous, it is inevitable that conflicts arise in their beliefs during the planning process. In cases where such conflicts are relevant to the task at hand, the agents should engage in collaborative negotiation as an attempt to square away the discrepancies in their beliefs. This paper presents a computational strategy for detecting conflicts regarding proposed beliefs and for engaging in collaborative negotiation to resolve the conflicts that warrant resolution. Our model is capable of selecting the most effective aspect to address in its pursuit of conflict resolution in cases where multiple conflicts arise, and of selecting appropriate evidence to justify the need for such modification. Furthermore, by capturing the negotiation process in a recursive Propose-Evaluate-Modify cycle of actions, our model can successfully handle embedded negotiation subdialogues.

1 Introduction

In collaborative consultation dialogues, the consultant and the executing agent collaborate on developing a plan to achieve the executing agent's domain goal. Since agents are autonomous and heterogeneous, it is inevitable that conflicts in their beliefs arise during the planning process. In such cases, collaborative agents should attempt to square away (Joshi, 1982) the conflicts by engaging in collaborative negotiation to determine what should constitute their shared plan of actions and shared beliefs. Collaborative negotiation differs from non-collaborative negotiation and argumentation mainly in the attitude of the participants, since collaborative agents are not self-centered, but act in a way as to benefit the agents as a group. Thus, when facing a conflict, a collaborative agent should not automatically reject a belief with which she does not agree; instead, she should evaluate the belief and the evidence provided to her and adopt the belief if the evidence is convincing. On the other hand, if the evaluation indicates that the agent should maintain her original belief, she should attempt to provide sufficient justification to convince the other agent to adopt this belief if the belief is relevant to the task at hand.

This paper presents a model for engaging in collaborative negotiation to resolve conflicts in agents' beliefs about domain knowledge. Our model 1) detects conflicts in beliefs and initiates a negotiation subdialogue only when the conflict is relevant to the current task, 2) selects the most effective aspect to address in its pursuit of conflict resolution when multiple conflicts exist, 3) selects appropriate evidence to justify the system's proposed modification of the user's beliefs, and 4) captures the negotiation process in a recursive Propose-Evaluate-Modify cycle of actions, thus enabling the system to handle embedded negotiation subdialogues.
2 Related Work

Researchers have studied the analysis and generation of arguments (Birnbaum et al., 1980; Reichman, 1981; Cohen, 1987; Sycara, 1989; Quilici, 1992; Maybury, 1993); however, agents engaging in argumentative dialogues are solely interested in winning an argument and thus exhibit different behavior from collaborative agents. Sidner (1992; 1994) formulated an artificial language for modeling collaborative discourse using proposal/acceptance and proposal/rejection sequences; however, her work is descriptive and does not specify response generation strategies for agents involved in collaborative interactions.

Webber and Joshi (1982) have noted the importance of a cooperative system providing support for its responses. They identified strategies that a system can adopt in justifying its beliefs; however, they did not specify the criteria under which each of these strategies should be selected. Walker (1994) described a method of determining when to include optional warrants to justify a claim based on factors such as communication cost, inference cost, and cost of memory retrieval. However, her model focuses on determining when to include informationally redundant utterances, whereas our model determines whether or not justification is needed for a claim to be convincing and, if so, selects appropriate evidence from the system's private beliefs to support the claim.

Cawsey et al. (Cawsey et al., 1993; Logan et al., 1994) introduced the idea of utilizing a belief revision mechanism (Galliers, 1992) to predict whether a set of evidence is sufficient to change a user's existing belief and to generate responses for information retrieval dialogues in a library domain. They argued that in the library dialogues they analyzed, "in no cases does negotiation extend beyond the initial belief conflict and its immediate resolution" (Logan et al., 1994, page 141). However, our analysis of naturally-occurring consultation dialogues (Columbia University Transcripts, 1985; SRI Transcripts, 1992) shows that in other domains conflict resolution does extend beyond a single exchange of conflicting beliefs; therefore we employ a recursive model for collaboration that captures extended negotiation and represents the structure of the discourse. Furthermore, their system deals with a single conflict, while our model selects a focus in its pursuit of conflict resolution when multiple conflicts arise. In addition, we provide a process for selecting among multiple possible pieces of evidence.

3 Features of Collaborative Negotiation

Collaborative negotiation occurs when conflicts arise among agents developing a shared plan¹ during collaborative planning. A collaborative agent is driven by the goal of developing a plan that best satisfies the interests of all the agents as a group, instead of one that maximizes his own interest. This results in several distinctive features of collaborative negotiation: 1) A collaborative agent does not insist on winning an argument, and may change his beliefs if another agent presents convincing justification for an opposing belief. This differentiates collaborative negotiation from argumentation (Birnbaum et al., 1980; Reichman, 1981; Cohen, 1987; Quilici, 1992). 2) Agents involved in collaborative negotiation are open and honest with one another; they will not deliberately present false information to other agents, present information in such a way as to mislead the other agents, or strategically hold back information from other agents for later use.
This distinguishes collaborative negotiation from non-collaborative negotiation such as labor negotiation (Sycara, 1989). 3) Collaborative agents are interested in others' beliefs in order to decide whether to revise their own beliefs so as to come to agreement (Chu-Carroll and Carberry, 1995). Although agents involved in argumentation and non-collaborative negotiation take other agents' beliefs into consideration, they do so mainly to find weak points in their opponents' beliefs and attack them to win the argument.

¹The notion of shared plan has been used in (Grosz and Sidner, 1990; Allen, 1991).

In our earlier work, we built on Sidner's proposal/acceptance and proposal/rejection sequences (Sidner, 1994) and developed a model that captures collaborative planning processes in a Propose-Evaluate-Modify cycle of actions (Chu-Carroll and Carberry, 1994). This model views collaborative planning as agent A proposing a set of actions and beliefs to be incorporated into the plan being developed, agent B evaluating the proposal to determine whether or not he accepts the proposal and, if not, agent B proposing a set of modifications to A's original proposal. The proposed modifications will again be evaluated by A, and if conflicts arise, she may propose modifications to B's previously proposed modifications, resulting in a recursive process. However, our research did not specify, in cases where multiple conflicts arise, how an agent should identify which part of an unaccepted proposal to address or how to select evidence to support the proposed modification. This paper extends that work by incorporating into the modification process a strategy to determine the aspect of the proposal that the agent will address in her pursuit of conflict resolution, as well as a means of selecting appropriate evidence to justify the need for such modification.

4 Response Generation in Collaborative Negotiation

In order to capture the agents' intentions conveyed by their utterances, our model of collaborative negotiation utilizes an enhanced version of the dialogue model described in (Lambert and Carberry, 1991) to represent the current status of the interaction. The enhanced dialogue model has four levels: the domain level which consists of the domain plan being constructed for the user's later execution, the problem-solving level which contains the actions being performed to construct the domain plan, the belief level which consists of the mutual beliefs pursued during the planning process in order to further the problem-solving intentions, and the discourse level which contains the communicative actions initiated to achieve the mutual beliefs (Chu-Carroll and Carberry, 1994). This paper focuses on the evaluation and modification of proposed beliefs, and details a strategy for engaging in collaborative negotiations.

4.1 Evaluating Proposed Beliefs

Our system maintains a set of beliefs about the domain and about the user's beliefs. Associated with each belief is a strength that represents the agent's confidence in holding that belief. We model the strength of a belief using endorsements, which are explicit records of factors that affect one's certainty in a hypothesis (Cohen, 1985), following (Galliers, 1992; Logan et al., 1994). Our endorsements are based on the semantics of the utterance used to convey a belief, the level of expertise of the agent conveying the belief, stereotypical knowledge, etc.

The belief level of the dialogue model consists of mutual beliefs proposed by the agents' discourse actions.
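The paper does not provide an implementation, but the endorsement-based beliefs and strengths described above might be represented roughly as sketched below. The class names, fields, and the three-way strength scale used here are illustrative assumptions, not the authors' actual data structures.

from dataclasses import dataclass, field
from typing import List, Optional

# Strengths are ordered: a warranted belief outweighs a strong one, which outweighs a weak one.
STRENGTH_RANK = {"warranted": 3, "strong": 2, "weak": 1}

@dataclass
class Belief:
    proposition: str            # e.g. "On-Sabbatical(Smith, next year)"
    strength: str               # endorsement-derived strength: "warranted", "strong", or "weak"
    endorsements: List[str] = field(default_factory=list)  # e.g. ["semantic form", "expert source"]
    holder: str = "system"      # which agent holds the belief

@dataclass
class BeliefNode:
    """One node of a proposed belief tree: a belief plus the evidence
    (child nodes) offered in support of it."""
    belief: Belief
    support_strength: str = "warranted"     # strength of supports(child, parent)
    children: List["BeliefNode"] = field(default_factory=list)
    accepted: Optional[bool] = None         # filled in later by the evaluator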
When an agent proposes a new belief and gives (optional) supporting evidence for it, this set of proposed beliefs is represented as a belief tree, where the belief represented by a child node is intended to support that represented by its parent. The root nodes of these belief trees (top-level beliefs) contribute to problem-solving actions and thus affect the domain plan being developed. Given a set of newly proposed beliefs, the system must decide whether to accept the proposal or to initiate a negotiation dialogue to resolve conflicts. The evaluation of proposed beliefs starts at the leaf nodes of the proposed belief trees since acceptance of a piece of proposed evidence may affect acceptance of the parent belief it is intended to support. The process continues until the top-level proposed beliefs are evaluated. Conflict resolution strategies are invoked only if the top-level proposed beliefs are not accepted, because if collaborative agents agree on a belief relevant to the domain plan being constructed, it is irrelevant whether they agree on the evidence for that belief (Young et al., 1994).

In determining whether to accept a proposed belief or evidential relationship, the evaluator first constructs an evidence set containing the system's evidence that supports or attacks _bel and the evidence accepted by the system that was proposed by the user as support for _bel. Each piece of evidence contains a belief _beli and an evidential relationship supports(_beli, _bel). Following Walker's weakest link assumption (Walker, 1992), the strength of the evidence is the weaker of the strength of the belief and the strength of the evidential relationship. The evaluator then employs a simplified version of Galliers' belief revision mechanism² (Galliers, 1992; Logan et al., 1994) to compare the strengths of the evidence that supports and attacks _bel. If the strength of one set of evidence strongly outweighs that of the other, the decision to accept or reject _bel is easily made. However, if the difference in their strengths does not exceed a pre-determined threshold, the evaluator has insufficient information to determine whether to adopt _bel and therefore will initiate an information-sharing subdialogue (Chu-Carroll and Carberry, 1995) to share information with the user so that each of them can knowledgeably re-evaluate the user's original proposal. If, during information-sharing, the user provides convincing support for a belief whose negation is held by the system, the system may adopt the belief after the re-evaluation process, thus resolving the conflict without negotiation.

²For details on how our model determines the acceptance of a belief using the ranking of endorsements proposed by Galliers, see (Chu-Carroll, 1995).

[Figure 1: Belief and Discourse Levels for (2) and (3). The belief level proposes MB(¬Teaches(Smith,AI)) supported by MB(On-Sabbatical(Smith, next year)); the discourse level contains the Inform and Tell actions conveying "Dr. Smith is not teaching AI" and "Dr. Smith is going on sabbatical next year."]
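As a rough illustration of this evaluation step (not the authors' actual mechanism), the weakest-link combination of evidence and the three possible outcomes might be sketched as follows. The numeric ranks, the additive combination of evidence, and the threshold value are simplifying assumptions standing in for Galliers' endorsement-based belief revision.

RANK = {"warranted": 3, "strong": 2, "weak": 1}

def evidence_strength(belief_strength, relation_strength):
    # Walker's weakest link assumption: evidence is only as strong as its weakest part.
    return min(RANK[belief_strength], RANK[relation_strength])

def evaluate(pro_evidence, con_evidence, threshold=1):
    """Return 'accept', 'reject', or 'share-info' for a proposed belief.
    Each piece of evidence is a (belief strength, evidential-relationship strength) pair."""
    pro = sum(evidence_strength(b, r) for b, r in pro_evidence)
    con = sum(evidence_strength(b, r) for b, r in con_evidence)
    if pro - con > threshold:
        return "accept"
    if con - pro > threshold:
        return "reject"
    # Difference too small: initiate an information-sharing subdialogue instead.
    return "share-info"

# The example of Section 4.1.1: one warranted user statement for the proposed belief,
# versus one warranted and one strong piece of system evidence against it.
print(evaluate(pro_evidence=[("warranted", "warranted")],
               con_evidence=[("warranted", "warranted"), ("strong", "warranted")]))  # prints "reject"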
4.1.1 Example

To illustrate the evaluation of proposed beliefs, consider the following utterances:

(1) S: I think Dr. Smith is teaching AI next semester.
(2) U: Dr. Smith is not teaching AI.
(3) He is going on sabbatical next year.

Figure 1 shows the belief and discourse levels of the dialogue model that captures utterances (2) and (3). The belief evaluation process will start with the belief at the leaf node of the proposed belief tree, On-Sabbatical(Smith, next year). The system will first gather its evidence pertaining to the belief, which includes 1) a warranted belief³ that Dr. Smith has postponed his sabbatical until 1997 (Postponed-Sabbatical(Smith, 1997)), 2) a warranted belief that Dr. Smith postponing his sabbatical until 1997 supports the belief that he is not going on sabbatical next year (supports(Postponed-Sabbatical(Smith, 1997), ¬On-Sabbatical(Smith, next year))), 3) a strong belief that Dr. Smith will not be a visitor at IBM next year (¬visitor(Smith, IBM, next year)), and 4) a warranted belief that Dr. Smith not being a visitor at IBM next year supports the belief that he is not going on sabbatical next year (supports(¬visitor(Smith, IBM, next year), ¬On-Sabbatical(Smith, next year))), perhaps because Dr. Smith has expressed his desire to spend his sabbatical only at IBM. The belief revision mechanism will then be invoked to determine the system's belief about On-Sabbatical(Smith, next year) based on the system's own evidence and the user's statement. Since beliefs (1) and (2) above constitute a warranted piece of evidence against the proposed belief and beliefs (3) and (4) constitute a strong piece of evidence against it, the system will not accept On-Sabbatical(Smith, next year).

³The strength of a belief is classified as: warranted, strong, or weak, based on the endorsement of the belief.

The system believes that being on sabbatical implies a faculty member is not teaching any courses; thus the proposed evidential relationship will be accepted. However, the system will not accept the top-level proposed belief, ¬Teaches(Smith, AI), since the system has a prior belief to the contrary (as expressed in utterance (1)) and the only evidence provided by the user was an implication whose antecedent was not accepted.

4.2 Modifying Unaccepted Proposals

The collaborative planning principle in (Whittaker and Stenton, 1988; Walker, 1992) suggests that "conversants must provide evidence of a detected discrepancy in belief as soon as possible." Thus, once an agent detects a relevant conflict, she must notify the other agent of the conflict and initiate a negotiation subdialogue to resolve it; to do otherwise is to fail in her responsibility as a collaborative agent. We capture the attempt to resolve a conflict with the problem-solving action Modify-Proposal, whose goal is to modify the proposal to a form that will potentially be accepted by both agents. When applied to belief modification, Modify-Proposal has two specializations: Correct-Node, for when a proposed belief is not accepted, and Correct-Relation, for when a proposed evidential relationship is not accepted. Figure 2 shows the problem-solving recipes⁴ for Correct-Node and its subaction, Modify-Node, that is responsible for the actual modification of the proposal. The applicability conditions⁵ of Correct-Node specify that the action can only be invoked when _s1 believes that _node is not acceptable while _s2 believes that it is (when _s1 and _s2 disagree about the proposed belief represented by _node).
However, since this is a collaborative interaction, the actual modification can only be performed when both _s1 and _s2 believe that _node is not acceptable; that is, the conflict between _s1 and _s2 must have been resolved. This is captured by the applicability condition and precondition of Modify-Node. An attempt to satisfy the precondition causes the system to post as a mutual belief to be achieved the belief that _node is not acceptable, leading the system to adopt discourse actions to change _s2's beliefs, thus initiating a collaborative negotiation subdialogue.⁶

Action:     Correct-Node(_s1, _s2, _proposed)
Type:       Decomposition
Appl Cond:  believe(_s1, ¬acceptable(_node))
            believe(_s2, acceptable(_node))
Const:      error-in-plan(_node, _proposed)
Body:       Modify-Node(_s1, _s2, _proposed, _node)
            Insert-Correction(_s1, _s2, _proposed)
Goal:       acceptable(_proposed)

Action:     Modify-Node(_s1, _s2, _proposed, _node)
Type:       Specialization
Appl Cond:  believe(_s1, ¬acceptable(_node))
Precond:    believe(_s2, ¬acceptable(_node))
Body:       Remove-Node(_s1, _s2, _proposed, _node)
            Alter-Node(_s1, _s2, _proposed, _node)
Goal:       modified(_proposed)

Figure 2: The Correct-Node and Modify-Node Recipes

⁴A recipe (Pollack, 1986) is a template for performing actions. It contains the applicability conditions for performing an action, the subactions comprising the body of an action, etc.

⁵Applicability conditions are conditions that must already be satisfied in order for an action to be reasonable to pursue, whereas an agent can try to achieve unsatisfied preconditions.

⁶This subdialogue is considered an interrupt by Whittaker, Stenton, and Walker (Whittaker and Stenton, 1988; Walker and Whittaker, 1990), initiated to negotiate the truth of a piece of information. However, the utterances they classify as interrupts include not only our negotiation subdialogues, generated for the purpose of modifying a proposal, but also clarification subdialogues, and information-sharing subdialogues (Chu-Carroll and Carberry, 1995), which we contend should be part of the evaluation process.
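For illustration only, the two recipes in Figure 2 could be encoded as entries of a simple plan library as below; the dataclass format and field names are assumptions made for this sketch, not the representation used in the authors' system.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Recipe:
    action: str
    kind: str                                  # "Decomposition" or "Specialization"
    appl_cond: List[str] = field(default_factory=list)
    precond: List[str] = field(default_factory=list)
    const: List[str] = field(default_factory=list)
    body: List[str] = field(default_factory=list)
    goal: str = ""

CORRECT_NODE = Recipe(
    action="Correct-Node(_s1, _s2, _proposed)",
    kind="Decomposition",
    appl_cond=["believe(_s1, not acceptable(_node))", "believe(_s2, acceptable(_node))"],
    const=["error-in-plan(_node, _proposed)"],
    body=["Modify-Node(_s1, _s2, _proposed, _node)", "Insert-Correction(_s1, _s2, _proposed)"],
    goal="acceptable(_proposed)",
)

MODIFY_NODE = Recipe(
    action="Modify-Node(_s1, _s2, _proposed, _node)",
    kind="Specialization",
    appl_cond=["believe(_s1, not acceptable(_node))"],
    # Trying to achieve this unsatisfied precondition is what triggers the negotiation subdialogue.
    precond=["believe(_s2, not acceptable(_node))"],
    body=["Remove-Node(_s1, _s2, _proposed, _node)", "Alter-Node(_s1, _s2, _proposed, _node)"],
    goal="modified(_proposed)",
)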
A candidate loci tree contains the pieces of evidence in a proposed belief tree which, if disbelieved by the user, might change the user's view of the unaccepted top-level proposed belief (the root node of that belief tree). It is identified by performing a depth- first search on the proposed belief tree. When a node is visited, both the belief and the evidential relationship between it and its parent are examined. If both the be- lief and relationship were accepted by the evaluator, the search on the current branch will terminate, since once the system accepts a belief, it is irrelevant whether it accepts the user's support for that belief (Young et al., 1994). Otherwise, this piece of evidence will be included in the candidate loci tree and the system will continue to search through the evidence in the belief tree proposed as support for the unaccepted belief and/or evidential relationship. Once a candidate foci tree is identified, the system should select the focus of modification based on the like- lihood of each choice changing the user's belief about the top-level belief. Figure 3 shows our algorithm for this selection process. Given an unaccept~ belief (.bel) and the beliefs proposed to support it, Select-Focus. Modification will annotate_bel with 1) its focus of mod- ification (.bel.focus), which contains a set of beliefs (.bel and/or its descendents) which, if disbelieved by the user, are predicted to cause him to disbelieve _bel, and 2) the system's evidence against_bel itself (_hel.s-attack). Select-Focus-Modification determines whether to at- tack _bel's supporting evidence separately, thereby elim- inating the user's reasons for holding ..b¢l, to atta~ ..bel itself, or both. However, in evainating the effectiveness of attacking the proposed evidence for.bel, the system must determine whether or not it is possible to successfully re- fute a piece of evidence (i.e., whether or not the system believes that sufficient evidence is available to convince the user that a piece of proposed evidence is invalid), and if so, whether it is mote effective to attack the evidence it- self or its support. Thus the algorithm recursively applies itself to the evidence proposed as support for _bel which was not accepted by the system (step 3). In this recursive process, the algorithm annotates each unaccepted belief or evidential relationship proposed to support _bel with its focus of modification (-beli.focus) and the system's evidence against it (_beli.s-attack). _bell.focus contains the beliefs selected to be addressed in order to change the user's belief about ..beli, and its value will be nil if the system predicts that insufficient evidence is available to change the user's belief about -bell. Based on the information obtained in step 3, Select. Focus-Modification decides whether to attack the evi- dence proposed to support _bel, or _bel itself (step 4). Its preference is to address the unaccepted evidence, be- Select .Focus-Modlflcatlon(_bel): 1. _bel.u-evid +-- system's beliefs about the user's evidence pertaining to _bel _bel.s-attack 4- system's own evidence against _bel 2. If _bel is a leaf node in the candidate foci tree, 2.1 If Predict(_bel, _bel.u-evid + _bel.s-attack) = -~_bel then _bel.focus ,-- .bel; return 2.2 Else .bel.focus t- nil; return 3. Select focus for each of .bel's children in the candidate foci tree, .belx ..... ..bel,~: 3.1 If supports(_beli,_bel) is accepted but .beli is not, Select-Focus-Modlficatioa(.bel~ ). 
3.2 Else if .beli is accepted but supports(_beli,.bel) is not, Sdect-Focus-Modlficatlon(.beli,.bel). 3.3 Else Select-Focu-Modificatioa(.bel~) and Select- Focus-Modification( supports(_beli ,.bel)) 4. Choose between attacking the Woposed evidence for .bel and attacking ..bel itself: 4.1 eand-set ~-- {..beli I .beli E unaccepted user evidence for _bel A ..beli.focus ~ nil} 4.2 //Check if addressing _bol's unaccepted evidence is suffu:ient If Predkt(.bel, _bel.u-evid - cand-set) = --,.~l (i.e., the user's disbelief in all unaecepted evidence which . the system can refute will cause him to reject _bel), min-set ~- Select-Mtu-Set(_bel,cand-set) ..bel.focus ~- U_bel~ ¢_min-set ..beli.focus 4.3 //Check if addressing .bel itself is s~fcient Else if Predlct(.bel, ..bel.u-evid + .bel.s-attack) = -,.bel (i.e., the system's evidence against .bel will cause the user to reject _bel), .bel.focus ~-- .bel 4.4 //Check if addressing both .l~el and its unaccepted evidence is s~Ofcient Else if Predkt(..bel, _bel.s-attaek + .bel.u-evid - canal-set) = -,_bet, rain-set +-- Select-Mln-Set(.beL cand-set + _bel) .bel.focus +-- U.beli~dnin-set ..beli.focus U .bel 4.5 Else _bel.focus +-- nil Figure 3: Selecting the Focus of Modification cause McKeown's focusing rules suggest that continuing a newly introduced topic (about which there is more to be said) is preferable to returning to a previous topic OVIcK- cown, 1985). Thus the algorithm first considers whether or not attacking the user's support for ..bel is sufficient to convince him of--,-bel (step 4.2). It does so by gathering (in cand-set) evidence proposed by the user as direct sup- port for _bel but which was not accepted by the system and which the system predicts it can successfully refute (i.e., =beli.focus is not nil). The algorithm then hypothe- sizes that the user has changed his mind about each belief in cand-set and predicts how this will affect the user's belief about .bel (step 4.2). If the user is predicted to ac- cept --,..bel under this hypothesis, the algorithm invokes Select-Min-Set to select a minimum subset of cand-set as the unaccepted beliefs that it would actually pursue, and the focus of modification (..bel.focus) will be the union of 140 the focus for each of the beliefs in this minimum subset. If attacking the evidence for _bel does not appear to be sufficient to convince the user of -~_bel, the algorithm checks whether directly attacking _bel will accomplish this goal. If providing evidence directly against _bel is predicted to be successful, then the focus of modifica- tion is _bcl itself (step 4.3). If directly attacking _bel is also predicted to fail, the algorithm considers the ef- fect of attacking both ..bel and its unaccepted proposed evidence by combining the previous two prediction pro- cesses (step 4.4). If the combined evidence is still pre- dicted to fail, the system does not have sufficient evidence to change the user's view of_bel; thus, the focus of mod- ification for .bel is nil (step 4.5). 7 Notice that steps 2 and 4 of the algorithm invoke a function, Predict, that makes use of the belief revision mechanism (Galliers, 1992) dis- cussed in Section 4.1 to predict the user's acceptance or unacceptance of..bel based on the system's knowledge of the user's beliefs and the evidence that could be presented to him (Logan et al., 1994). The result of Select-Focus- Modification is a set of user beliefs (in _bel.focus) that need to be modified in order to change the user's belief about the unaccepted top-level belief. 
Thus, the negations of these beliefs will be posted by the system as mutual beliefs to be achieved in order to perform the Mod/fy actions. 4.2.2 Selecting Justification for a Claim Studies in communication and social psychology have shown that evidence improves the persuasiveness of a message (Luchok and McCroskey, 1978; Reynolds and Burgoon, 1983; Petty and Cacioppo, 1984; Hampie, 1985). Research on the quantity of evidence indicates that there is no optimal amount of evidence, but that the use of high-quality evidence is consistent with persua- sive effects (Reinard, 1988). On the other hand, Cn'ice's maxim of quantity (Grice, 1975) specifies that one should not contribute more information than is required, s Thus, it is important that a collaborative agent selects suffmient and effective, but not excessive, evidence to justify an intended mutual belief. To convince the user ofa belief,_bel, our system selects appropriate justification by identifying beliefs that could 7In collaborative dialogues, an agent should reject a pro- posal only ff she has strong evidence against it. When an agent does not have sufficient information to determine the accep- tance of a proposal, she should initiate an information-sharing subdialogue to share information with the other agent and re- evaluate the proposal (Chu-Carroll and Carberry, 1995). Thus, further research is needed to determine whether or not the focus of modification for a rejected belief will ever be nil in collabo- rative dialogues. sWalker (1994) has shown the importance of IRU's Odor- mationally Redundant Utterances) in efficient discourse. We leave including appropriate IRU's for future work. be used to support_bel and applying filtering heuristics to them. The system must first determine wbether justifica- tion for_bel is needed by predicting whether or not merely informing the user of _bel will be sufficient to convince him of _bel. If so, no justification will be presented. If justification is predicted to be necessary, the system will first construct the justification chains that could be used to support _bel. For each piece of evidence t~t could be used to directly support ..bel, the system first predicts whether the user will accept the evidence without justi- fication. If the user is predicted not to accept a piece of evidence (evidi), the system will augment the evidence to be presented to the user by posting evidi as a mutual be- lief to be achieved, and selecting propositions that could serve as justification for it. This results in a recursive process that returns a chain of belief justifications that could be used to support.bel. Once a set of beliefs forming justification chains is identified, the system must then select from this set those belief chains which, when presented to the user, are pre- dicted to convince the user of .bel. Our system will first construct a singleton set for each such justification chain and select the sets containing justification which, when presented, is predicted to convince the user of _bel. If no single justification chain is predicted to be sufficient to change the nser's beliefs, new sets will be constructed by combining the single justification chains, and the se- lection ~ is repeated. This will produce a set of possible candidate justification chains, and three heuris- tics will then be applied to select from among them. 
4.2.3 Example

After the evaluation of the dialogue model in Figure 1, Modify-Proposal is invoked because the top-level proposed belief is not accepted. In selecting the focus of modification, the system will first identify the candidate foci tree and then invoke the Select-Focus-Modification algorithm on the belief at the root node of the candidate foci tree. The candidate foci tree will be identical to the proposed belief tree in Figure 1 since both the top-level proposed belief and its proposed evidence were rejected during the evaluation process. This indicates that the focus of modification could be either ¬Teaches(Smith, AI) or On-Sabbatical(Smith, next year) (since the evidential relationship between them was accepted). When Select-Focus-Modification is applied to ¬Teaches(Smith, AI), the algorithm will first be recursively invoked on On-Sabbatical(Smith, next year) to determine the focus for modifying the child belief (step 3.1 in Figure 3). Since the system has two pieces of evidence against On-Sabbatical(Smith, next year), 1) a warranted piece of evidence containing Postponed-Sabbatical(Smith, 1997) and supports(Postponed-Sabbatical(Smith, 1997), ¬On-Sabbatical(Smith, next year)), and 2) a strong piece of evidence containing ¬visitor(Smith, IBM, next year) and supports(¬visitor(Smith, IBM, next year), ¬On-Sabbatical(Smith, next year)), the evidence is predicted to be sufficient to change the user's belief in On-Sabbatical(Smith, next year), and hence ¬Teaches(Smith, AI); thus, the focus of modification will be On-Sabbatical(Smith, next year). The Correct-Node specialization of Modify-Proposal will be invoked since the focus of modification is a belief, and in order to satisfy the precondition of Modify-Node (Figure 2), MB(S, U, ¬On-Sabbatical(Smith, next year)) will be posted as a mutual belief to be achieved.

Since the user has a warranted belief in On-Sabbatical(Smith, next year) (indicated by the semantic form of utterance (3)), the system will predict that merely informing the user of the intended mutual belief is not sufficient to change his belief; therefore it will select justification from the two available pieces of evidence supporting ¬On-Sabbatical(Smith, next year) presented earlier. The system will predict that either piece of evidence combined with the proposed mutual belief is sufficient to change the user's belief; thus, the filtering heuristics are applied. The first heuristic will cause the system to select Postponed-Sabbatical(Smith, 1997) and supports(Postponed-Sabbatical(Smith, 1997), ¬On-Sabbatical(Smith, next year)) as support, since it is the evidence in which the system is more confident.

The system will try to establish the mutual beliefs⁹ as an attempt to satisfy the precondition of Modify-Node.
This will cause the system to invoke Inform discourse actions to generate the following utterances:

(4) S: Dr. Smith is not going on sabbatical next year.
(5) He postponed his sabbatical until 1997.

If the user accepts the system's utterances, thus satisfying the precondition that the conflict be resolved, Modify-Node can be performed and changes made to the original proposed beliefs. Otherwise, the user may propose modifications to the system's proposed modifications, resulting in an embedded negotiation subdialogue.

⁹Only MB(S, U, Postponed-Sabbatical(Smith, 1997)) will be proposed as justification because the system believes that the evidential relationship needed to complete the inference is held by a stereotypical user.

5 Conclusion

This paper has presented a computational strategy for engaging in collaborative negotiation to square away conflicts in agents' beliefs. The model captures features specific to collaborative negotiation. It also supports effective and efficient dialogues by identifying the focus of modification based on its predicted success in resolving the conflict about the top-level belief and by using heuristics motivated by research in social psychology to select a set of evidence to justify the proposed modification of beliefs. Furthermore, by capturing collaborative negotiation in a cycle of Propose-Evaluate-Modify actions, the evaluation and modification processes can be applied recursively to capture embedded negotiation subdialogues.

Acknowledgments

Discussions with Candy Sidner, Stephanie Elzer, and Kathy McCoy have been very helpful in the development of this work. Comments from the anonymous reviewers have also been very useful in preparing the final version of this paper.

References

James Allen. 1991. Discourse structure in the TRAINS project. In DARPA Speech and Natural Language Workshop.

Lawrence Birnbaum, Margot Flowers, and Rod McGuire. 1980. Towards an AI model of argumentation. In Proceedings of the National Conference on Artificial Intelligence, pages 313-315.

Alison Cawsey, Julia Galliers, Brian Logan, Steven Reece, and Karen Sparck Jones. 1993. Revising beliefs and intentions: A unified framework for agent interaction. In The Ninth Biennial Conference of the Society for the Study of Artificial Intelligence and Simulation of Behaviour, pages 130-139.

Jennifer Chu-Carroll and Sandra Carberry. 1994. A plan-based model for response generation in collaborative task-oriented dialogues. In Proceedings of the Twelfth National Conference on Artificial Intelligence, pages 799-805.

Jennifer Chu-Carroll and Sandra Carberry. 1995. Generating information-sharing subdialogues in expert-user consultation. In Proceedings of the 14th International Joint Conference on Artificial Intelligence. To appear.

Jennifer Chu-Carroll. 1995. A Plan-Based Model for Response Generation in Collaborative Consultation Dialogues. Ph.D. thesis, University of Delaware. Forthcoming.

Paul R. Cohen. 1985. Heuristic Reasoning about Uncertainty: An Artificial Intelligence Approach. Pitman Publishing Company.

Robin Cohen. 1987. Analyzing the structure of argumentative discourse. Computational Linguistics, 13(1-2):11-24, January-June.

Columbia University Transcripts. 1985. Transcripts derived from audiotape conversations made at Columbia University, New York, NY. Provided by Kathleen McKeown.

Julia R. Galliers. 1992. Autonomous belief revision and communication. In Gardenfors, editor, Belief Revision. Cambridge University Press.

H. Paul Grice. 1975.
Logic and conversation. In Peter Cole and Jerry L. Morgan, editors, Syntax and Semantics 3: Speech Acts, pages 41-58. Academic Press, Inc., New York.

Barbara J. Grosz and Candace L. Sidner. 1990. Plans for discourse. In Cohen, Morgan, and Pollack, editors, Intentions in Communication, chapter 20, pages 417-444. MIT Press.

Dale Hample. 1985. Refinements on the cognitive model of argument: Concreteness, involvement and group scores. The Western Journal of Speech Communication, 49:267-285.

Aravind K. Joshi. 1982. Mutual beliefs in question-answer systems. In N.V. Smith, editor, Mutual Knowledge, chapter 4, pages 181-197. Academic Press.

Lynn Lambert and Sandra Carberry. 1991. A tripartite plan-based model of dialogue. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, pages 47-54.

Brian Logan, Steven Reece, Alison Cawsey, Julia Galliers, and Karen Sparck Jones. 1994. Belief revision and dialogue management in information retrieval. Technical Report 339, University of Cambridge, Computer Laboratory.

Joseph A. Luchok and James C. McCroskey. 1978. The effect of quality of evidence on attitude change and source credibility. The Southern Speech Communication Journal, 43:371-383.

Mark T. Maybury. 1993. Communicative acts for generating natural language arguments. In Proceedings of the National Conference on Artificial Intelligence, pages 357-364.

Kathleen R. McKeown. 1985. Text Generation: Using Discourse Strategies and Focus Constraints to Generate Natural Language Text. Cambridge University Press.

Donald D. Morley. 1987. Subjective message constructs: A theory of persuasion. Communication Monographs, 54:183-203.

Richard E. Petty and John T. Cacioppo. 1984. The effects of involvement on responses to argument quantity and quality: Central and peripheral routes to persuasion. Journal of Personality and Social Psychology, 46(1):69-81.

Martha E. Pollack. 1986. A model of plan inference that distinguishes between the beliefs of actors and observers. In Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics, pages 207-214.

Alex Quilici. 1992. Arguing about planning alternatives. In Proceedings of the 14th International Conference on Computational Linguistics, pages 906-910.

Rachel Reichman. 1981. Modeling informal debates. In Proceedings of the 7th International Joint Conference on Artificial Intelligence, pages 19-24.

John C. Reinard. 1988. The empirical study of the persuasive effects of evidence, the status after fifty years of research. Human Communication Research, 15(1):3-59.

Rodney A. Reynolds and Michael Burgoon. 1983. Belief processing, reasoning, and evidence. In Bostrom, editor, Communication Yearbook 7, chapter 4, pages 83-104. Sage Publications.

Candace L. Sidner. 1992. Using discourse to negotiate in collaborative activity: An artificial language. In AAAI-92 Workshop: Cooperation Among Heterogeneous Intelligent Systems, pages 121-128.

Candace L. Sidner. 1994. An artificial discourse language for collaborative negotiation. In Proceedings of the Twelfth National Conference on Artificial Intelligence, pages 814-819.

SRI Transcripts. 1992. Transcripts derived from audiotape conversations made at SRI International, Menlo Park, CA. Prepared by Jacqueline Kowtko under the direction of Patti Price.

Katia Sycara. 1989. Argumentation: Planning other agents' plans. In Proceedings of the 11th International Joint Conference on Artificial Intelligence, pages 517-523.
Marilyn Walker and Steve Whittaker. 1990. Mixed initiative in dialogue: An investigation into discourse segmentation. In Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics, pages 70-78.

Marilyn A. Walker. 1992. Redundancy in collaborative dialogue. In Proceedings of the 15th International Conference on Computational Linguistics, pages 345-351.

Marilyn A. Walker. 1994. Discourse and deliberation: Testing a collaborative strategy. In Proceedings of the 15th International Conference on Computational Linguistics.

Bonnie Webber and Aravind Joshi. 1982. Taking the initiative in natural language data base interactions: Justifying why. In Proceedings of COLING-82, pages 413-418.

Steve Whittaker and Phil Stenton. 1988. Cues and control in expert-client dialogues. In Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics, pages 123-130.

Robert S. Wyer, Jr. 1970. Information redundancy, inconsistency, and novelty and their role in impression formation. Journal of Experimental Social Psychology, 6:111-127.

R. Michael Young, Johanna D. Moore, and Martha E. Pollack. 1994. Towards a principled representation of discourse plans. In Proceedings of the Sixteenth Annual Meeting of the Cognitive Science Society, pages 946-951.

143 | 1995 | 19 |
Automatic Induction of Finite State Transducers for Simple Phonological Rules Daniel Gildea and Daniel Jurafsky International Computer Science Institute and University of California at Berkeley {gildea,jurafsky} @icsi.berkeley.edu Abstract This paper presents a method for learning phonological rules from sample pairs of un- derlying and surface forms, without negative evidence. The learned rules are represented as finite state transducers that accept underlying forms as input and generate surface forms as output. The algorithm for learning them is an extension of the OSTIA algorithm for learn- ing general subsequential finite state transduc- ers. Although OSTIA is capable of learning arbitrary s.-f.s.t's in the limit, large dictionaries of actual English pronunciations did not give enough samples to correctly induce phonolog- ical rules. We then augmented OSTIA with two kinds of knowledge specific to natural lan- guage phonology, biases from "universal gram- mar". One bias is that underlying phones are often realized as phonetically similar or iden- tical surface phones. The other biases phono- logical rules to apply across natural phonolog- ical classes. The additions helped in learning more compact, accurate, and general transduc- ers than the unmodified OSTIA algorithm. An implementation of the algorithm successfully learns a number of English postlexical rules. 1 Introduction Johnson (1972). first observed that traditional phonolog- ical rewrite rules can be expressed as regular relations if one accepts the constraint that no rule may reapply directly to its own output. This means that finite state transducers can be used to represent phonological rules, greatly simplifying the problem of parsing the output of phonological rules in order to obtain the underlying, lex- ical forms (Karttunen 1993). In this paper we explore an- other consequence of FST models of phonological rules: their weaker generative capacity also makes them easier to learn. We describe our preliminary algorithm for learn- ing rules from sample pairs of input and output strings, and the results we obtained. In order to take advantage of recent work in transducer induction, we have chosen to represent rules as subse- quential finite state transducers. Subsequential finite state transducers are a subtype of finite state transduc- ers with the following properties: 1. The transducer is deterministic, that is, there is only one arc leaving a given state for each input symbol. 2. Each time a transition is made, exactly one symbol of the input string is consumed. 3. A unique end of string symbol is introduced. At the end of each input string, the transducer makes an additional transition on the end of string symbol. 4. All states are accepting. The length of the output strings associated with a subse- quential transducer's transitions is not constrained. The subsequential transducer for the English flapping rule in 1 is shown in Figure 1; an underlying t is realized as a flap after a stressed vowel and any number of r's, and before an unstressed vowel. (1) t ~ dx / (r r* V 2 The OSTIA Algorithm Our phonological-rule induction algorithm is based on augmenting the Onward Subsequential Transducer Infer- ence Algorithm (OSTIA) of Oncina et al. (1993). This section outlines the OSTIA algorithm to provide back- ground for the modifications that follow. OSTIA takes as input a training set of input-output pairs. The algorithm begins by constructing a tree trans- ducer which covers all the training samples. 
The root of the tree is the transducer's initial state, and each leaf of the tree corresponds to the end of an input sample. The output symbols are placed as near the root of the tree as possible while avoiding conflicts in the output of a given arc. An example of the result of this initial tree construction is shown in Figure 2. At this point, the transducer covers all and only the strings of the training set. OSTIA now attempts to gen- eralize the transducer, by merging some of its states to- gether. For each pair of states (s, t) in the transducer, the algorithm will attempt to merge s with t, building a new 9 #:t , b:bae K - ] ae:0 t:0 n:n #:0 d:0 ~ #:0 : - @ Figure 2: Onward Tree Transducer for "bat", "batter", and "band" with Flapping Applied c c: c T .Y" " V :dxV ~.~ r:tr.. #:t Figure 1: Subsequential Transducer for English Flap- ping: Labels on arcs are of the form (input sym- bol):(output symbol). Labels with no colon indicate the same input and output symbols. 'V' indicates any un- stressed vowel, "v" any stressed vowel, 'dx' a flap, and 'C' any consonant other than 't', 'r' or 'dx'. '#' is the end of string symbol. Figure 3: Result of Merging States 0 and 1 of Figure 2 far. However, when trying to learn phonological rules from linguistic data, the necessary training set may not be available. In particular, systematic phonological con- straints such as syllable structure may rule out the neces- sary strings. The algorithm does not have the language bias which would allow it to avoid linguistically unnatural transducers. b:bae ae:0 n:nd t:0 m m l er ."dxer #:t state with all of the incoming and outgoing transitions of s and f. The result of the first merging operation on the transducer of Figure 2 is shown in Figure 3, and the end result of the OSTIA alogrithm in shown in Figure 4. 3 Problems Using OSTIA to learn Phonological Rules The OSTIA algorithm can be proven to learn any subse- quential relation in the limit. That is, given an infinite sequence of valid input/output pairs, it will at some point derive the target transducer from the samples seen so Figure 4: Final Result of Merging Process on Transducer from Figure 2 For example, OSTIA's tendency to produce overly "clumped" transducers is illustrated by the arcs with out "b ae" and "n d" in the transducer in Figure 4, or even Fig- ure 2. OSTIA's default behavior is to emit the remainder of the output string for a transduction as soon as enough input symbols have been seen to uniquely identify the input string in the training set. This results in machines which may, seemingly at random, insert or delete se- quences of four or five phonemes, something which is 10 linguistically implausible. In addition, the incorrect dis- tribution of output symbols prevents the optimal merging of states during the learning process, resulting in large and inaccurate transducers. Another example of an unnatural generalization is shown in 4, the final transducer induced by OSTIA on the three word training set of Figure 2. For example, the transducer of Figure 4 will insert an 'ae' after any 'b', and delete any 'ae' from the input. Perhaps worse, it will fail completely upon seeing any symbol other than 'er' or end-of-string after a 't'. While it might be unreasonable to expect any transducer trained on three samples to be perfect, the transducer of Figure 4 illustrates on a small scale how the OSTIA algorithm might be improved. 
Similarly, if the OSTIA algorithm is training on cases of flapping in which the preceding environment is ev- ery stressed vowel but one, the algorithm has no way of knowing that it can generalize the environment to all stressed vowels. The algorithm needs knowledge about classes of phonemes to fill in accidental gaps in training data coverage. 4 Using Alignment Information Our first modification of OSTIA was to add the bias that, as a default, a phoneme is realized as itself, or as a sim- ilar phone. Our algorithm guesses the most probable phoneme to phoneme alignment between the input and output strings, and uses this information to distribute the output symbols among the arcs of the initial tree trans- ducer. This is demonstrated for the word "importance" in Figures 5 and 6. ih m p oal r t ah n s IIII /111 ih m p oal dx ah n t s Figure 5: Alignment of "importance" with flapping, r- deletion and t-insertion The modification proceeds in two stages. First, a dynamic programming method is used to compute a correspondence between input and output phonemes. The alignment uses the algorithm of Wagner & Fischer (1974), which calculates the insertions, deletions, and substitutions which make up the minimum edit distance between the underlying and surface strings. The costs of edit operations are based on phonetic features; we used 26 binary articulatory features. The cost function for sub- stitutions was equal to the number of features changed between the two phonemes. The cost of insertions and deletions was 6 (roughly one quarter the maximum pos- sible substitution cost). From the sequence of edit opera- tions, a mapping of output phonemes to input phonemes is generated according to the following rules: • Any phoneme maps to an input phoneme for which it substitutes • Inserted phonemes map to the input phoneme im- mediately following the first substitution to the left of the inserted phoneme Second, when adding a new arc to the tree, all the un- used output phonemes up to and including those which map to the arc's input phoneme become the new ar- c's output, and are now marked as having been used. When walking down branches of the tree to add a new input/output sample, the longest common prefix, n, of the sample's unused output and the output of each arc is cal- culated. The next n symbols of the transduction's output are now marked as having been used. If the length, l, of the arc's output string is greater than n, it is necessary to push back the last l - n symbols onto arcs further down the tree. A tree transducer constructed by this process is shown in Figure 7, for comparison with the unaligned version in Figure 2. Results of our alignment algorithm are summarized in §6. The denser distribution of output symbols resulting from the alignment constrains the merging of states early in the merging loop of the algorithm. Interestingly, pre- venting the wrong states from merging early on allows more merging later, and results in more compact trans- ducers. 5 Generalizing Behavior With Decision Trees In order to allow OSTIA to make natural generalizations in its rules, we added a decision tree to each state of the machine, describing the behavior of that state. For exam- ple, the decision tree for state 2 of the machine in Figure 1 is shown in Figure 8. Note that if the underlying phone is an unstressed vowel ([-cons,-stress]), the machine out- puts a flap, followed by the underlying vowel, otherwise it outputs a 't' followed by the underlying phone. 
The decision trees describe the behavior of the machine at a given state in terms of the next input symbol by generalizing from the arcs leaving the state. The decision trees classify the arcs leaving each state based on the arc's input symbol into groups with the same behavior. The same 26 binary phonetic features used in calculating edit distance were used to classify phonemes in the decision trees. Thus the branches of the decision tree are labeled with phonetic feature values of the arc's input symbol, and the leaves of the tree correspond to the different behaviors. By an arc's behavior, we mean its output string considered as a function of its input phoneme, and its destination state. Two arcs are considered to have the same behavior if they agree each of the following: • the index i of the output symbol corresponding to the input symbol (determined from the alignment procedure) • the difference of the phonetic feature vectors of the input symbol and symbol i of the output string • the prefix of length i - 1 of the output string 11 ~ t : d x 6ah:ah 7 n:n 8 s:ts 9 Figure 6: Resulting initial transducer for "importance" b:b ~ ae:ae t:0 d:d #:0 Figure 7: Initial Tree Transducer Constructed with Alignment Information: Note that output symbols have been pushed back across state 3 during the construction cons stress 2 1 2 Outcomes: 1: Output: dx [ ], Destination State: 0 2: Output: t [ ], Destination State: 0 3: On end of string: Output: t, Destination State: 0 Figure 8: Example Decision Tree: This tree describes the behavior of State 2 of the transducer in Figure 1. [ ] in the output string indicates the arc's input symbol (with no features changed). • the suffix of the output string beginning at position i+1 • the destination state After the process of merging states terminates, a deci- sion tree is induced at each state to classify the outgoing arcs. Figure 9 shows a tree induced at the initial state of the transducer for flapping. Using phonetic features to build a decision tree guar- antees that each leaf of the tree represents a natural class of phonemes, that is, a set of phonemes that can be de- scribed by specifying values for some subset of the pho- netic features. Thus if we think of the transducer as a set of rewrite rules, we can now express the context of each rule as a regular expression of natural classes of preceding phonemes. stress j " .. 1 tense rounded 2 7---..<. y-offglide _,\--..< high 1 w-off glide /',,: prim-stress -/\+ 1 2 Outcomes: 1: Output: [ ], Destination State: 0 2: Output: [ ], Destination State: 1 prim-stress ix+ 1 2 On end of string: Output: nil, Destination State: 0 Figure 9: Decision Tree Before Pruning: The initial state of the flapping transducer Some induced transducers may need to be generalized even further, since the input transducer to the decision 12 tree learning may have arcs which are incorrect merely because of accidental prior structure. Consider again the English flapping rule, which applies in the context of a preceding stressed vowel. Our algorithm first learned a transducer whose decision tree is shown in Figure 9. In this transducer all arcs leaving state 0 correctly lead to the flapping state on stressed vowels, except for those stressed vowels which happen not to have occurred in the training set. For these unseen vowels (which consisted of the rounded diphthongs 'oy' and 'ow' with secondary stress), the transducers incorrectly returns to state 0. 
Some induced transducers may need to be generalized even further, since the input transducer to the decision tree learning may have arcs which are incorrect merely because of accidental prior structure. Consider again the English flapping rule, which applies in the context of a preceding stressed vowel:

(2) t → dx / V́ r* ___ V

Our algorithm first learned a transducer whose decision tree is shown in Figure 9. In this transducer all arcs leaving state 0 correctly lead to the flapping state on stressed vowels, except for those stressed vowels which happen not to have occurred in the training set. For these unseen vowels (which consisted of the rounded diphthongs 'oy' and 'ow' with secondary stress), the transducer incorrectly returns to state 0. In this case, we wish the algorithm to make the generalization that the rule applies after all stressed vowels.

This type of generalization can be accomplished by pruning the decision trees at each state of the machine. Pruning is done by stepping through each state of the machine and pruning as many decision nodes as possible at each state. The entire training set of transductions is tested after each branch is pruned. If any errors are found, the outcome of the pruned node's other child is tested. If errors are still found, the pruning operation is reversed. This process continues at the fringe of the decision tree until no more pruning is possible. Figure 10 shows the correct decision tree for flapping, obtained by pruning the tree in Figure 9.

[Figure 10: The Same Decision Tree After Pruning]

The process of pruning the decision trees is complicated by the fact that the pruning operations allowed at one state depend on the status of the trees at each other state. Thus it is necessary to make several passes through the states, attempting additional pruning at each pass, until no more improvement is possible. In addition, testing each pruning operation against the entire training set is expensive, but in the case of synthetic data it gives the best results. For other applications it may be desirable to keep a cross validation set for this purpose.

6 Results and Discussion

We tested our induction algorithm using a synthetic corpus of 99,279 input/output pairs. Each pair consisted of an underlying and a surface pronunciation of an individual word of English. The underlying string of each pair was taken from the phoneme-based CMU pronunciation dictionary. The surface string was generated from each underlying form by mechanically applying the one or more rules we were attempting to induce in each experiment.

In our first experiment, we applied the flapping rule in (2) to training corpora of between 6250 and 50,000 words. Figure 11 shows the transducer induced from 50,000 training samples, and Figure 12 shows some performance results.

[Figure 11: Flapping Transducer Induced from 50,000 Samples]

Figure 12: Results Using Alignment Information on English Flapping

  Samples   OSTIA w/o Alignment     OSTIA w/ Alignment
            States     % Error      States     % Error
  6250        19         2.32          3         0.34
  12500      257        16.40          3         0.14
  25000      141         4.46          3         0.06
  50000      192         3.14          3         0.01

As can be seen from Figure 12, the use of alignment information in creating the initial tree transducer dramatically decreases the number of states in the learned transducer as well as the error performance on test data. The improved algorithm induced a flapping transducer with the minimum number of states with as few as 6250 samples. The use of alignment information also reduced the learning time; the additional cost of calculating alignments is more than compensated for by quicker merging of states. The algorithm also successfully induced transducers with the minimum number of states for the t-insertion and t-deletion rules below, given only 6250 samples.

In our second experiment, we applied our learning algorithm to a more difficult problem: inducing multiple rules at once. A data set was constructed by applying the t-insertion rule in (3), the t-deletion rule in (4) and the flapping rule already seen in (2) one after another.

(3) 0 → t / n ___ s
(4) t → 0 / n ___ [+vocalic, -stress]

As is seen in Figure 13, a transducer of minimum size (five states) was obtained with 12,500 or more sample transductions.
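For concreteness, the sketch below (ours, not the corpus-generation code used for the experiments) derives surface forms by mechanically composing rules (3), (4) and (2) in that order; the phone spellings and the convention that stress is marked by a final digit, as in the CMU pronouncing dictionary, are assumptions of the illustration.

def stressed(p):    return p[-1] in "12"
def unstressed(p):  return p[-1] == "0"

def t_insertion(phones):                     # (3) 0 -> t / n __ s
    out = []
    for i, p in enumerate(phones):
        out.append(p)
        if p == "n" and i + 1 < len(phones) and phones[i + 1] == "s":
            out.append("t")
    return out

def t_deletion(phones):                      # (4) t -> 0 / n __ [+vocalic, -stress]
    return [p for i, p in enumerate(phones)
            if not (p == "t" and i > 0 and phones[i - 1] == "n"
                    and i + 1 < len(phones) and unstressed(phones[i + 1]))]

def flapping(phones):                        # (2) t -> dx / V' r* __ V (unstressed V follows)
    out = list(phones)
    for i, p in enumerate(out):
        if p != "t" or i + 1 >= len(out) or not unstressed(out[i + 1]):
            continue
        j = i - 1
        while j >= 0 and out[j] == "r":      # skip any intervening r's
            j -= 1
        if j >= 0 and stressed(out[j]):
            out[i] = "dx"
    return out

def surface(underlying):
    # the data sets were built by applying the rules one after another
    return flapping(t_deletion(t_insertion(underlying)))

print(surface("ih m p ao1 r t ah0 n s".split()))
# ['ih', 'm', 'p', 'ao1', 'r', 'dx', 'ah0', 'n', 't', 's']  (flapping and t-insertion)
print(surface("p l eh1 n t iy0".split()))
# ['p', 'l', 'eh1', 'n', 'iy0']                              (t-deletion)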
The effects of adding decision trees at each state of the machine for the composition of t-insertion, t-deletion and flapping are shown in Figure 14.

Figure 13: Results on Three Rules Composed

  Samples   OSTIA w/ Alignment
            States     % Error
  6250         6         0.93
  12500        5         0.20
  25000        5         0.09
  50000        5         0.04

Figure 14: Results on Three Rules Composed (12,500 Training, 49,280 Test)

  Method           States    % Error
  OSTIA              329      22.09
  Alignment            5       0.20
  Add D-trees          5       0.04
  Prune D-trees        5       0.01

Figure 15 shows the final transducer induced from this corpus of 12,500 words with pruned decision trees.

[Figure 15: Three Rule Transducer Induced from 12,500 Samples]

An examination of the few errors (three samples) in the induced flapping and three-rule transducers points out a flaw in our model. While the learned transducer correctly makes the generalization that flapping occurs after any stressed vowel, it does not flap after two stressed vowels in a row. This is possible because no samples containing two stressed vowels in a row (or separated by an 'r') immediately followed by a flap were in the training data. This transducer will flap a 't' after any odd number of stressed vowels, rather than simply after any stressed vowel. Such a rule seems quite unnatural phonologically, and makes for an odd context-sensitive rewrite rule. Any sort of simplest hypothesis criterion applied to a system of rewrite rules would prefer a rule conditioned simply on a preceding stressed vowel to a rule conditioned on an odd number of preceding stressed vowels, which is the equivalent of the transducer learned from the training data. This suggests that the traditional formalism of context-sensitive rewrite rules contains implicit generalizations about how phonological rules usually work that are not present in the transducer system. We hope that further experimentation will lead to a way of expressing this language bias in our induction system.

7 Related Work

Johnson (1984) gives one of the first computational algorithms for phonological rule induction. His algorithm works for rules of the form

(5) a → b / C

where C is the feature matrix of the segments around a. Johnson's algorithm sets up a system of constraint equations which C must satisfy, by considering both the positive contexts, i.e., all the contexts Ci in which a b occurs on the surface, as well as all the negative contexts Cj in which an a occurs on the surface. The set of all positive and negative contexts will not generally determine a unique rule, but will determine a set of possible rules.

Touretzky et al. (1990) extended Johnson's insight by using the version spaces algorithm of Mitchell (1981) to induce phonological rules in their Many Maps architecture. Like Johnson's, their system looks at the underlying and surface realizations of single segments. For each segment, the system uses the version space algorithm to search for the proper statement of the context.

Riley (1991) and Withgott & Chen (1993) first proposed a decision-tree approach to segmental mapping. A decision tree is induced for each phoneme, classifying possible realizations of the phoneme in terms of contextual factors such as stress and the surrounding phonemes. However, since the decision tree for each phoneme is learned separately, the technique misses generalizations about the behavior of similar phonemes. In addition, no generalizations are made about similar context phonemes.
In a transducer-based formalism, generalizations about similar context phonemes naturally follow from generalizations about individual phonemes' behavior, as the context is represented by the current state of the machine, which in turn depends on the behavior of the machine on the previous phonemes. We hope that our hybrid model will be more successful at learning long distance dependencies than the simple decision tree approach. To model long distance rules such as vowel harmony in a simple decision tree approach, one must add more distant phonemes to the features used to learn the decision tree. In a transducer, this information is represented in the current state of the transducer.

8 Conclusion

Inferring finite state transducers seems to hold promise as a method for learning phonological rules. Both of our initial augmentations of OSTIA to bias it toward phonological naturalness improve performance. Using information on the alignment between input and output strings allows the algorithm to learn more compact, more accurate transducers. The addition of decision trees at each state of the resulting transducer further improves accuracy and results in phonologically more natural transducers. We believe that further and more integrated uses of phonological naturalness, such as generalizing across similar phenomena at different states of the transducer, interleaving the merging of states and generalization of transitions, and adding memory to the model of transduction, could help even more.

Our current algorithm and most previous algorithms are designed for obligatory rules. These algorithms fail completely when faced with optional, probabilistic rules, such as flapping. This is the advantage of probabilistic approaches such as the Riley/Withgott approach. One area we hope to investigate is the generalization of our algorithm to probabilistic rules with probabilistic finite-state transducers, perhaps by augmenting PFST induction techniques such as Stolcke & Omohundro (1994) with insights from phonological naturalness.

Besides aiding in the development of a practical tool for learning phonological rules, our results point to the use of constraints from universal grammar as a strong factor in the machine and possibly human learning of natural language phonology.

Acknowledgments

Thanks to Jerry Feldman, Eric Fosler, Isabel Galiano-Ronda, Lauri Karttunen, Jose Oncina, Andreas Stolcke, and Gary Tajchman. This work was partially funded by ICSI.

References

JOHNSON, C. DOUGLAS. 1972. Formal Aspects of Phonological Description. The Hague: Mouton.
JOHNSON, MARK. 1984. A discovery procedure for certain phonological rules. In Proceedings of the Tenth International Conference on Computational Linguistics, 344-347, Stanford.
KARTTUNEN, LAURI. 1993. Finite-state constraints. In The Last Phonological Rule, ed. by John Goldsmith. University of Chicago Press.
MITCHELL, TOM M. 1981. Generalization as search. In Readings in Artificial Intelligence, ed. by Bonnie Lynn Webber & Nils J. Nilsson, 517-542. Los Altos: Morgan Kaufmann.
ONCINA, JOSÉ, PEDRO GARCÍA, & ENRIQUE VIDAL. 1993. Learning subsequential transducers for pattern recognition tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence 15.448-458.
RILEY, MICHAEL D. 1991. A statistical model for generating pronunciation networks. In IEEE ICASSP-91, 737-740.
STOLCKE, ANDREAS, & STEPHEN OMOHUNDRO. 1994. Best-first model merging for hidden Markov model induction. Technical Report TR-94-003, International Computer Science Institute, Berkeley, CA.
TOURETZKY, DAVID S., GILLETTE ELVGREN III, & DEIRDRE W. WHEELER. 1990. Phonological rule induction: An architectural solution. In Proceedings of the 12th Annual Conference of the Cognitive Science Society (COGSCI-90), 348-355.
WAGNER, R. A., & M. J. FISCHER. 1974. The string-to-string correction problem. Journal of the Association for Computing Machinery 21.168-173.
WITHGOTT, M. M., & F. R. CHEN. 1993. Computational Models of American Speech. Center for the Study of Language and Information.
| 1995 | 2 |
A Uniform Treatment of Pragmatic Inferences in Simple and Complex Utterances and Sequences of Utterances Daniel Marcu and Graeme Hirst Department of Computer Science University of Toronto Toronto, Ontario Canada M5S 1A4 {marcu, gh}©cs, toronto, edu Abstract Drawing appropriate defeasible infe- rences has been proven to be one of the most pervasive puzzles of natu- ral language processing and a recur- rent problem in pragmatics. This pa- per provides a theoretical framework, called stratified logic, that can ac- commodate defeasible pragmatic infe- rences. The framework yields an al- gorithm that computes the conversa- tional, conventional, scalar, clausal, and normal state implicatures; and the presuppositions that are associa- ted with utterances. The algorithm applies equally to simple and complex utterances and sequences of utteran- ces. 1 Pragmatics and Defeasibility It is widely acknowledged that a full account of na- tural language utterances cannot be given in terms of only syntactic or semantic phenomena. For ex- ample, Hirschberg (1985) has shown that in order to understand a scalar implicature, one must analyze the conversants' beliefs and intentions. To recognize normal state implicatures one must consider mutual beliefs and plans (Green, 1990). To understand con- versationM implicatures associated with indirect re- plies one must consider discourse expectations, dis- course plans, and discourse relations (Green, 1992; Green and Carberry, 1994). Some presuppositions are inferrable when certain lexical constructs (fac- tives, aspectuals, etc) or syntactic constructs (cleft and pseudo-cleft sentences) are used. Despite all the complexities that individualize the recognition stage for each of these inferences, all of them can be de- feated by context, by knowledge, beliefs, or plans of the agents that constitute part of the context, or by other pragmatic rules. Defeasibili~y is a notion that is tricky to deal with, and scholars in logics and pragmatics have learned to circumvent it or live with it. The first observers of the phenomenon preferred to keep defeasibility out- side the mathematical world. For Frege (1892), Rus- sell (1905), and Quine (1949) "everything exists"; therefore, in their logical systems, it is impossible to formalize the cancellation of the presupposition that definite referents exist (Hirst, 1991; Marcu and Hirst, 1994). We can taxonomize previous approa- ches to defea~ible pragmatic inferences into three ca- tegories (we omit here work on defeasibility related to linguistic phenomena such as discourse, anaphora, or speech acts). 1. Most linguistic approaches account for the de- feasibility of pragmatic inferences by analyzing them in a context that consists of all or some of the pre- vious utterances, including the current one. Con- text (Karttunen, 1974; Kay, 1992), procedural ru- les (Gazdar, 1979; Karttunen and Peters, 1979), lexical and syntactic structure (Weischedel, 1979), intentions (Hirschberg, 1985), or anaphoric cons- traints (Sandt, 1992; Zeevat, 1992) decide what pre- suppositions or implicatures are projected as prag- matic inferences for the utterance that is analyzed. The problem with these approaches is that they as- sign a dual life to pragmatic inferences: in the initial stage, as members of a simple or complex utterance, they are defeasible. However, after that utterance is analyzed, there is no possibility left of cancelling that inference. 
But it is natural to have implicatures and presuppositions that are inferred and cancelled as a sequence of utterances proceeds: research in conversation repairs (I-Iirst et M., 1994) abounds in such examples. We address this issue in more detail in section 3.3. 2. One way of accounting for cancellations that occur later in the analyzed text is simply to extend the boundaries within which pragmatic inferences are evaluated, i.e., to look ahead a few utterances. Green (1992) assumes that implicatures are connec- ted to discourse entities and not to utterances, but her approach still does not allow cancellations across discourse units. 3. Another way of allowing pragmatic inferences to be cancelled is to assign them the status of de- feasible information. Mercer (1987) formalizes pre- 144 suppositions in a logical framework that handles de- faults (Reiter, 1980), but this approach is not tracta- ble and it treats natural disjunction as an exclusive- or and implication as logical equivalence. Computational approaches fail to account for the cancellation of pragmatic inferences: once presuppo- sitions (Weischedel, 1979) or implicatures (Hirsch- berg, 1985; Green, 1992) are generated, they can never be cancelled. We are not aware of any forma- lism or computational approach that offers a unified explanation for the cancellability of pragmatic infe- rences in general, and of no approach that handles cancellations that occur in sequences of utterances. It is our aim to provide such an approach here. In doing this, we assume the existence, for each type of pragmatic inference, of a set of necessary conditi- ons that must be true in order for that inference to be triggered. Once such a set of conditions is met, the corresponding inference is drawn, but it is as- signed a defeasible status. It is the role of context and knowledge of the conversants to "decide" whe- ther that inference will survive or not as a pragma- tic inference of the structure. We put no boundaries upon the time when such a cancellation can occur, and we offer a unified explanation for pragmatic in- ferences that are inferable when simple utterances, complex utterances, or sequences of utterances are considered. We propose a new formalism, called "stratified logic", that correctly handles the pragmatic infe- rences, and we start by giving a very brief intro- duction to the main ideas that underlie it. We give the main steps of the algorithm that is defined on the backbone of stratified logic. We then show how different classes of pragmatic inferences can be cap- tured using this formalism, and how our algorithm computes the expected results for a representative class of pragmatic inferences. The results we report here are obtained using an implementation written in Common Lisp that uses Screamer (Siskind and McAllester, 1993), a macro package that provides nondeterministic constructs. 2 Stratified logic 2.1 Theoretical foundations We can offer here only a brief overview of stratified logic. The reader is referred to Marcu (1994) for a comprehensive study. Stratified logic supports one type of indefeasible information and two types of defeasible information, namely, infelicitously defea- sible and felicitously defeasible. The notion of infe- licitously defeasible information is meant to capture inferences that are anomalous to cancel, as in: (1) * John regrets that Mary came to the party but she did not come. 
The notion of felicitously defeasible information is meant to capture the inferences that can be cancel- led without any abnormality, as in: T d ..L d T' _k' T" _L" Felicitously Defea.sible Layer Infelicitously Defeasible Layer Undefeasible Layer Figure 1: The lattice that underlies stratified logic (2) John does not regret that Mary came to the party because she did not come. The lattice in figure 1 underlies the semantics of stratified logic. The lattice depicts the three levels of strength that seem to account for the inferences that pertain to natural language semantics and pragma- tics: indefeasible information belongs to the u layer, infelicitously defeasible information belongs to the i layer, and felicitously defeasible information be- longs to the d layer. Each layer is partitioned accor- ding to its polarity in truth, T ~, T i, T a, and falsity, .L =, .l J, .1_ d. The lattice shows a partial order that is defined over the different levels of truth. For exam- ple, something that is indefeasibly false, .l_ u, is stron- ger (in a sense to be defined below) than something that is infelicitously defeasibly true, T i, or felici- tously defeasibly false, .L a. Formally, we say that the u level is stronger than the i level, which is stronger than the d level: u<i<d. At the syntactic level, we allow atomic formulas to be labelled according to the same underlying lattice. Compound formulas are obtained in the usual way. This will give us formu- las such as regrets u ( John, come(Mary, party)) ---, cornel(Mary, party)), or (Vx)('-,bachelorU(x) --~ (malea( ) ^ The satisfaction relation is split according to the three levels of truth into u-satisfaction, i-satisfaction, and d-satisfaction: Definition 2.1 Assume ~r is an St. valuation such that t~ = di E • and assume that St. maps n-ary predicates p to relations R C 7~ × ... × 79. For any atomic formula p=(tl, t2,... ,t,), and any stratified valuation a, where z E {u, i, d} and ti are terms, the z-satisfiability relations are defined as follows: • a ~u p~(tl,...,tn) iff(dx,...,dnl E 1~ ~ • iff (dl,...,dn) E R u UR--ffUR i • o, ~u pa(tx,...,t,) iff (dz,..., d,) E R"U'R-¢URIU-~URa • tr ~ip~(tl,...,t, ) iff(dt,...,d,) E R i . cr ~ipi(tt,...,t,) iff(dl,...,d,) E R i • pd(tl,...,t,) ig (dl,...,d,) E R i U~TUR d • o" ~ap~(tz,...,tn) iff(dl,...,dn) E R a • ¢(tl,...,t,) iff (all,..., d,) e R d 145 • o" ~d pd(tl,...,tn ) iff (di,...,dr,) C= R d Definition 2.1 extends in a natural way to negated and compound formulas. Having a satisfaction de- finition associated with each level of strength provi- des a high degree of flexibility. The same theory can be interpreted from a perspective that allows more freedom (u-satisfaction), or from a perspective that is tighter and that signals when some defeasible in- formation has been cancelled (i- and d-satisfaction). Possible interpretations of a given set of utteran- ces with respect to a knowledge base are computed using an extension of the semantic tableau method. This extension has been proved to be both sound and complete (Marcu, 1994). A partial ordering, <, determines the set of optimistic interpretations for a theory. An interpretation m0 is preferred to, or is more optimistic than, an interpretation ml (m0 < ml) if it contains more information and that information can be more easily updated in the fu- ture. 
That means that if an interpretation m0 makes an utterance true by assigning to a relation R a defensible status, while another interpretation ml makes the same utterance true by assigning the same relation R a stronger status, m0 will be the preferred or optimistic one, because it is as informative as mi and it allows more options in the future (R can be defeated). Pragmatic inferences are triggered by utterances. To differentiate between them and semantic infe- rences, we introduce a new quantifier, V vt, whose semantics is defined such that a pragmatic inference of the form (VVtg)(al(,7) --* a2(g)) is instantiated only for those objects t' from the universe of dis- course that pertain to an utterance having the form al(~- Hence, only if the antecedent of a pragma- tic rule has been uttered can that rule be applied. A recta-logical construct uttered applies to the logi- cal translation of utterances. This theory yields the following definition: Definition 2.2 Let ~b be a theory described in terms of stratified first-order logic that appropriately for- malizes the semantics of lezical items and the ne- cessary conditions that trigger pragmatic inferences. The semantics of lezical terms is formalized using the quantifier V, while the necessary conditions that pertain to pragmatic inferences are captured using V trt. Let uttered(u) be the logical translation of a given utterance or set of utterances. We say that ut- terance u pragmatically implicates p if and only if p d or p i is derived using pragmatic inferences in at least one optimistic model of the theory ~ U uttered(u), and if p is not cancelled by any stronger informa- tion ('.p~,-.pi _.pd) in any optimistic model schema of the theory. Symmetrically, one can define what a negative pragmatic inference is. In both cases, W uttered(u) is u-consistent. 2.2 The algorithm Our algorithm, described in detail by Marcu (1994), takes as input a set of first-order stratified formu- las • that represents an adequate knowledge base that expresses semantic knowledge and the necessary conditions for triggering pragmatic inferences, and the translation of an utterance or set of utterances uttered(u). The Mgorithm builds the set of all possi- ble interpretations for a given utterance, using a ge- neralization of the semantic tableau technique. The model-ordering relation filters the optimistic inter- pretations. Among them, the defeasible inferences that have been triggered on pragmatic grounds are checked to see whether or not they are cancelled in any optimistic interpretation. Those that are not cancelled are labelled as pragmatic inferences for the given utterance or set of utterances. 3 A set of examples We present a set of examples that covers a repre- sentative group of pragmatic inferences. In contrast with most other approaches, we provide a consistent methodology for computing these inferences and for determining whether they are cancelled or not for all possible configurations: simple and complex ut- terances and sequences of utterances. 3.1 Simple pragmatic inferences 3.1.1 Lexical pragmatic inferences A factive such as the verb regret presupposes its complement, but as we have seen, in positive envi- ronments, the presupposition is stronger: it is accep- table to defeat a presupposition triggered in a nega- tive environment (2), but is infelicitous to defeat one that belongs to a positive environment (1). There- fore, an appropriate formalization of utterance (3) and the req~fisite pragmatic knowledge will be as shown in (4). 
(3) John does not regret that Mary came to the party. (4) uttered(-,regrets u (john, come( ,,ry, party))) (VU'=, y, z)(regras (=, come(y, co e i (y, z) ) (Vu'=, y, z)( regret," (=, come(y, z)) -* corned(y, z) ) The stratified semantic tableau that corresponds to theory (4) is given in figure 2. The tableau yields two model schemata (see figure 3); in both of them, it is defeasibly inferred that Mary came to the party. The model-ordering relation < establishes m0 as the optimistic model for the theory because it contains as much information as ml and is easier to defeat. Model m0 explains why Mary came to the party is a presupposition for utterance (3). 146 "~regrets(john, come(mary, party)) (Vx, y, z)(-~regrets(x, come(y, z) ) ---* corned(y, z) ) (Vx, y, z)(regrets(x, come(y, z)) --* comei(y, z)) I -.regrets(john, come(mary, party)) -- corned(mary, party) regrets(john, come(mary,party)) --* comei(mary, party) regrets(john, come(mary, party)) corned(mary, party) u-closed -.regrets(john, come(mary, party)) come i(mary, party) m_0 mL1 Figure 2: Stratified tableau for John does not regret that Mary came to the party. Schema # Indefeasible Infelicitously defeasible ",regrets ~ (john, come(mary, party) -.regTets ~(joh., come(mary, party) mo ml come ~ ( mary, party) Felicitously defeasible corned(mary, party) cornea(mary, party) Figure 3: Model schemata for John does not regret that Mary came to the party. Schema # Indefeasible mo went"( some( boys ), theatre) -.went"( all( boys ), theatre) Infelicitously Felicitously defeasible de feasible -',wentd( most( boys ), theatre) -.wentd( many( boys ), theatre) -,wentd(all(boys), theatre) Figure 4: Model schema for John says that some of thc boys went to the theatre. Schema # Indefeasible In]elicitously Felicitously de]easible de feasible mo we,,t"( some(boy,), theatre) ,oe,,t" ( most( boys ), theatre) went~(many(boys), theatre) went~(all(boys), theatre) d ".went (most(boys),theatre) d -.went (many(boys), theatre) -~wentd(all(boys), theatre) Figure 5: Model schema for John says that some of the boys went to the theatre. In fact all of them went to the theatre. 147 3.1.2 Scalar implicatures Consider utterance (5), and its implicatu- res (6). (5) John says that some of the boys went to the theatre. (6) Not {many/most/all} of the boys went to the theatre. An appropriate formalization is given in (7), where the second formula captures the defeasible scalar im- plicatures and the third formula reflects the relevant semantic information for all. (r) uttered(went(some(boys), theatre)) went" (some(boys), theatre) ---* (-~wentd(many(boys), theatre)A ",wentd(most(boys), theatre)^ -~wentd(aii(boys), theatre)) went" (all(boys), theatre) (went" (most(boys), theatre)A went" (many(boys), theatre)^ went"( some(boys), theatre) ) The theory provides one optimistic model schema (figure 4) that reflects the expected pragmatic in- ferences, i.e., (Not most/Not many/Not all) of the boys went to the theatre. 3.1.3 Simple cancellation Assume now, that after a moment of thought, the same person utters: (8) John says that some of the boys went to the theatre. In fact all of them went to the thea- tre. By adding the extra utterance to the initial theory (7), uttered(went(ail(boys),theatre)), one would obtain one optimistic model schema in which the conventional implicatures have been cancelled (see figure 5). 3.2 Complex utterances The Achilles heel for most theories of presupposition has been their vulnerability to the projection pro- blem. 
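The cancellation pattern in (5)-(8) above can be made concrete with a small sketch (ours, not the authors' Common Lisp/Screamer system): facts carry one of the three strengths u > i > d, and a defeasible inference survives only if no stronger contradicting fact appears in any optimistic interpretation. The proposition strings and the simplified survival test are assumptions of the illustration.

STRENGTH = {"u": 3, "i": 2, "d": 1}     # indefeasible > infelicitously > felicitously defeasible

def survives(inference, interpretations):
    """inference = (proposition, polarity, strength); it is cancelled if some
    optimistic interpretation contains the opposite polarity at equal or
    greater strength."""
    prop, pol, s = inference
    for facts in interpretations:
        for (p, q, t) in facts:
            if p == prop and q != pol and STRENGTH[t] >= STRENGTH[s]:
                return False
    return True

# (5) "John says that some of the boys went to the theatre."
scalar = ("went(all(boys),theatre)", False, "d")        # implicature: not all of them went
model5 = [[("went(some(boys),theatre)", True, "u"), scalar]]
print(survives(scalar, model5))                         # True: implicature projected

# (8) adds "In fact all of them went to the theatre."
model8 = [[("went(some(boys),theatre)", True, "u"),
           ("went(all(boys),theatre)", True, "u")]]     # stronger, contradicting fact
print(survives(scalar, model8))                         # False: implicature cancelled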
Our solution for the projection problem does not differ from a solution for individual utterances. Consider the following utterances and some of their associated presuppositions (11) (the symbol t> pre- cedes an inference drawn on pragmatic grounds): (9) Either Chris is not a bachelor or he regrets that Mary came to the party. (10) Chris is a bachelor or a spinster. (11) 1> Chris is a (male) adult. Chris is not a bachelor presupposes that Chris is a male adult; Chris regrets that Mary came to the party presupposes that Mary came to the party. There is no contradiction between these two presuppositions, so one would expect a conversant to infer both of them if she hears an utterance such as (9). Howe- ver, when one examines utterance (10), one observes immediately that there is a contradiction between the presuppositions carried by the individual com- ponents. Being a bachelor presupposes that Chris is a male, while being a spinster presupposes that Chris is a female. Normally, we would expect a con- versant to notice this contradiction and to drop each of these elementary presuppositions when she inter- prets (10). We now study how stratified logic and the model- ordering relation capture one's intuitions. 3.2.1 Or-- non-cancellation An appropriate formalization for utterance (9) and the necessary semantic and pragmatic know- ledge is given in (12). (12) l uttered(-~bachelor(Chris)V regret(Chris, come(Mary, party))) (- bachelor" (Chris)V regret" (Chris, come(Mary, party))) -~(-~bachelord( Chris)A regret d( chris, come(Mary, party))) --,male(Mary) (Vx )( bachelor" ( x ) .--+ I male"(x) A adultU(z) A "-,married"(x)) (VUtx)(-4bachelorU(=) --~ marriedi(x)) (vUt x )(-~bachelor"( x ) --~ adulta( x ) ) (vu'x)(--,bachelorU(x) .-, maled(=)) y, z)(- regret"(=, come(y, z) ) cored(y, ,)) (vv'=, y, z )( regret" ( =, ome(y, ) ) - come i (y, z ) ) Besides the translation of the utterance, the initial theory contains a formalization of the defeasible im- plicature that natural disjunction is used as an exclu- sive or, the knowledge that Mary is not a name for males, the lexical semantics for the word bachelor, and the lexical pragmatics for bachelor and regret. The stratified semantic tableau generates 12 model schemata. Only four of them are kept as optimistic models for the utterance. The models yield Mary came to the party; Chris is a male; and Chris is an adult as pragmatic inferences of utterance (9). 3.2.2 Or- cancellation Consider now utterance (10). The stratified se- mantic tableau that corresponds to its logical theory yields 16 models, but only Chris is an adult satisfies definition 2.2 and is projected as presupposition for the utterance. 3.3 Pragmatic inferences in sequences of utterances We have already mentioned that speech repairs con- stitute a good benchmark for studying the genera- 148 tion and cancellation of pragmatic inferences along sequences of utterances (McRoy and Hirst, 1993). Suppose, for example, that Jane has two friends -- John Smith and John Pevler -- and that her room- mate Mary has met only John Smith, a married fel- low. Assume now that Jane has a conversation with Mary in which Jane mentions only the name John because she is not aware that Mary does not know about the other John, who is a five-year-old boy. In this context, it is natural for Mary to become confu- sed and to come to wrong conclusions. For example, Mary may reply that John is not a bachelor. 
Alt- hough this is true for both Johns, it is more appro- priate for the married fellow than for the five-year- old boy. Mary knows that John Smith is a married male, so the utterance makes sense for her. At this point Jane realizes that Mary misunderstands her: all the time Jane was talking about John Pevler, the five-year-old boy. The utterances in (13) constitute a possible answer that Jane may give to Mary in order to clarify the problem. (13) a. No, John is not a bachelor. b. I regret that you have misunderstood me. c. He is only five years old. The first utterance in the sequence presuppo- ses (14). (14) I> John is a male adult. Utterance (13)b warns Mary that is very likely she misunderstood a previous utterance (15). The war- ning is conveyed by implicature. (15) !> The hearer misunderstood the speaker. At this point, the hearer, Mary, starts to believe that one of her previous utterances has been elabo- rated on a false assumption, but she does not know which one. The third utterance (13)c comes to cla- rify the issue. It explicitly expresses that John is not an adult. Therefore, it cancels the early presupposi- tion (14): (16) ~ John is an adult. Note that there is a gap of one statement between the generation and the cancellation of this presup- position. The behavior described is mirrored both by our theory and our program. 3.4 Conversational implicatures in indirect replies The same methodology can be applied to mode- ling conversational impIicatures in indirect replies (Green, 1992). Green's algorithm makes use of dis- course expectations, discourse plans, and discourse relations. The following dialog is considered (Green, 1992, p. 68): (17) Q: Did you go shopping? A: a. My car's not running. b. The timing belt broke. c. (So) I had to take the bus. Answer (17) conveys a "yes", but a reply consisting only of (17)a would implicate a "no". As Green no- tices, in previous models of implicatures (Gazdar, 1979; Hirschberg, 1985), processing (17)a will block the implicature generated by (17)c. Green solves the problem by extending the boundaries of the analysis to discourse units. Our approach does not exhibit these constraints. As in the previous example, the one dealing with a sequence of utterances, we obtain a different interpretation after each step. When the question is asked, there is no conversational impli- cature. Answer (17)a makes the necessary conditi- ons for implicating "no" true, and the implication is computed. Answer (17)b reinforces a previous con- dition. Answer (17)c makes the preconditions for implicating a "no" false, and the preconditions for implicating a "yes" true. Therefore, the implicature at the end of the dialogue is that the conversant who answered went shopping. 4 Conclusions Unlike most research in pragmatics that focuses on certain types of presuppositions or implicatures, we provide a global framework in which one can ex- press all these types of pragmatic inferences. Each pragmatic inference is associated with a set of ne- cessary conditions that may trigger that inference. When such a set of conditions is met, that infe- rence is drawn, but it is assigned a defeasible status. An extended definition of satisfaction and a notion of "optimism" with respect to different interpreta- tions yield the preferred interpretations for an ut- terance or sequences of utterances. These interpre- tations contain the pragmatic inferences that have not been cancelled by context or conversant's know- ledge, plans, or intentions. 
The formalism yields an algorithm that has been implemented in Common Lisp with Screamer. This algorithm computes uni- formly pragmatic inferences that are associated with simple and complex utterances and sequences of ut- terances, and allows cancellations of pragmatic infe- rences to occur at any time in the discourse. Acknowledgements This research was supported in part by a grant from the Natural Sciences and Engineering Research Council of Canada. 149 References G. Frege. 1892. 0bet sinn und bedeutung. Zeit- schrift fiir Philos. und Philos. Kritik, 100:373-394. reprinted as: On Sense and Nominatum, In Feigl H. and Sellars W., editors, Readings in Philoso- phical Analysis, pages 85-102, Appleton-Century- Croft, New York, 1947. G.J.M. Gazdar. 1979. Pragmatics: Implicature, Presupposition, and Logical Form. Academic Press. N. Green and S. Carberry. 1994. A hybrid reasoning model for indirect answers. In Proceedings 3Pnd Annual Meeting of the Association for Computa- tional Linguistics, pages 58-65. N. Green. 1990. Normal state implicature. In Pro- ceedings 28th Annual Meeting of the Association for Computational Linguistics, pages 89-96. N. Green. 1992. Conversational implicatures in in- direct replies. In Proceedings 30th Annual Meeting of the Association for Computational Linguistics, pages 64-71. J.B. Hirschberg. 1985. A theory of scalar impli- cature. Technical Report MS-CIS-85-56, Depart- ment of Computer and Information Science, Uni- versity of Pennsylvania. Also published by Gar- land Publishing Inc., 1991. G. Hirst, S. McRoy, P. Heeman, P. Edmonds, and D. Horton. 1994. Repairing conversational mi- sunderstandings and non-understandings. Speech Communication, 15:213-229. G. Hirst. 1991. Existence assumptions in knowledge representation. Artificial Intelligence, 49:199-242. L. Karttunen and S. Peters. 1979. Conventional im- plicature. In Oh C.K. and Dinneen D.A, editors, Syntaz and Semantics, Presupposition, volume 11, pages 1-56. Academic Press. L. Karttunen. 1974. Presupposition and linguistic context. Theoretical Linguistics, 1:3-44. P. Kay. 1992. The inheritance of presuppositions. Linguistics £4 Philosophy, 15:333-379. D. Marcu. 1994. A formalism and an algorithm for computing pragmatic inferences and detecting infelicities. Master's thesis, Dept. of Computer Science, University of Toronto, September. Also published as Technical Report CSRI-309, Com- puter Systems Research Institute, University of Toronto. D. Marcu and G. Hirst. 1994. An implemented for- malism for computing linguistic presuppositions and existential commitments. In H. Bunt, R. Mus- kens, and G. Rentier, editors, International Work- shop on Computational Semantics, pages 141-150, December. S. McRoy and G. Hirst. 1993. Abductive expla- nation of dialogue misunderstandings. In Pro- ceedings, 6th Conference of the European Chapter of the Association for Computational Linguistics, pages 277-286, April. R.E. Mercer. 1987. A Default Logic Approach to the Derivation of Natural Language Presuppositions. Ph.D. thesis, Department of Computer Science, University of British Columbia. W.V.O. Quine. 1949. Designation and existence. In Feigl H. and Sellars W., editors, Readings in Philosophical Analysis, pages 44-51. Appleton- Century-Croft, New York. R. Reiter. 1980. A logic for default reasoning. Ar- tificial Intelligence, 13:81-132. B. Russell. 1905. On denoting. Mind n.s., 14:479- 493. reprinted in: Feigl H. and Sellars W. editors, Readings in Philosophical Analysis, pages 103- 115. 
Applcton-Century-Croft, New York, 1949. R.A. van der Sandt. 1992. Presupposition projec- tion as anaphora resolution. Journal of Seman- tics, 9:333-377. J.M. Siskind and D.A. McAllester. 1993. Screamer: A portable efficient implementation of nondeter- ministic Common Lisp. Technical Report IRCS- 93-03, University of Pennsylvania, Institute for Research in Cognitive Science, July 1. R.M. Weischedel. 1979. A new semantic compu- tation while parsing: Presupposition and entail- ment. In Oh C.K. and Dinneen D.A, editors, Syn- ta~ and Semantics, Presupposition, volume 11, pa- ges 155-182. Academic Press. H. Zeevat. 1992. Presupposition and accommoda- tion in update semantics. Journal of Semantics, 9:379-412. 150 | 1995 | 20 |
D-Tree Grammars Owen Rambow CoGenTex, Inc. 840 Hanshaw Road Ithaca, NY 14850 owen@cogent ex. com K. Vijay-Shanker Department of Computer Information Science University of Delaware Newark, DE 19716 vii ay©udel, edu David Weir School of Cognitive & Computing Sciences University of Sussex Brighton, BN1 9HQ, UK. david, weir~cogs, susx. ac. uk Abstract DTG are designed to share some of the advantages of TAG while overcoming some of its limitations. DTG involve two com- position operations called subsertion and sister-adjunction. The most distinctive fea- ture of DTG is that, unlike TAG, there is complete uniformity in the way that the two DTG operations relate lexical items: subsertion always corresponds to comple- mentation and sister-adjunction to modi- fication. Furthermore, DTG, unlike TAG, can provide a uniform analysis for wh- movement in English and Kashmiri, des- pite the fact that the wh element in Kash- miri appears in sentence-second position, and not sentence-initial position as in Eng- lish. 1 Introduction We define a new grammar formalism, called D-Tree Grammars (DTG), which arises from work on Tree- Adjoining Grammars (TAG) (Joshi et al., 1975). A salient feature of TAG is the extended domain of lo- cality it provides. Each elementary structure can be associated with a lexical item (as in Lexicalized TAG (LTAG) (Joshi ~ Schabes, 1991)). Properties related to the lexical item (such as subcategoriza- tion, agreement, certain types of word order varia- tion) can be expressed within the elementary struc- ture (Kroch, 1987; Frank, 1992). In addition, TAG remain tractable, yet their generative capacity is suf- ficient to account for certain syntactic phenomena that, it has been argued, lie beyond Context-Free Grammars (CFG) (Shieber, 1985). TAG, however, has two limitations which provide the motivation for this work. The first problem (discussed in Section 1.1) is that the TAG operations of substitution and ad- junction do not map cleanly onto the relations of complementation and modification. A second pro- blem (discussed in Section 1.2) has to do with the inability of TAG to provide analyses for certain syn- tactic phenomena. In developing DTG we have tried to overcome these problems while remaining faith- ful to what we see as the key advantages of TAG (in particular, its enlarged domain of locality). In Sec- tion 1.3 we introduce some of the key features of DTG and explain how they are intended to address the problems that we have identified with TAG. 1,1 Derivations and Dependencies In LTAG, the operations of substitution and adjunc- tion relate two lexical items. It is therefore natural to interpret these operations as establishing a di- rect linguistic relation between the two lexical items, namely a relation of complementation (predicate- argument relation) or of modification. In purely CFG-based approaches, these relations are only im- plicit. However, they represent important linguistic intuition, they provide a uniform interface to se- mantics, and they are, as Schabes ~ Shieber (1994) argue, important in order to support statistical pa- rameters in stochastic frameworks and appropriate adjunction constraints in TAG. In many frameworks, complementation and modification are in fact made explicit: LFG (Bresnan & Kaplan, 1982) provides a separate functional (f-) structure, and dependency grammars (see e.g. Mel'~uk (1988)) use these no- tions as the principal basis for syntactic represen- tation. 
We will follow the dependency literature in referring to complementation and modification as syntactic dependency. As observed by Rambow and Joshi (1992), for TAG, the importance of the dependency structure means that not only the deri- ved phrase-structure tree is of interest, but also the operations by which we obtained it from elementary structures. This information is encoded in the deri- vation tree (Vijay-Shanker, 1987). However, as Vijay-Shanker (1992) observes, the TAG composition operations are not used uniformly: while substitution is used only to add a (nominal) complement, adjunction is used both for modifica- tion and (clausal) complementation. Clausal com- plementation could not be handled uniformly by substitution because of the existence of syntactic phenomena such as long-distance wh-movement in English. Furthermore, there is an inconsistency in 151 the directionality of the operations used for comple- mentation in TAG@: nominal complements are sub- stituted into their governing verb's tree, while the governing verb's tree is adjoined into its own clausal complement. The fact that adjunction and substitu- tion are used in a linguistically heterogeneous man- ner means that (standard) "lAG derivation trees do not provide a good representation of the dependen- cies between the words of the sentence, i.e., of the predicate-argument and modification structure. adore S ~ adore Mary / OBJ\ seem hotdog claim S U ~ [MOD I sUBJ Mary / OBJ \ seem spicy he hotdog claim I MOD MOD~MOD I SUBJ small spicy small he Figure 1: Derivation trees for (1): original definition (left); Schabes & Shieber definition (right) For instance, English sentence (1) gets the deriva- tion structure shown on the left in Figure 11 . (1) Small spicy hotdogs he claims Mary seems to adore When comparing this derivation structure to the dependency structure in Figure 2, the following pro- blems become apparent. First, both adjectives de- pend on hotdog, while in the derivation structure small is a daughter of spicy. In addition, seem de- pends on claim (as does its nominal argument, he), and adore depends on seem. In the derivation struc- ture, seem is a daughter of adore (the direction does not express the actual dependency), and claim is also a daughter of adore (though neither is an argument of the other). claim SUB J~"~OMP he seem I COMP adore SUB~BJ Mary hotdog MOD ~.~OD spicy small Figure 2: Dependency tree for (1) Schabes & Shieber (1994) solve the first problem 1For clarity, we depart from standard TAG notational practice and annotate nodes with lexemes and arcs with grammatical function: by distinguishing between the adjunction of modi- fiers and of clausal complements. This gives us the derivation structure shown on the right in Figure 1. While this might provide a satisfactory treatment of modification at the derivation level, there are now three types of operations (two adjunctions and sub- stitution) for two types of dependencies (arguments and modifiers), and the directionality problem for embedded clauses remains unsolved. In defining DTG we have attempted to resolve these problems with the use of a single operation (that we call subsertion) for handling Ml comple- mentation and a second operation (called sister- adjunction) for modification. Before discussion these operations further we consider a second pro- blem with TAG that has implications for the design of these new composition operations (in particular, subsertion). 
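The mismatch can be made concrete with a small illustration of our own, based on Figures 1 and 2: both structures are written as child -> (head, relation) maps, and the words whose derivation attachment differs from their dependency head are printed.

# dependency structure of Figure 2
dependencies = {
    "he": ("claim", "SUBJ"), "seem": ("claim", "COMP"),
    "adore": ("seem", "COMP"), "Mary": ("adore", "SUBJ"),
    "hotdog": ("adore", "OBJ"), "spicy": ("hotdog", "MOD"),
    "small": ("hotdog", "MOD"),
}

# Schabes & Shieber-style derivation tree of Figure 1 (right), read as
# "X was substituted or adjoined into Y"
derivation = {
    "Mary": ("adore", "SUBJ"), "hotdog": ("adore", "OBJ"),
    "seem": ("adore", "adjunction"), "claim": ("adore", "adjunction"),
    "he": ("claim", "SUBJ"),
    "spicy": ("hotdog", "MOD"), "small": ("hotdog", "MOD"),
}

for word, (head, rel) in sorted(dependencies.items()):
    attach = derivation.get(word, ("(root)", None))[0]
    if attach != head:
        print(f"{word}: depends on {head} but attaches to {attach} in the derivation tree")

Only "adore" and "seem" are reported, which is exactly the residual directionality problem for clausal complementation discussed above.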
1.2 Problematic Constructions for TAG TAG cannot be used to provide suitable analyses for certain syntactic phenomena, including long- distance scrambling in German (Becket et hi., 1991), Romance Clitics (Bleam, 1994), wh-extraction out of complex picture-NPs (Kroch, 1987), and Kashmiri wh-extraction (presented here). The problem in de- scribing these phenomena with TAG arises from the fact (observed by Vijay-Shanker (1992)) that adjoi- ning is an overly restricted way of combining structu- res. We illustrate the problem by considering Kash- miri wh-extraction, drawing on Bhatt (1994). Wh- extraction in Kashmiri proceeds as in English, ex- cept that the wh-word ends up in sentence-second position, with a topic from the matrix clause in sentence-initial position. This is illustrated in (2a) for a simple clause and in (2b) for a complex clause. (2) a. rameshan kyaa dyutnay tse RameshzRG whatNOM gave yOUDAT What did you give Ramesh? b. rameshan kyaal chu baasaan [ ki RameshzRG what is believeNperf that me kor ti] IZRG do What does Ramesh beheve that I did? Since the moved element does not appear in sentence-initial position, the TAG analysis of English wh-extraction of Kroch (1987; 1989) (in which the matrix clause is adjoined into the embedded clause) cannot be transferred, and in fact no linguistically plausible TAG analysis appears to be available. In the past, variants of TAG have been develo- ped to extend the range of possible analyses. In Multi-Component TAG (MCTAG) (Joshi, 1987), trees are grouped into sets which must be adjoined to- gether (multicomponent adjunction). However, MC- TAG lack expressive power since, while syntactic re- lations are invariably subject to c-command or do- minance constraints, there is no way to state that 152 two trees from a set must be in a dominance rela- tion in the derived tree. MCTAG with Domination Links (MCTAG-DL) (Becker et al., 1991) are multi- component systems that allow for the expression of dominance constraints. However, MCTAG-DL share a further problem with MCTAG: the derivation struc- tures cannot be given a linguistically meaningful in- terpretation. Thus, they fail to address the first pro- blem we discussed (in Section 1.1). 1.3 The DTG Approach Vijay-Shanker (1992) points out that use of ad- junction for clausal complementation in TAG corre- sponds, at the level of dependency structure, to sub- stitution at the foot node s of the adjoined tree. Ho- wever, adjunction (rather than substitution) is used since, in general, the structure that is substituted may only form part of the clausal complement: the remaining substructure of the clausal complement appears above the root of the adjoined tree. Un- fortunately, as seen in the examples given in Sec- tion 1.2, there are cases where satisfactory analyses cannot be obtained with adjunction. In particular, using adjunction in this way cannot handle cases in which parts of the clausal complement are required to be placed within the structure of the adjoined tree. The DTG operation of subsertion is designed to overcome this limitation. Subsertion can be viewed as a generalization of adjunction in which com- ponents of the clausal complement (the subserted structure) which are not substituted can be inters- persed within the structure that is the site of the subsertion. 
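A deliberately simplified sketch (ours, ahead of the formal definition in Section 2) may help picture subsertion: a d-tree is modelled here as a plain chain of components separated by d-edges, and subserting one d-tree into another substitutes its lowest component at the host's substitution site while the components above it are inserted higher up in the host. The component labels and the choice to stack all carried-up material into the d-edge just above the site are assumptions of the illustration, not the formalism itself.

from dataclasses import dataclass
from typing import List

@dataclass
class DTree:
    # components[0] is the topmost component; consecutive components are
    # separated by d-edges (domination rather than immediate domination)
    components: List[str]
    site: int = -1   # index of the component containing the substitution node

def subsert(host: DTree, arg: DTree) -> DTree:
    """Substitute arg's lowest component at the host's substitution site; the
    remaining components of arg must dominate it, and here they are all
    inserted into the d-edge immediately above the site (one legal choice
    among the insertions the formalism allows)."""
    substituted = arg.components[-1]
    carried_up = arg.components[:-1]
    new = (host.components[:host.site]
           + carried_up
           + [host.components[host.site] + " <- " + substituted]
           + host.components[host.site + 1:])
    return DTree(new)

seems = DTree(["S component of 'seems' (subject position)",
               "VP[fin:-] component with substitution node"], site=1)
adore = DTree(["component supplying the subject of 'to adore'",
               "VP[fin:-] component anchored by 'to adore'"])
print(subsert(seems, adore).components)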
Following earlier work (Becket et al., 1991; Vijay-Shanker, 1992), DTG provide a mecha- nism involving the use of domination links (d-edges) that ensure that parts of the subserted structure that are not substituted dominate those parts that are. Furthermore, there is a need to constrain the way in which the non-substituted components can be interspersed 3. This is done by either using ap- propriate feature constraints at nodes or by means of subsertion-insertion constraints (see Section 2). We end this section by briefly commenting on the other DTG operation of sister-adjunction. In TAG, modification is performed with adjunction of modi- fier trees that have a highly constrained form. In particular, the foot nodes of these trees are always daughters of the root and either the leftmost or rightmost frontier nodes. The effect of adjoining a 2In these cases the foot node is an argument node of the lexical anchor. SThis was also observed by Rambow (1994a), where an integrity constraint (first defined for an tD/LP version of TAG (Becket et aJ., 1991)) is defined for a MCTAG-DL version called V-TAG. However, this was found to be in- sufficient for treating both long-distance scrambling and long-distance topicalization in German. V-TAG retains adjoining (to handle topicalization) for this reason. tree of this form corresponds (almost) exactly to the addition of a new (leftmost or rightmost) subtree below the node that was the site of the adjunction. For this reason, we have equipped DTG with an ope- ration (sister-adjunction) that does exactly this and nothing more. From the definition of DTG in Sec- tion 2 it can be seen that the essential aspects of Schabes & Shieber (1994) treatment for modifica- tion, including multiple modifications of a phrase, can be captured by using this operation 4. After defining DTG in Section 2, we discuss, in Section 3, DTG analyses for the English and Kash- miri data presented in this section. Section 4 briefly discusses DTG recognition algorithms. 2 Definition of D-Tree Grammars A d-tree is a tree with two types of edges: domi- nation edges (d-edges) and immediate domination edges (i-edges). D-edges and i-edges express domi- nation and immediate domination relations between nodes. These relations are never rescinded when d- trees are composed. Thus, nodes separated by an i-edge will remain in a mother-daughter relationship throughout the derivation, whereas nodes separated by an d-edge can be equated or have a path of any length inserted between them during a derivation. D-edges and i-edges are not distributed arbitrarily in d-trees. For each internal node, either all of its daughters are linked by i-edges or it has a single daughter that is linked to it by a d-edge. Each node is labelled with a terminal symbol, a nonterminal symbol or the empty string. A d-tree containing n d-edges can be decomposed into n + 1 components containing only i-edges. D-trees can be composed using two operations: subsertion and sister-adjunction. When a d-tree a is subserted into another d-tree/3, a component of a is substituted at a frontier nonterminal node (a substitution node) of/3 and all components of a that are above the substituted component are in- serted into d-edges above the substituted node or placed above the root node. For example, consider the d-trees a and /3 shown in Figure 3. Note that components are shown as triangles. In the compo- sed d-tree 7 the component a(5) is substituted at a substitution node in /3. 
The components, a(1), a(2), and a(4) of a above a(5) drift up the path in/3 which runs from the substitution node. These components are then inserted into d-edges in/3 or above the root of/3. In general, when a component c~(i) of some d-tree a is inserted into a d-edge bet- ween nodes ~/1 and r/2 two new d-edges are created, the first of which relates r/t and the root node of a(i), and the second of which relates the frontier 4Santorini and Mahootian (1995) provide additional evidence against the standard TAG approach to modifi- cation from code switching data, which can be accounted for by using sister-adjunction. 153 a = ~ insertion [ t ~ insertion [ i i ! ~ substitution p i ! t ! Figure 3: Subsertion node of a(i) that dominates the substituted com- ponent to T/2. It is possible for components above the substituted node to drift arbitrarily far up the d-tree and distribute themselves within domination edges, or above the root, in any way that is compati- ble with the domination relationships present in the substituted d-tree. DTG provide a mechanism called subsertion-insertlon constraints to control what can appear within d-edges (see below). The second composition operation involving d- trees is called sister-adjunction. When a d-tree a is sister-adjoined at a node y in a d-tree fl the com- posed d-tree 7 results from the addition to /~ of a as a new leftmost or rightmost sub-d-tree below 7/. Note that sister-adjunction involves the addition of exactly one new immediate domination edge and that severM sister-adjunctions can occur at the same node. Sister-adjoining constraints specify where d-trees can be sister-adjoined and whether they will be right- or left-sister-adjoined (see below). A DTG is a four tuple G = (VN, VT, S, D) where VN and VT are the usual nonterminal and termi- nal alphabets, S E V~ is a distinguished nonter- minal and D is a finite set of elementary d-trees. A DTG is said to be lexicalized if each d-tree in the grammar has at least one terminal node. The elementary d-trees of a grammar G have two addi- tionM annotations: subsertion-insertion constraints and sister-adjoining constraints• These will be de- scribed below, but first we define simultaneously DTG derivations and subsertion-adjoining trees (SA- trees), which are partial derivation structures that can be interpreted as representing dependency in- formation, the importance of which was stressed in the introduction 5. Consider a DTG G = (VN, VT,S, D). In defining SA-trees, we assume some naming convention for the elementary d-trees in D and some consistent or- dering on the components and nodes of elementary d-trees in D. For each i, we define the set of d-trees TI(G) whose derivations are captured by SA-trees of height i or less. Let To(G) be the set D of elemen- tary d-trees of G. Mark all of the components of each d-tree in To(G) as being substitutable 6. Only com- ponents marked as substitutable can be substituted in a subsertion operation. The SA-tree for ~ E To(G) consists of a single node labelled by the elementary d-tree name for a. For i > 0 let ~(G) be the union of the set ~-I(G) with the set of all d-trees 7 that can be produced as follows. Let a E D and let 7 be the result of subser- ting or sister-adjoining the d-trees 71,- •., 7k into a where 71, • -., 7k are all in Ti- I(G), with the subser- tions taking place at different substitution nodes in as the footnote. Only substitutable components of 71,..-, 3'k can be substituted in these subsertions. 
Only the new components of 7 that came from a are marked as substitutable in 7. Let Vl,..., ~'k be the SA-trees for 71,..-,7k, respectively. The SA-tree r for 7 has root labelled by the name for a and k sub- trees rt,. •., rk. The edge from the root of r to the root of the subtree ri is labelled by li (1 < i < k) de- fined as follows. Suppose that 71 was subserted into a and the root of r/is labelled by the name of some c~ ~ E D. Only components of a ~ will have been mar- ked as substitutable in 7/- Thus, in this subsertion some component cJ(j) will have been substituted at a node in a with address n. In this case, the la- bel l~ is the pair (j, n). Alternatively, 7i will have S I)ue to space limitations, in the following definiti- ons we are forced to be somewhat imprecise when we identify a node in a derived d-tree with the node in the elementary d-trees (elementary nodes) from which it was derived. This is often done in TAG literature, and hope- fully it will be clear what is intended. eWe will discuss the notion of substitutability further in the next section. It is used to ensure the $A-tree is a tree. That is, an elementary structure cannot be subserted into more than one structure since this would be counter to our motivations for using subsertion for complementation. 154 been d-sister-adjoined at some node with address n in a, in which case li will be the pair (d, n) where d e { left, right }. The tree set T(G) generated by G.is defined as the set of trees 7 such that: 7' E T/(G) for some i 0; 7 ~ is rooted with the nonterminal S; the frontier of 7' is a string in V~ ; and 7 results from the removal of all d-edges from 7'. A d-edge is removed by merging the nodes at either end of the edge as long as they are labelled by the same symbol. The string language L(G) associated with G is the set of terminal strings appearing on the frontier of trees in T(G). We have given a reasonably precise definition of SA-trees since they play such an important role in the motivation for this work. We now describe infor- mally a structure that can be used to encode a DTG derivation. A derivation graph for 7 E T(G) results from the addition of insertion edges to a SA-tree r for 7. The location in 7 of an inserted elementary component a(i) can be unambiguously determined by identifying the source of the node (say the node with address n in the elementary d-tree a') with which the root of this occurrence of a(i) is merged with when d-edges are removed. The insertion edge will relate the two (not necessarily distinct) nodes corresponding to appropriate occurrences of a and a' and will be labelled by the pair (i, n). Each d-edge in elementary d-trees has an associa- ted subsertion-insertion constraint (SIC). A SIC is a finite set of elementary node addresses (ENAs). An I=NA ~} specifies some elementary d-tree a E D, a component of a and the address of a node within that component of a. If a ENA y/is in the SIC asso- ciated with a d-edge between 7z and r/2 in an elemen- tary d-tree a then ~/cannot appear properly within the path that appears from T/t to T/2 in the derived tree 7 E T(G). Each node of elementary d-trees has an associa- ted sister-adjunction constraint (SAC). A SAC is a finite set of pairs, each pair identifying a direction (left or right) and an elementary d-tree. A SAC gi- ves a complete specification of what can be sister- adjoined at a node. If a node ~/is associated with a SAC containing a pair (d, a) then the d-tree a can be d-sister-adjoined at r/. 
By definition of sister-adjunction, all substitution nodes and all nodes at the top of d-edges can be assumed to have SACs that are the empty set. This prevents sister-adjunction at these nodes.

In this section we have defined "raw" DTG. In a more refined version of the formalism we would associate (a single) finite-valued feature structure with each node (the trees used in Section 3 make use of such feature structures). It is a matter of further research to determine to what extent SICs and SACs can be stated globally for a grammar, rather than being attached to d-edges/nodes. See the next section for a brief discussion of linguistic principles from which a grammar's SICs could be derived.

3 Linguistic Examples

In this section, we show how an account for the data introduced in Section 1 can be given with DTG.

3.1 Getting Dependencies Right: English

[Figure 4: D-trees for (1) - elementary d-trees for claims, seems, to adore, hotdogs, and Mary]

In Figure 4, we give a DTG that generates sentence (1). Every d-tree is a projection from a lexical anchor. The label of the maximal projection is, we assume, determined by the morphology of the anchor. For example, if the anchor is a finite verb, it will project to S, indicating that an overt syntactic ("surface") subject is required for agreement with it (and perhaps case-assignment). Furthermore, a finite verb may optionally also project to S' (as in the d-tree shown for claims), indicating that a wh-moved or topicalized element is required. The finite verb seems also projects to S, even though it does not itself provide a functional subject. In the case of the to adore tree, the situation is the inverse: the functional subject requires a finite verb to agree with, which is signaled by the fact that its component's root and frontier nodes are labelled S and VP, respectively, but the verb itself is not finite and therefore only projects to VP[-fin]. (In this context, it might be beneficial to consider the expression of a feature-based lexicalist theory such as HPSG in DTG, similar to the compilation of HPSG to TAG (Kasper et al., 1995).) Therefore, the subject will have to raise out of its clause for agreement and case assignment. The direct object of to adore has wh-moved out of the projection of the verb (we include a trace for the sake of clarity).

[Figure 5: Derived tree for (1)]

We add SICs to ensure that the projections are respected by components of other d-trees that may be inserted during a derivation. A SIC is associated with the d-edge between the VP and S nodes in the seems d-tree to ensure that no node labelled S' can be inserted within it, i.e., it cannot be filled by a wh-moved element. In contrast, since both the subject and the object of to adore have been moved out of the projection of the verb, the paths to these arguments do not carry any SIC at all.

We now discuss a possible derivation. We start out with the most deeply embedded clause, the adores clause. Before subserting its nominal arguments, we sister-adjoin the two adjectival trees to the tree for hotdogs. This is handled by a SAC associated with the N' node that allows all trees rooted in AdjP to be left sister-adjoined. We then subsert this structure and the subject into the to adore d-tree.
We subsert the resulting structure into the seems clause by substituting its maximal projection node, labelled VP[fin: -], at the VP[fin: -] frontier node of seems, and by inserting the subject into the d-edge of the seems tree. Now, only the S node of the seems tree (which is its maximal projection) is substitutable. Finally, we subsert this derived struc- 9We enforce island effects for wh-movement by using a [+extract] feature on substitution nodes. This corre- sponds roughly to the analysis in TAG, where islandhood is (to a large extent) enforced by designating a particular node as the foot node (Kroch & Joshi, 1986). ture into the claims d-tree by substituting the S node of seems at the S complement node of claims, and by inserting the object of adores (which has not yet been used in the derivation) in the d-edge of the claims d-tree above its S node. The derived tree is shown in Figure 5. The SA-tree for this derivation corresponds to the dependency tree given previously in Figure 2. Note that this is the only possible derivation invol- ving these three d-trees, modulo order of operations. To see this, consider the following putative alternate derivation. We first subsert the to adore d-tree into the seems tree as above, by substituting the anchor component at the substitution node of seems. We insert the subject component of fo adore above the anchor component of seems. We then subsert this derived structure into the claims tree by substitu- ting the root of the subject component of to adore at the S node of claims and by inserting the S node of the seems d-tree as well as the object component of the to adore d-tree in the S'/S d-edge of the claims d-tree. This last operation is shown in Figure 6. The resulting phrase structure tree would be the same as in the previously discussed derivation, but the deri- vation structure is linguistically meaningless, since to adore world have been subserted into both seems and claims. However, this derivation is ruled out by the restriction that only substitutable components can be substituted: the subject component of the adore d-tree is not substitutable after subsertion into the seems d-tree, and therefore it cannot be substi- tuted into the claims d-tree. S ~ NP S i (hotdogs) t S ! S Substitution NP ~ + l (Mary) V VP[fin: -] seems V NP I I to adore e Insertions S' S NP VP [fm: +1 J i VP [fin: +] V S ' t claims Figure 6: An ill-formed derivation In the above discussion, substitutability played a 156 central role in ruling out the derivation. We observe in passing that the SIC associated to the d-edge in the seems d-tree also rules out this derivation. The derivation requires that the S node of seems be in- serted into the SI/S d-edge of claims. However, we would have to stretch the edge over two components which are both ruled out by the SIC, since they vio- late the projection from seems to its S node. Thus, the derivation is excluded by the independently mo- tivated Sits, which enforce the notion of projection. This raises the possibility that, in grammars that ex- press certain linguistic principles, substitutability is not needed for ruling out derivations of this nature. We intend to examine this issue in future work. 3.2 Getting Word Order Right: Kashmiri [ twO~:: -~ NP VP ! (ramesha~) ' F top:-'1 ' / wS:-I Aux VP (chu) NP VP e V VP _ I baasaan fin:" [ tw°~:: q] NP VP (kyaa) ' I" tol): "1 COMP VP (ki) NP VP (m~) ~ NP VP I I e V I kor Figure 7: D-trees for (2b) Figure 7 shows the matrix and embedded clauses for sentence (2b). 
We use the node label VP throug- hout and use features such as top (for topic) to diffe- rentiate different levels of projection. Observe that in both trees an argument has been fronted. Again, we will use the SlCs to enforce the projection from a lexical anchor to its maximal projection. Since the direct object of kor has wh-moved out of its clause, the d-edge connecting it to the maximal projection of its verb has no SIC. The d-edge connecting the maximal projection of baasaan to the Aux compo- nent, however, has a SIC that allows only VP[wh: +, top: -] nodes to be inserted. v r +1 ~ L fi,,: +J . VP ~n5: rameshas ~ f i n : +J Aux VP ¢hu I vp e ~f~Vp [ fin:' -t,,J tw°~:: -] baaaaan COMP VP ki NP VP me NP VP I I e V I kor Figure 8: Derived d-tree for (2b) The derivation proceeds as follows. We first sub- sert the embedded clause tree into the matrix clause tree. After that, we subsert the nominal arguments and function words. The derived structure is shown in Figure 8. The associated SA-tree is the desired, semantically motivated, dependency structure: the embedded clause depends on the matrix clause. In this section, we have discussed examples where the elementary objects have been obtained by pro- jecting from lexical items. In these cases, we over- come both the problems with TAG considered in Section 1. The SlCs considered here enforce the same notion of projection that was used in obtai- ning the elementary structures. This method of ar- riving at SlCs not only generalizes for the English and Kashmiri examples but also appears to apply to the case of long-distance scrambling and topicaliza- tion in German. 157 4 Recognition It is straightforward to ".~lapt the polynomial-time El<Y-style recognition algorithm for a lexicalized UVG-DI. of Rarnbow (1994b) for DTG. The entries in this array recording derivations of substrings of input contain a set of elementary nodes along with a multi-set of components that must be in~rted above during bottom-up recognition. These components are added or removed at substitution and insertion. The algorithm simulates traversal of a derived tree; checking for SICS and SACs can be done easily. Bec- anse of lexicalization, the size of these multi-sets is polynomially bounded, from which the polynomial time and space complexity of the algorithm follows. For practical purposes, especially for lexicalized grammars, it is preferable to incorporate some ele- ment of prediction. We are developing a polynomial- time Earley style parsing algorithm. The parser re- turns a parse forest encoding all parses for an input string. The performance of this parser is sensitive to the grammar and input. Indeed it appears that for grammars that lexicalize CFG and for English gram- mar (where the structures are similar to the I_TAG developed at University of Pennsylvania (XTAG Re- search Group, 1995)) we obtain cubic-time comple- xity. 5 Conclusion DTG, like other formalisms in the TAG family, is lexi- calizable, but in addition, its derivations are them- selves linguistically meaningful. In future work we intend to examine additional linguistic data, refining aspects of our definition as needed. We will also study the formal properties of DTG, and complete the design of the Earley style parser. Acknowledgements We would like to thank Rakesh Bhatt for help with the Kashmiri data. We are also grateful to Tilman Becker, Gerald Gazdar, Aravind Joshi, Bob Kasper, Bill Keller, Tony Kroch, Klans Netter and the ACL- 95 referees. 
R, ambow was supported by the North Atlantic Treaty Organization under a Grant awar- ded in 1993, while at TALANA, Universitd Paris 7. References T. Becket, A. Joshi, & O. Rainbow. 1991. Long distance scrambling and tree adjoining grammars. In EACL- 91, 21-26. R. Bhatt. 1994. Word order and case in Kashmiri. Ph.D. thesis, Univ. Illinois. T. Bleam. 1994. Clitic climbing in spanish: a GB per- spective. In TAG+ Workshop, Tech. Rep. TALANA- RT-94-01, Universit~ Paris 7, 16-19. J. Bzesnan & R. Kapl~n. 1982. Lexical-functional gram- mar: A formM system for grammatical representa~ tion. It, J. Bresnan, ed., The Mental Representation o] Grammatical Relations. MIT Press. R. Frank. 1992. Syntactic Locality and Tree Adjoining Grammar: Grammatical, Acquisition and Processing Perspectives. Ph.D. thesis, Dept. Comp. & Inf. Sc., Univ. Pennsylvania. A. Joshi. 1987. An introduction to tree adjoining gram- mars. In A. Manaster-Ramer, ed., Mathematica o] Language, 87-114. A. Joshi, L. Levy, & M. Takahashi. 1975. Tree adjunct grammars. J. Comput. Syst. Sci., 10(1):136-163. A. Joshi & Y. Schabes. 1991. Tree-adjoining grammars and lexicalized grammars. In M. Nivat & A. Podelski, eds., Definability and Recognizability o/Sets of Trees. R. Kasper, E. Kiefer, K. Netter, & K. Vijay-Shanker 1995. Compilation of HPSG to TAG. In ACL-95. A. Kroch. 1987. Subjacency in a tree adjoining gram- mar. In A. Manaster-Ramer, ed., Mathematics o/Lan- guage, 143-172. A. Kroch. 1989. Asymmetries in long distance extrac- tion in a Tree Adjoining Grammar. In Mark Baltin & Anthony Kroch, editors, Alternative Conceptions of Phrase Structure, 66-98. A. Kroch & A. Joshi. 1986. Analyzing extraposition in a tree adjoining grammar. In G. Huck & A. Ojeda, eds., Syntax ~ Semantics: Discontinuous Constitu- ents, 107-149. I. Mel'~uk. 1988. Dependency Syntax: Theory and Prac- tice. O. Rambow. 1994. Formal and Computational Aspects olNaturol Language Syntax. Ph.D. thesis, Dept. Corn- put. & Inf. Sc., Univ. Pennsylvania. O. Rambow. 1994. Multiset-Valued Linear Index Gram- mars. In ACL-94, 263-270. O. Rainbow & A. Joshi. 1992. A formal look at de- pendency grammars and phrase-structure grammars, with special consideration of word-order phenomena. In 1stern. Workshop on The Meaning-Text Theory, Darmstadt. Arbeitspapiere der GMD 671, 47-66. B. Santorini & S. Mahootian. 1995. Codeswitching and the syntactic status of adnominal adjectives. Lingua, 95. Y. Schabes & S. Shieber. 1994. An alternative con- ception of tree-adjoining derivation. Comput. Ling., 20(1):91-124. S. Shieber. 1985. Evidence against the context-freeness of natural language. Ling. ~ Phil., 8:333-343. K. Vijay-Shanker. 1987. A Study o] Tree Adjoining Grammars. Ph.D. thesis, Dept. Comput. & Inf. Sc., Univ. Pennsylvania. K. Vijay-Shanker. 1992. Using descriptions of trees in a tree adjoining grammar. Comput. Ling., 18(4):481- 517. The XTAG Research Group. 1995. A lexicalized tree ad- joining grammar for English. Tech. Rep. IRCS Report 95-03, Univ. Pennsylvania. 158 | 1995 | 21 |
The intersection of Finite State Automata and Definite Clause Grammars

Gertjan van Noord
Vakgroep Alfa-informatica & BCN
Rijksuniversiteit Groningen
[email protected]

Abstract

Bernard Lang defines parsing as the calculation of the intersection of a FSA (the input) and a CFG. Viewing the input for parsing as a FSA rather than as a string combines well with some approaches in speech understanding systems, in which parsing takes a word lattice as input (rather than a word string). Furthermore, certain techniques for robust parsing can be modelled as finite state transducers. In this paper we investigate how we can generalize this approach for unification grammars. In particular we will concentrate on how we might compute the intersection of a FSA and a DCG. It is shown that existing parsing algorithms can be easily extended for FSA inputs. However, we also show that the termination properties change drastically: we show that it is undecidable whether the intersection of a FSA and a DCG is empty (even if the DCG is off-line parsable). Furthermore we discuss approaches to cope with the problem.

1 Introduction

In this paper we are concerned with the syntactic analysis phase of a natural language understanding system. Ordinarily, the input of such a system is a sequence of words. However, following Bernard Lang we argue that it might be fruitful to take the input more generally as a finite state automaton (FSA) to model cases in which we are uncertain about the actual input. Parsing uncertain input might be necessary in case of ill-formed textual input, or in case of speech input.

For example, if a natural language understanding system is interfaced with a speech recognition component, chances are that this component is uncertain about the actual string of words that has been uttered, and thus produces a word lattice of the most promising hypotheses, rather than a single sequence of words. FSA of course generalize such word lattices.

As another example, certain techniques to deal with ill-formed input can be characterized as finite state transducers (Lang, 1989); the composition of an input string with such a finite state transducer results in a FSA that can then be input for syntactic parsing. Such an approach allows for the treatment of missing, extraneous, interchanged or misused words (Teitelbaum, 1973; Saito and Tomita, 1988; Nederhof and Bertsch, 1994).

Such techniques might be of use both in the case of written and spoken language input. In the latter case another possible application concerns the treatment of phenomena such as repairs (Carter, 1994).

Note that we allow the input to be a full FSA (possibly including cycles, etc.) since some of the above-mentioned techniques indeed result in cycles. Whereas an ordinary word-graph always defines a finite language, a FSA of course can easily define an infinite number of sentences. Cycles might emerge to treat unknown sequences of words, i.e. sentences with unknown parts of unknown lengths (Lang, 1988). As suggested by an ACL reviewer, one could also try to model haplology phenomena (such as the 's in English sentences like 'The chef at Joe's hat', where 'Joe's' is the name of a restaurant) using a finite state transducer. In a straightforward approach this would also lead to a finite-state automaton with cycles.

It can be shown that the computation of the intersection of a FSA and a CFG requires only a minimal generalization of existing parsing algorithms.
We simply replace the usual string positions with the names of the states in the FSA. It is also straightforward to show that the complexity of this process is cubic in the number of states of the FSA (in the case of ordinary parsing the number of states equals n + 1) (Lang, 1974; Billot and Lang, 1989) (assuming the right-hand-sides of grammar rules have at most two categories).

In this paper we investigate whether the same techniques can be applied in case the grammar is a constraint-based grammar rather than a CFG. For specificity we will take the grammar to be a Definite Clause Grammar (DCG) (Pereira and Warren, 1980). A DCG is a simple example of a family of constraint-based grammar formalisms that are widely used in natural language analysis (and generation). The main findings of this paper can be extended to other members of that family of constraint-based grammar formalisms.

2 The intersection of a CFG and a FSA

The calculation of the intersection of a CFG and a FSA is very simple (Bar-Hillel et al., 1961). The (context-free) grammar defining this intersection is simply constructed by keeping track of the state names in the non-terminal category symbols. For each rule X0 → X1 ... Xn there are rules ⟨X0,q0,qn⟩ → ⟨X1,q0,q1⟩ ⟨X2,q1,q2⟩ ... ⟨Xn,qn−1,qn⟩, for all states q0 ... qn. Furthermore, for each transition δ(qi,σ) = qk we have a rule ⟨σ,qi,qk⟩ → σ. Thus the intersection of a FSA and a CFG is a CFG that exactly derives all parse-trees. Such a grammar might be called the parse-forest grammar.

Although this construction shows that the intersection of a FSA and a CFG is itself a CFG, it is not of practical interest. The reason is that this construction typically yields an enormous amount of rules that are 'useless'. In fact the (possibly enormously large) parse forest grammar might define an empty language (if the intersection was empty). Luckily "ordinary" recognizers/parsers for CFG can be easily generalized to construct this intersection, yielding (in typical cases) a much smaller grammar. Checking whether the intersection is empty or not is then usually very simple as well: only in the latter case will the parser terminate successfully.

To illustrate how a parser can be generalized to accept a FSA as input we present a simple top-down parser. A context-free grammar is represented as a definite-clause specification as follows. We do not wish to define the sets of terminal and non-terminal symbols explicitly; these can be understood from the rules that are defined using the relation rule/2, where symbols of the grammar are prefixed with '-' in the case of terminals and '+' in the case of non-terminals. The relation top/1 defines the start symbol. The language L' = aⁿbⁿ is defined as:

    top(s).
    rule(s, [-a,+s,-b]).
    rule(s, []).

In order to illustrate how ordinary parsers can be used to compute the intersection of a FSA and a CFG, consider first the definite-clause specification of a top-down parser. This parser runs in polynomial time if implemented using Earley deduction or XOLDT resolution (Warren, 1992). It is assumed that the input string is represented by the trans/3 predicate.

    parse(P0, P) :-
        top(Cat),
        parse(+Cat, P0, P).

    parse(-Cat, P0, P) :-
        trans(P0, Cat, P),
        side_effect(p(Cat,P0,P) --> Cat).

    parse(+Cat, P0, P) :-
        rule(Cat, Ds),
        parse_ds(Ds, P0, P, His),
        side_effect(p(Cat,P0,P) --> His).

    parse_ds([], P, P, []).
    parse_ds([H|T], P0, P, [p(H,P0,P1)|His]) :-
        parse(H, P0, P1),
        parse_ds(T, P1, P, His).
The predicate side_effect is used to construct the parse forest grammar. The predicate always succeeds, and as a side-effect asserts that its argument is a rule of the parse forest grammar. For the sentence 'a a b b' we obtain the parse forest grammar:

    p(s,2,2) --> [].
    p(s,1,3) --> [p(-a,1,2),p(+s,2,2),p(-b,2,3)].
    p(s,0,4) --> [p(-a,0,1),p(+s,1,3),p(-b,3,4)].
    p(a,1,2) --> a.
    p(a,0,1) --> a.
    p(b,2,3) --> b.
    p(b,3,4) --> b.

The reader easily verifies that indeed this grammar generates (an isomorphism of) the single parse tree of this example, assuming of course that the start symbol for this parse-forest grammar is p(s,0,4). In the parse-forest grammar, complex symbols are non-terminals, atomic symbols are terminals.

Next consider the definite clause specification of a FSA. We define the transition relation using the relation trans/3. For start states the relation start/1 should hold, and for final states the relation final/1 should hold. Thus the following FSA, defining the regular language L = (aa)*b⁺ (i.e. an even number of a's followed by at least one b), is given as:

    start(q0).
    final(q2).
    trans(q0,a,q1).
    trans(q1,a,q0).
    trans(q0,b,q2).
    trans(q2,b,q2).

Interestingly, nothing needs to be changed to use the same parser for the computation of the intersection of a FSA and a CFG. If our input 'sentence' now is the definition of trans/3 as given above, we obtain the following parse forest grammar (where the start symbol is p(s,q0,q2)):

    p(s,q0,q0) --> [].
    p(s,q1,q1) --> [].
    p(s,q1,q2) --> [p(-a,q1,q0),p(+s,q0,q0),p(-b,q0,q2)].
    p(s,q0,q2) --> [p(-a,q0,q1),p(+s,q1,q2),p(-b,q2,q2)].
    p(s,q1,q2) --> [p(-a,q1,q0),p(+s,q0,q2),p(-b,q2,q2)].
    p(a,q0,q1) --> a.
    p(a,q1,q0) --> a.
    p(b,q0,q2) --> b.
    p(b,q2,q2) --> b.

Thus, even though we now use the same parser for an infinite set of input sentences (represented by the FSA), the parser still is able to come up with a parse forest grammar. A possible derivation for this grammar constructs the (abbreviated) parse tree shown in figure 1. Note that the construction of Bar-Hillel would have yielded a grammar with 88 rules.

[Figure 1: A parse-tree extracted from the parse forest grammar]

3 The intersection of a DCG and a FSA

In this section we want to generalize the ideas described above for CFG to DCG.

First note that the problem of calculating the intersection of a DCG and a FSA can be solved trivially by a generalization of the construction by (Bar-Hillel et al., 1961). However, if we use that method we will end up (typically) with an enormously large forest grammar that is not even guaranteed to contain solutions. Therefore, we are interested in methods that only generate a small subset of this; e.g. if the intersection is empty we want an empty parse-forest grammar.

The straightforward approach is to generalize existing recognition algorithms. The same techniques that are used for calculating the intersection of a FSA and a CFG can be applied in the case of DCGs. In order to compute the intersection of a DCG and a FSA we assume that FSA are represented as before.
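Before the DCG machinery is added, the CFG construction just described can also be reproduced in a few lines outside Prolog. The following Python sketch is only an illustration (not code from the paper, and not the Earley/XOLDT implementation it mentions): recognition items are (category, state, state) triples, i.e. the usual string positions are replaced by FSA state names, and the item set is closed under the grammar rules by a naive fixpoint. It uses the aⁿbⁿ grammar and the (aa)*b⁺ automaton of the running example; all identifiers are assumptions of the sketch.

    # Context-free rules and the example FSA, with state names as "positions".
    rules = [("s", ("a", "s", "b")), ("s", ())]
    trans = {("q0", "a"): "q1", ("q1", "a"): "q0",
             ("q0", "b"): "q2", ("q2", "b"): "q2"}
    states = {"q0", "q1", "q2"}
    start_state, final_states = "q0", {"q2"}

    # Items are (category, from_state, to_state) triples.
    items = {(sym, p, q) for (p, sym), q in trans.items()}          # terminal items
    items |= {(lhs, q, q) for lhs, rhs in rules if not rhs for q in states}  # epsilon

    changed = True
    while changed:                                   # naive fixpoint closure
        changed = False
        for lhs, rhs in rules:
            if not rhs:
                continue
            for p in states:
                # chain the right-hand side through the items found so far
                frontier = {p}
                for sym in rhs:
                    frontier = {q for (c, f, q) in items if c == sym and f in frontier}
                for q in frontier:
                    if (lhs, p, q) not in items:
                        items.add((lhs, p, q))
                        changed = True

    # The intersection is non-empty iff the start symbol spans a start/final pair.
    print(any(("s", start_state, q) in items for q in final_states))   # True

The final test succeeds exactly when the parse forest grammar above has a derivation from p(s,q0,q2), i.e. when the intersection is non-empty.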
DCGs are represented using the same notation we used for context-free grammars, but now of course the category symbols can be first-order terms of ar- bitrary complexity (note that without loss of gener- ality we don't take into account DCGs having exter- ]In fact, the standard compilation of DCG into Prolog clauses does something similar using variables instead of actual state names. This also illustrates that this method is not very useful yet; all the work has still to be done. 161 As 10111 B2 10 A1 1 B1 lU A2 10111 B~ 10 Aa 10 B3 0 Figure 2: Instance of a PCP problem. AI BI 1 + 111 A1 1 B1 111 A3 10 + B3 = 101111110 = 101111110 Figure 3: Illustration of a solution for the PCP problem of figure 2. nal actions defined in curly braces). But if we use existing techniques for parsing DCGs, then we are also confronted with an undecid- ability problem: the recognition problem for DCGs is undecidable (Pereira and Warren, 1983). A for- tiori the problem of deciding whether the intersec- tion of a FSA and a DCG is empty or not is undecid- able. This undecidability result is usually circum- vented by considering subsets of DCGs which can be recognized effectively. For example, we can restrict the attention to DCGs of which the context- free skeleton does not contain cycles. Recognition for such 'off-line parsable' grammars is decidable (Pereira and Warren, 1983). Most existing constraint-based parsing algo- rithms will terminate for grammars that exhibit the property that for each string there is only a finite number of possible derivations. Note that off-line parsability is one possible way of ensuring that this is the case. This observation is not very helpful in establish- ing insights concerning interesting subclasses of DCGs for which termination can be guaranteed (in the case of FSA input). The reason is that there are now two sources of recursion: in the DCG and in the FSA (cycles). As we saw earlier: even for CFG it holds that there can be an infinite number of analyses for a given FSA (but in the CFG this of course does not imply undecidability). 3.1 Intersection of FSA and off-line parsable DCG is undecidable I now show that the question whether the intersec- tion of a FSA and an off-line parsable DCG is empty is undecidable. A yes-no problem is undecidable (cf. (Hopcroft and Ullman, 1979, pp.178-179)) if there is no algorithm that takes as its input an instance of the problem and determines whether the answer to that instance is 'yes' or 'no'. An instance of a prob- lem consists of a particular choice of the parameters of that problem. I use Post's Correspondence Problem (PCP) as a well-known undecidable problem. I show that if the above mentioned intersection problem were decid- able, then we could solve the PCP too. The follow- ing definition and example of a PCP are taken from (Hopcroft and Ullman, 1979)[chapter 8.5]. An instance of PCP consists of two lists, A = vx... vk and B = wl... wk of strings over some al- phabet ~,,. Tl~s instance has a solution if there is any sequence of integers il... i,~, with m > 1, such that Vii, '0i2, • • ", Vim ~ 'Wil ~ f~Li2, • " • ~ ~im " The sequence il, • •., im is a solution to this instance of PCP. As an example, assume that :C = {0,1}. Furthermore, let A = (1, 10111, 10) and B = 011, 10, 0). A solution to this instance of PCP is the sequence 2,1,1,3 (obtaining the sequence 10111Ul0). For an illustration, cf. figure 3. Clearly there are PCP's that do not have a solu- tion. Assume again that E = {0, 1}. Furthermore let A = (1) and B = (0). 
Clearly this PCP does not have a solution. In general, however, the problem whether some PCP has a solution or not is not decidable. This result is proved by (Hopcroft and Ullman, 1979) by showing that the halting problem for Turing Machines can be encoded as an instance of Post's Correspondence Problem.

First I give a simple algorithm to encode any instance of a PCP as a pair, consisting of a FSA and an off-line parsable DCG, in such a way that the question whether there is a solution to this PCP is equivalent to the question whether the intersection of this FSA and DCG is empty.

Encoding of PCP.

1. For each 1 ≤ i ≤ k (k the length of lists A and B) define a DCG rule (the i-th member of A is a1 ... am, and the i-th member of B is b1 ... bn):
   r([a1,...,am|A], A, [b1,...,bn|B], B) → [x].
2. Furthermore, there is a rule r(A0,A,B0,B) → r(A0,A1,B0,B1), r(A1,A,B1,B).
3. Furthermore, there is a rule s → r(X,[],X,[]). Also, s is the start category of the DCG.
4. Finally, the FSA consists of a single state q which is both the start state and the final state, and a single transition δ(q,x) = q. This FSA generates x*.

Observe that the DCG is off-line parsable.

The underlying idea of the algorithm is really very simple. For each pair of strings from the lists A and B there will be one lexical entry (deriving the terminal x) where these strings are represented by a difference-list encoding. Furthermore there is a general combination rule that simply concatenates A-strings and concatenates B-strings. Finally the rule for s states that in order to construct a successful top category the A and B lists must match. The resulting DCG, FSA pair for the example PCP is given in figure 4:

    trans(q0,x,q0).                                  % FSA
    start(q0).
    final(q0).

    top(s).                                          % start symbol DCG
    rule(s, [+r(X,[],X,[])]).                        % require A's and B's match
    rule(r(A0,A,B0,B), [+r(A0,A1,B0,B1),
                        +r(A1,A,B1,B)]).             % combine two sequences of blocks
    rule(r([1|A], A, [1,1,1|B], B), [-x]).           % block A1/B1
    rule(r([1,0,1,1,1|A], A, [1,0|B], B), [-x]).     % block A2/B2
    rule(r([1,0|A], A, [0|B], B), [-x]).             % block A3/B3

    Figure 4: The encoding for the PCP problem of figure 2.

Proposition. The question whether the intersection of a FSA and an off-line parsable DCG is empty is undecidable.

Proof. Suppose the problem was decidable. In that case there would exist an algorithm for solving the problem. This algorithm could then be used to solve the PCP, because a PCP π has a solution if and only if its encoding given above as a FSA and an off-line parsable DCG is not empty. The PCP problem however is known to be undecidable. Hence the intersection question is undecidable too.

3.2 What to do?

The following approaches towards the undecidability problem can be taken:

• limit the power of the FSA
• limit the power of the DCG
• compromise completeness
• compromise soundness

These approaches are discussed now in turn.

Limit the FSA. Rather than assuming the input for parsing is a FSA in its full generality, we might assume that the input is an ordinary word graph (a FSA without cycles). Thus the techniques for robust processing that give rise to such cycles cannot be used. One example is the processing of an unknown sequence of words, e.g. in case there is noise in the input and it is not clear how many words have been uttered during this noise. It is not clear to me right now what we lose (in practical terms) if we give up such cycles.
Note that it is easy to verify that the question whether the intersection of a word-graph and an off- line parsable DCG is empty or not is decidable since 163 it reduces to checking whether the DCG derives one of a finite number of strings. Limit the DCG Another approach is to limit the size of the categories that are being employed. This is the GPSG and F-TAG approach. In that case we are not longer dealing with DCGs but rather with CFGs (which have been shown to be insufficient in general for the description of natural languages). Compromi~ completeness Completeness in this context means: the parse forest grammar contains all possible parses. It is possible to compromise here, in such a way that the parser is guaranteed to terminate, but sometimes misses a few parse-trees. For example, if we assume that each edge in the FSA is associated with a probability it is possible to define a threshold such that each partial result that is derived has a probability higher than the thres- hold. Thus, it is still possible to have cycles in the FSA, but anytime the cycle is 'used' the probabil- ity decreases and if too many cycles are encountered the threshold will cut off that derivation. Of course this implies that sometimes the in- tersection is considered empty by this procedure whereas in fact the intersection is not. For any thres- hold it is the case that the intersection problem of off-line parsable DCGs and FSA is decidable. Compromise soundness Soundness in this con- text should be understood as the property that all parse trees in the parse forest grammar are valid parse trees. A possible way to ensure termination is to remove all constraints from the DCG and parse according to this context-free skeleton. The result- ing parse-forest grammar will be too general most of the times. A practical variation can be conceived as fol- lows. From the DCG we take its context-free skele- ton. This skeleton is obtained by removing the con- straints from each of the grammar rules. Then we compute the intersection of the skeleton with the in- put FSA. This results in a parse forest grammar. Fi- nally, we add the corresponding constraints from the DCG to the grammar rules of the parse forest gral'nrrlaro This has the advantage that the result is still sound and complete, although the size of the parse forest grammar is not optimal (as a consequence it is not guaranteed that the parse forest grammar con- tains a parse tree). Of course it is possible to experi- ment with different ways of taking the context-free skeleton (including as much information as possible / useful). ACknowledgments I would like to thank Gosse Bouma, Mark-Jan Nederhof and John Nerbonne for comments on this paper. Furthermore the paper benefitted from re- marks made by the anonymous ACL reviewers. References Y. Bar-Hillel, M. Perles, and E. Shamir. 1961. On formal properties of simple phrase structure grammars. Zeitschrifl fttr Phonetik, SprachWis- senschafl und Kommunicationsforschung, 14:143-- 172. Reprinted in Bar-Hillel's Language and Information - Selected Essays on their Theory and Application, Addison Wesley series in Logic, 1964, pp. 116-150. S. Billot and B. Lang. 1989. The structure of shared parse forests in ambiguous parsing. In 27th An- nual Meeting of the Association for Computational Linguistics, pages 143-151, Vancouver. David Carter. 1994. Chapter 4: Linguistic analysis. In M-S. Agnts, H. Alshawi, I. Bretan, D. Carter, K. Ceder, M. Collins, IL Crouch, V. Digalakis, B Ekholm, B. Gamb~ick, J. Kaja, J. Karlgren, B. 
Ly- berg, P. Price, S. Pulman, M. Rayner, C. Samuels- son, and T. Svensson, editors, Spoken Language Translator: First Year Report. SICS Sweden / SRI Cambridge. SICS research report R94:03, ISSN 0283-3638. Barbara Grosz, Karen Sparck Jones, and Bonny Lynn Webber, editors. 1986. Readings in Natural Language Processing. Morgan Kauf- John E. Hopcroft and Jeffrey D. Ullman. 1979. In- troduction to Automata Theory, Languages and Com- putation. Addison Wesley. Bernard Lang. 1974. Deterministic techniques for efficient non-deterministic parsers. In J. Loeckx, editor, Proceedings of the Second Colloquium on Au- tomata, Languages and Programming. Also: Rap- port de Recherche 72, IRIA-Laboria, Rocquen- court (France). Bernard Lang. 1988. Parsing incomplete sentences. In Proceedings of the 12th International Conference on Computational Linguistics (COLING), Budapest. Bernard Lang. 1989. A generative view of ill- formed input processing. In ATR Symposium on Basic Research for Telephone Interpretation (ASTI), Kyoto Japan. Mark-Jan Nederhof and Eberhard Bertsch. 1994. Linear-time suffix recognition for deterministic 164 languages. Technical Report CSI-R9409, Comput- ing Science Institute, KUN Nijmegen. Fernando C.N. Pereira and David Warren. 1980. Definite clause grammars for language analysis - a survey of the formalism and a comparison with augmented transition networks. Artificial Intelli- gence, 13~ reprinted in (Grosz et al., 1986). Femando C.N. Pereira and David Warren. 1983. Parsing as deduction. In 21st Annual Meeting of the Association for Computational Linguistics, Cam- bridge Massachusetts. H. Saito and M. Tomita. 1988. Parsing noisy sentences. In Proceedings of the 12th International Conference on Computational Linguistics (COLING), pages 561-566, Budapest. R. Teitelbaum. 1973. Context-free error analysis by evaluation of algebraic power series. In Proceed- ings of the Fifth Annual ACM Symposium on Theory of Computing, Austin, Texas. David S. Warren. 1992. Memoing for logic pro- grams. Communications of the ACM, 35(3):94-111. 165 | 1995 | 22 |
TAL Recognition in O(M(n²)) Time

Sanguthevar Rajasekaran
Dept. of CISE, Univ. of Florida
[email protected]

Shibu Yooseph
Dept. of CIS, Univ. of Pennsylvania
[email protected]

Abstract

We propose an O(M(n²)) time algorithm for the recognition of Tree Adjoining Languages (TALs), where n is the size of the input string and M(k) is the time needed to multiply two k × k boolean matrices. Tree Adjoining Grammars (TAGs) are formalisms suitable for natural language processing and have received enormous attention in the past among not only natural language processing researchers but also algorithms designers. The first polynomial time algorithm for TAL parsing was proposed in 1986 and had a run time of O(n⁶). Quite recently, an O(n³M(n)) algorithm has been proposed. The algorithm presented in this paper improves the run time of the recent result using an entirely different approach.

1 Introduction

The Tree Adjoining Grammar (TAG) formalism was introduced by Joshi, Levy and Takahashi (1975). TAGs are tree generating systems, and are strictly more powerful than context-free grammars. They belong to the class of mildly context sensitive grammars (Joshi, et al., 1991). They have been found to be good grammatical systems for natural languages (Kroch, Joshi, 1985). The first polynomial time parsing algorithm for TALs was given by Vijayashanker and Joshi (1986), which had a run time of O(n⁶) for an input of size n. Their algorithm had a flavor similar to the Cocke-Younger-Kasami (CYK) algorithm for context-free grammars. An Earley-type parsing algorithm has been given by Schabes and Joshi (1988). An optimal linear time parallel parsing algorithm for TALs was given by Palis, Shende and Wei (1990). In a recent paper, Rajasekaran (1995) shows how TALs can be parsed in time O(n³M(n)).

In this paper, we propose an O(M(n²)) time recognition algorithm for TALs, where M(k) is the time needed to multiply two k × k boolean matrices. The best known value for M(k) is O(k^2.376) (Coppersmith, Winograd, 1990). Though our algorithm is similar in flavor to those of Graham, Harrison, & Ruzzo (1976), and Valiant (1975) (which were algorithms proposed for recognition of Context Free Languages (CFLs)), there are crucial differences. As such, the techniques of (Graham, et al., 1976) and (Valiant, 1975) do not seem to extend to TALs (Satta, 1993).

2 Tree Adjoining Grammars

A Tree Adjoining Grammar (TAG) consists of a quintuple (N, Σ ∪ {ε}, I, A, S), where

N is a finite set of nonterminal symbols,
Σ is a finite set of terminal symbols disjoint from N,
ε is the empty terminal string not in Σ,
I is a finite set of labelled initial trees,
A is a finite set of auxiliary trees,
S ∈ N is the distinguished start symbol.

The trees in I ∪ A are called elementary trees. All internal nodes of elementary trees are labelled with nonterminal symbols. Also, every initial tree is labelled at the root by the start symbol S and has leaf nodes labelled with symbols from Σ ∪ {ε}. An auxiliary tree has both its root and exactly one leaf (called the foot node) labelled with the same nonterminal symbol. All other leaf nodes are labelled with symbols in Σ ∪ {ε}, at least one of which has a label strictly in Σ. An example of a TAG is given in figure 1. A tree built from an operation involving two other trees is called a derived tree. The operation involved is called adjunction.
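To make the definitions above concrete, the following Python sketch shows one possible way of representing elementary trees and checking the auxiliary-tree conditions. The class and field names are illustrative assumptions of this sketch, not notation from the paper, and the tiny example tree is not the grammar of figure 1.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TreeNode:
        label: str                                   # nonterminal, terminal, or "" for epsilon
        children: List["TreeNode"] = field(default_factory=list)
        is_foot: bool = False                        # marks the foot node of an auxiliary tree

    def leaves(t: TreeNode) -> List[TreeNode]:
        return [t] if not t.children else [n for c in t.children for n in leaves(c)]

    def is_auxiliary(t: TreeNode, nonterminals: set) -> bool:
        """Root and exactly one leaf (the foot) share the same nonterminal label;
        every other leaf is a terminal or epsilon, and at least one is a real terminal."""
        foot = [n for n in leaves(t) if n.is_foot]
        others = [n for n in leaves(t) if not n.is_foot]
        return (len(foot) == 1
                and foot[0].label == t.label
                and t.label in nonterminals
                and all(n.label not in nonterminals for n in others)
                and any(n.label != "" for n in others))

    # A minimal auxiliary tree: root S, one terminal leaf 'a', and a foot node S*.
    beta = TreeNode("S", [TreeNode("a"), TreeNode("S", is_foot=True)])
    print(is_auxiliary(beta, {"S"}))                 # True

Initial trees can be validated in the same style by requiring the root label to be S and every leaf to be a terminal or ε.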
Formally, adjunction is an op- eration which builds a new tree 7, from an auxiliary tree fl and another tree ~ (a is any tree - initial, aux- iliary or derived). Let c~ contain an internal node m labelled X and let fl be the auxiliary tree with root node also labelled X. The resulting tree 7, obtained by adjoining fl onto c~ at node m is built as follows (figure 2): 166 Initial tree O~ S I E G = {{S},{a,b,c,e }, { or}, { ~}, S} S S b S* Figure 1: Example of a TAG Auxiliary tree 1. The subtree of a rooted at m, call it t, is excised, leaving a copy of m behind. 2. The auxiliary tree fl is attached at the copy of m and its root node is identifed with the copy of m. 3. The subtree t is attached to the foot node of fl and the root node of t (i.e. m) is identified with the foot node of ft. This definition can be extended to include adjunc- tion constraints at nodes in a tree. The constraints include Selective, Null and Obligatory adjunction constraints. The algorithm we present here can he modified to include constraints. For our purpose, we will assume that every inter- nal node in an elementary tree has exactly 2 children. Each node in a tree is represented by a tuple < tree, node index, label >. (For brevity, we will refer to a node with a single variable m whereever there is no confusion) A good introduction to TAGs can be found in (Partee, et al., 1990). 3 Context Free recognition in O( M(n)) Time The CFG G = (N,~,P, A1), where N is a set of Nonterminals {A1, A2, .., Ak}, is a finite set of terminals, P is a finite set of productions, A1 is the start symbol is assumed to be in the Chomsky Normal Form. Valiant (1975) shows how the recognition problem can be reduced to the problem of finding Transitive Closure and how Transitive Closure can be reduced to Matrix Multiplication. Given an input string aza2 .... an E ~*, the recur- sive algorithm makes use of an (n+l)× (n+l) upper triangular matrix b defined by hi,i+1 = {Ak I(Ak --* a,) E P}, bi,j = ¢, for j • i + 1 and proceeds to find the transitive closure b + of this matrix. (If b + is the transitive closure, then Ak E b. +. ¢:~ Ak-~ ai .... aj-1) $,J Instead of finding the transitive closure by the cus- tomary method based on recursively splitting into disjoint parts, a more complex procedure based on 'splitting with overlaps' is used. The extra cost in- volved in such a strategy can be made almost negligi- ble. The algorithm is based on the following lemma Lemma : Let b be an n x n upper triangular ma- trix, and suppose that for any r > n/e, the tran- sitive closure of the partitions [1 < i,j < r] and [n- r < i,j < n] are known. Then the closure of b can be computed by I. performing a single matrix multiplication, and 2. finding the closure of a 2(n - r) × 2(n - r) up- per triangular matrix of which the closure of the partitions[1 < i,j < n- r] and [n- r < i,j < 2(n - r)] are known. Proof: See (Valiant, 1975)for details The idea behind (Valiant, 1975) is based on visu- alizing Ak E b+j as spanning a tree rooted at the node Ak with l~aves ai through aj-1 and internal nodes as nonterminals generated from Ak according to the productions in P. Having done this, the fol- lowing observation is made : Given an input string al...a, and 2 distinct sym- bol positions, i and j, and a nonterminal Ak such that Ak E b + ., where i' < i,j' > j, then 3 a non- I P3 terminal A k, which is a descendent of Ak in the b + . 
where tree rooted at Ak, such that A k, E i d' i" < i, j" > j and A k, has two children Ak~ and Ak2 such thatAk~ Eb +, andAk2 Eb +..withi<s<j. A k, can be thought of as a minimal node in this sense.(The descendent relation is both reflexive and transitive) Thus, given a string al...a, of length n, (say r = 2/3), the following steps are done : 167 t Figure 2: Adjunction Operation k t 1. Find the closure of the first 2/3 ,i.e. all nodes spanning trees which are within the first 2/3 . 2. Find the closure of the last 2/3 , i.e. all nodes spanning trees which are within the last 2/3. 3. Do a composition operation (i.e. matrix multi- plication) on the nodes got as a result of Step 1 with nodes got as a result of Step 2. 4. Reduce problem size to az...an/zal+2n/3...an and find closure of this input. The point to note is that in step 3, we can get rid of the mid 1/3 and focus on the remaining problem size. This approach does not work for TALs because of the presence of the adjunction operation. Firstly, the data structure used, i.e. the 2- dimensional matrix with the given representation, is not sufficient as adjunction does not operate on contiguous strings. Suppose a node in a tree domi- nates a frontier which has the substring aiaj to the left of the foot node and akat to the right of the footnode. These substrings need not be a contigu- ous part of the input; in fact, when this tree is used for adjunction then a string is inserted between these two suhstrings. Thus in order to represent a node, we need to use a matrix of higher dimension, namely dimension 4, to characterize the substring that ap- pears to the left of the footnode and the substring that appears to the right of the footnode. Secondly, the observation we made about an entry E b + is no longer quite true because of the presence of adjunction. Thirdly, the technique of getting rid of the mid 1/3 and focusing on the reduced problem size alone, does not work as shown in figure 3: Suppose 3' is a derived tree in which 3 a node rn on which adjunction was done by an auxiliary tree ft. Even if we are able to identify the derived tree 71 rooted at m, we have to first identify fl before we can check for adjunction, fl need not be realised as a result of the composition operation involving the nodes from the first and last 2/3's ,(say r =2/3). Thus, if we discard the mid 1/3, we will not be able to infer that the adjunction had indeed taken place at node m. 4 Notations Before we introduce the algorithm, we state the no- tations that will be used. We will be making use of a 4-dimensional matrix A of size (n + 1) x (n + 1) x (n + 1) x (n + 1), where n is the size of the input string. (Vijayashanker, Joshi, 1986) Given a TAG G and an input string aza2..an, n > 1, the entries in A will be nodes of the trees of G. We say, that a node m (= < 0, node index, label >) E A(i,j, k, l) iff m is a node in a derived tree 7 and the subtree of 7 rooted at m has a yield given by either ai+l...ajXak+l...al (where X is the footnode of r/, j < k) or ai+l .... az (when j = k). If a node m E A(i,j,k,l}, we will refer to m as spanning a tree (i,j,k,l). When we refer to a node m being realised as a result of composition of two nodes ml and rnP, we mean that 3 an elementary tree in which m is the parent of ml and m2. A Grown Auxiliary Tree is defined to be either a tree resulting from an adjunction involving two auxiliary trees or a tree resulting from an adjunction involving an auxiliary tree and a grown auxiliary tree. 
Given a node m spanning a tree (i,j,k,l), we define the last operation to create this tree as follows : if the tree (i,j,k,l) was created in a series of op- erations, which also involved an adjunction by an auxiliary tree (or a grown auxiliary tree) (i, Jl, kz, l) onto the node m, then we say that the last opera- tion to create this tree is an adjunction operation; else the last operation to create the tree (i,j,k,l) is a composition. The concept of last operation is useful in modelling the steps required, in a bottom-up fashion, to create 168 n .. x 71 Node m has label X /, '3' Derived tree 71 Figure 3: Situation where we cannot infer the adjunction if we simply get rid of the mid 1/3 a tree. 5 Algorithm Given that the set of initial and auxiliary trees can have leaf nodes labelled with e, we do some prepro- cessing on the TAG G to obtain an Association List (ASSOC LIST) for each node. ASSOC LIST (m), where m is a node, will be useful in obtaining chains of nodes in elementary trees which have children la- belled ~. Initialize ASSOC LIST (m) = ¢, V m, and then call procedure MAKELIST on each elementary tree, in a top down fashion starting with the root node. Procedure MAKELIST (m) Begin 1. If m is a leaf then quit 2. If m has children ml and me both yielding the empty string at their frontiers (i.e. m spans a subtree yielding e) then ASSOC LIST (ml) = ASSOC LIST (m) u {m) ASSOC LIST (m2) = ASSOC LIST (m) U (m} 3. If m has children m1 and me, with only me yielding the empty string at its frontier, then ASSOC LIST (ml) = ASSOC LIST (m) u {m) End We initially fill A(i,i+l,i+l,i+l) with all nodes from Smt,Vml, where S,~1 = {ml} O AS- SOC LIST (ml), ml being a node with the same label as the input hi+l, for 0 < i < n-1. We also fill A(i,i,j,j), i < j, with nodes from S,~2, Vm2, where Sin2 = {me) tJ ASSOC LIST (me), me being a foot node. All entries A(i,i,i,i), 0 < i < n, are filled with nodes from Sraa,Vm3, where S,n3 = { m3} U AS- SOC LIST (mS), m3 having label ¢. Following is the main procedure, Compute Nodes, which takes as input a sequence rlr2 ..... rp of symbol positions (not necessarily contiguous). The proce- dure outputs all nodes spanning trees (i,j,k,O, with {i, 1} E {rl,r2 ..... ~'ip } and {j,k} E {rl,r I Jr Z,...,rp}. The procedure is initially called with the sequence 012..n corresponding to the input string aa ..... an. The matrix A is updated with every call to this pro- cedure and it is updated with the nodes just realised and also with the nodes in the ASSOC LISTs of the nodes just realised. Procedure Compute Nodes ( rl r2 ..... rp ) Begin 1. Ifp = 2, then a. Compose all nodes E A(rl,j, k, re) with all nodes E A(re,re, re, re), rt < j < k < re. Update A . b. Compose all nodes E A(rl,rl,rl,rx) with all nodes E A(rt, j, k, r2), rt < j < k < re. Update A . e. Check for adjunctions involving nodes re- alised from steps a and b. Update A . d. Return 2. Compute Nodes ( rlr2 ..... rep/a ). 3. Compute Nodes ( rl+p/z ..... rp ). 4. a. Compose nodes realised from step 2 with nodes realised from step 3. b. Update A. 5. a. Check for all possible adjunctions involving the nodes realised as a result of step 4. b. Update A. 6. Compute Nodes ( rlre...rp/arl+2p/a...r p ) 169 End Steps la,lb and 4a can be carried out in the fol- lowing manner : Consider the composition of node ml with node me. For step 4a, there are two cases to take care of. 
Case 1 If node ml in a derived tree is the ancestor of the foot node, and node me is its right sibling, such that ml 6 A(i, j, k, l) and m2 E A(l, r, r, s), then their parent, say node m should belong to A(i,j,k,s). This composition of ml with me can be reduced to a boolean matrix multiplication in the following way: (We use a technique similar to the one used in (Ra- jasekaran, 1995)) Construct two boolean matrices B1, of size ((n 4- 1)2p/3) × (p/3) and Be, of size (p/3) x (p/3). Bl(ijk, l) = 1 iff ml E A(i,j,k,I) and i E {rl, .., rv/3} and 1 E {rl+p/3, ..r2p/3} = 0 otherwise Note that in B1 0 < j < k < n. BeEs ) = 1 iff me e A(I,r, r,s) and 1 E {r1+;13, ..rep/3} and s E {rl+ep/3, .., rp} -- 0 otherwise Clearly the dot product of the ijk th row of B1 with the s th column of Be is a 1 iff m E A(i, j, k, s). Thus, update A(i,j,k, s) with {m} U ASSOC LIST (m). Case 2 If node me in a derived tree is the ancestor of the foot node, and node ml is its left sibling, such that ml E A(i,j,j,l) and m2 E A(l,p, q, r), then their parent, say node m should belong to A(i,p,q,s). This can also be handled similar to the manner de- scribed for case 1. Update A(i,p,q,s) with {m} U ASSOC LIST (m). Notice that Case 1 also covers step la and Case 2 also covers step lb. Step 5a and Step lc can be carried out in the following manner : We know that if a node m E A(i,j,k,i), and the root ml of an auxiliary tree E A(r, i, i, s), then ad- joining the tree 7/, rooted at ml, onto the node m, results in the node m spanning a tree (rj,k,s), i.e. m E A(r, j, k, s). We can essentially use the previous technique of reducing to boolean matrix multiplication. Con- struct two matrices C1 and Ce of sizes (p2/9) x (n + 1) 2 and (n + 1) 2 x (n + 1) 2, respectively, as follows : Cl(ii, jk) = 1 iff 3ml, root of an auxiliary tree E A(i, j, k, l), with same label as m and Cl(il, jk) = 0 otherwise Note that in CI i E {rl,..,rpls}, i E {rl+2p/3 , .., rp}, and 0 _< j < k < n. Ce(qt, rs) = 1 iff m E A(q, r, s, t) -- 0 otherwise Note that inC2 0<q<r<s<t<n. Clearly the dot product of the ii th row of C1 with the rs th column of Ce is a 1 iff m E A(i,r,s,l). Thus, update A(i, r, s, l) with {m} U ASSOC LIST (m). The input string ala2...an is in the language gener- ated by the TAG G iff 3 a node labelled S in some A(O,j,j,n), 0 <_ j < n. 6 Complexity Steps la, lb and 4a can be computed in O(neM(p)). Steps 5a and le can be computed in O((ne/pe)eM(pg)). If T(p) is the time taken by the procedure Compute Nodes, for an input of size p, then T(p) = 3T(2p/3)4-O(n2M(p))4- O( ( ne /pe)e M (pe) ) where n is the initial size of the input string. Solving the recurrence relation, we get T(n) - O(M(ne)). 7 Proof of Correctness We will show the proof of correctness of the algo- rithm by induction on the length of the sequence of symbol positions. But first, we make an observation, given any two symbol positions (r~, rt), rt > r~ 4-1 , and a node m spanning a tree (i,j, k, l) such that i < rs and i _> rt with j and k in any of the possible combinations as shown in figure 4. 3 a node m' which is a descendent of the node m in the tree (i,j,k,l) and which either E ASSOC LIST(ml) or is the same as ml, with ml having one of the two properties mentioned be- low : 1. ml spans a tree (il,jl, kl, 11) such that the last operation to create this tree was a composition operation involving two nodes me and m3 with me spanning (ix, J2, k2, 12) and m3 spanning (12,j3, ks, ix). (with (r, < l~. 
< rt), 01 <- r,), (rt < !1) and either (j2 = kz,j3 = jl,k3 = kl) or (j2 = jl,k2 = kl,j3 = k3) ) 2. ml spans a tree (il,jl, kl, ll) such that the last operation to create this tree was an adjunction by an auxiliary tree (or a grown auxiliary tree) (il, j2, ke, Ix), rooted at node me, onto the node ml spanning the tree (je,jl, kl, k2) such that node me has either the property mentioned in (1) or belongs to the ASSOC LIST of a node 170 I I rs rt j k 2 3 4 j 5 Figure 4: Combinations j k j k k j k of j and k being considered which has the property mentioned in (1). (The labels of ml and me being the same) Any node satisfying the above observation will be called a minimal node w.r.t, the symbol positions (r,, r0. The minimM nodes can be identified in the follow- ing manner. If the node m spans (i,j, k, l) such that the last operation to create this tree is a composition of the form in figure ha, then m tO ASSOC LIST(m) is minimal. Else, if it is as shown in figure 5b, we can concentrate on the tree spanned by node ml and repeat the process. But, if the last operation to cre- ate (i, j, k, 1) was an adjunction as shown in figure 5c, we can concentrate on the tree (il, j, k, 11) ini- tially spanned by node m. If the only adjunction was by an auxiliary tree, on node m spanning tree (Q,j,k, lx) as shown in figure 5d, then the set of minimal nodes will include both m and the root ml of the auxiliary, tree and the nodes in their respec- tive ASSOC LISTs. But if the adjunction was by a grown auxiliary tree as shown in figure he, then the minimal nodes include the roots of/31,/32, ..,/3s, 7 and the node m. Given a sequence < rl,r2,..,rp >, we call (rq,r~+l) a gap, iff rq+l ¢ rq + 1. Identifying min- imal nodes w.r.t, every new gap created, will serve our purpose in determining all the nodes spanning trees (i, j, k, 1), with {i, l} e {rl, r2, .., rp}. Theorem : Given an increasing sequence < rl, r2, .., rp > of symbol positions and given a. V gaps (rq, rq+l), all nodes spanning trees (i,j,k,l} with rq < i < j < k < l < rq+l b. V gaps (rq, rq+l), all nodes spanning trees (i,j,k,l) such that either rq < i < rq+l or rq < l < rq+l c. V gaps (rq,rq+l) , all the minimal nodes for the gap such that these nodes span trees (i,j,k,l) with {i,l} E { rl,r2,..,rp } and i <_ 1 in addition to the initialization information, the algorithm computes all the nodes spanning trees (i,i,k,O with (i,l} ~ { r~,r~,..,rp } and i _< i < k<l. m Proof : Base Cases : For length = 1, it is trivial as this information is already known as a result of initialization. For length = 2, there are two cases to consider : 1. r2 = rl + 1, in which case a composition in- volving nodes from A(rl, rl, rl, rl) with nodes from A(rl, r2, r2, r2) and a composition involv- ing nodes from A(rl, r2, r2, r2) with nodes from A(r2, r2, r2, r2), followed by a check for adjunc- tion involving nodes realised from the previous two compositions, will be sufficient. Note that since there is only one symbol from the input (namely, ar~), and because an auxiliary tree has at least one label from ~, thus, checking for one adjunction is sufficient as there can be at most one adjunction. 2. r2 ~ rl + 1, implies that (rl,r2) is a gap. 
Thus, in addition to the information given as per the theorem, a composition involving nodes from A(r1,j,k,r2) with nodes from A(r2,r2,r2,r2), and a composition involving nodes from A(r1,r1,r1,r1) with nodes from A(r1,j,k,r2), (r1 ≤ j ≤ k ≤ r2), followed by an adjunction involving nodes realised as a result of the previous two compositions, will be sufficient, as the only adjunction to take care of involves the adjunction of some auxiliary tree onto a node m which yields ε, with m ∈ A(r1,r1,r1,r1) or m ∈ A(r2,r2,r2,r2).

[Figure 5: Identifying minimal nodes — panels (5a)-(5e); the grown auxiliary tree in (5e) is formed by adjoining βs, .., β2, β1 onto the root of the grown auxiliary tree γ, where the root of β1 has the property shown in (5a)]

Induction hypothesis: For every increasing sequence < r1, r2, .., rq > of symbol positions of length at most p (i.e., q ≤ p), the algorithm, given the information as required by the theorem, computes all nodes spanning trees (i,j,k,l) such that {i,l} ∈ {r1, r2, .., rq} and i ≤ j ≤ k ≤ l.

Induction: Given an increasing sequence < r1, r2, .., rp, rp+1 > of symbol positions, together with the information required as per parts a, b, c of the theorem, the algorithm proceeds as follows:

1. By the induction hypothesis, the algorithm correctly computes all nodes spanning trees (i,j,k,l) within the first 2/3, i.e. {i,l} ∈ {r1, r2, .., r_{2(p+1)/3}} and i ≤ l. By the hypothesis, it also computes all nodes (i',j',k',l') within the last 2/3, i.e. {i',l'} ∈ {r_{1+(p+1)/3}, .., r_{p+1}} and i' ≤ l'.

2. The composition step involving the nodes from the first and last 2/3 of the sequence < r1, r2, .., rp, rp+1 >, followed by the adjunction step, captures all nodes m such that either

a. m spans a tree (i,j,k,l) such that the last operation to create this tree was a composition operation on two nodes m1 and m2, with m1 spanning (i,j',k',l') and m2 spanning (l',j'',k'',l), with i ∈ {r1, r2, .., r_{(p+1)/3}}, l' ∈ {r_{1+(p+1)/3}, .., r_{2(p+1)/3}}, and l ∈ {r_{1+2(p+1)/3}, .., r_{p+1}}, and either (j' = k', j'' = j, k'' = k) or (j' = j, k' = k, j'' = k''); or

b. m spans a tree (i,j,k,l) such that the last operation to create this tree was an adjunction by an auxiliary or grown auxiliary tree (i,j',k',l), rooted at node m1, onto the node m spanning the tree (j',j,k,k'), such that node m1 has either the property mentioned in (1) or belongs to the ASSOC LIST of a node which has the property mentioned in (1). (The labels of m and m1 being the same.)

Note that, in addition to the nodes m captured from a or b, we will also be realising the nodes in ASSOC LIST(m). The nodes captured as a result of 2 are the minimal nodes with respect to the gap (r_{(p+1)/3}, r_{1+2(p+1)/3}), with the additional property that the trees (i,j,k,l) they span are such that i ∈ {r1, r2, .., r_{(p+1)/3}} and l ∈ {r_{1+2(p+1)/3}, .., r_{p+1}}.

Before we can apply the hypothesis on the sequence < r1, r2, .., r_{(p+1)/3}, r_{1+2(p+1)/3}, .., r_{p+1} >, we have to make sure that the conditions in parts a, b, c of the theorem are met for the new gap (r_{(p+1)/3}, r_{1+2(p+1)/3}). It is easy to see that the conditions for parts a and b are met for this gap. We have also seen that, as a result of step 2, all the minimal nodes w.r.t. the gap (r_{(p+1)/3}, r_{1+2(p+1)/3}) with the desired property as required in part c have been computed.
Thus, applying the hypothesis on the sequence < r1, r2, .., r_{(p+1)/3}, r_{1+2(p+1)/3}, .., r_{p+1} >, the algorithm in the end correctly computes all the nodes spanning trees (i,j,k,l) with {i,l} ∈ {r1, r2, .., r_{p+1}} and i ≤ j ≤ k ≤ l. □

8 Implementation

The TAL recognizer given in this paper was implemented in Scheme on a SPARCstation 10/30. Theoretical results in this paper and those in (Rajasekaran, 1995) clearly demonstrate that asymptotically fast algorithms can be obtained for TAL parsing with the help of matrix multiplication algorithms. The main objective of the implementation was to check whether matrix multiplication techniques also help in practice to obtain efficient parsing algorithms.

The recognizer implemented two different algorithms for matrix multiplication, namely the trivial cubic time algorithm and an algorithm that exploits the sparsity of the matrices. The TAL recognizer that uses the cubic time algorithm has a run time comparable to that of Vijayashanker-Joshi's algorithm. Below is given a sample of a grammar tested and also the speedup of the sparse version over the ordinary version. The grammar used generated the TAL a^n b^n c^n; this grammar is shown in Figure 1. Interestingly, the sparse version is an order of magnitude faster than the ordinary version for strings of length greater than 7.

String       Answer   Speedup
abc          Yes      3.1
aabbcc       Yes      6.1
aabcabc      No       8.0
abacabac     No       11.7
aaabbbccc    Yes      11.4

The above implementation results suggest that even in practice better parsing algorithms can be obtained through the use of matrix multiplication techniques.
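The paper does not spell out the sparse variant. Purely as an illustration of the kind of optimisation involved (not necessarily the authors' Scheme implementation), a boolean product can skip zero entries by storing each row as the set of its nonzero column indices:

```python
# Illustration only: boolean matrix product over sparse 0/1 matrices,
# each row stored as the set of its nonzero column indices.

def sparse_bool_matmul(rows_a, rows_b):
    """rows_a[i] = set of columns j with A[i][j] = 1;
    rows_b[j] = set of columns k with B[j][k] = 1.
    Returns the rows of the boolean product A*B in the same format."""
    result = []
    for nonzero_a in rows_a:
        out = set()
        for j in nonzero_a:          # only the 1-entries of this row contribute
            out |= rows_b[j]
        result.append(out)
    return result

# small 3x3 example
A = [{0, 2}, set(), {1}]
B = [{1}, {0, 2}, {2}]
print(sparse_bool_matmul(A, B))      # [{1, 2}, set(), {0, 2}]
```

The work done is proportional to the number of 1-entries actually encountered, which is why the gain over the dense product grows with the sparsity of the B1/B2 and C1/C2 matrices.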
9 Conclusions

In this paper we have presented an O(M(n^2)) time algorithm for parsing TALs, n being the length of the input string. We have also demonstrated with our implementation work that matrix multiplication techniques can help us obtain efficient parsing algorithms.

Acknowledgements

This research was supported in part by an NSF Research Initiation Award CCR-92-09260 and an ARO grant DAAL03-89-C-0031.

References

D. Coppersmith and S. Winograd, Matrix Multiplication Via Arithmetic Progressions, in Proc. 19th Annual ACM Symposium on Theory of Computing, 1987, pp. 1-6. Also in Journal of Symbolic Computation, Vol. 9, 1990, pp. 251-280.

S.L. Graham, M.A. Harrison, and W.L. Ruzzo, On Line Context Free Language Recognition in Less than Cubic Time, Proc. ACM Symposium on Theory of Computing, 1976, pp. 112-120.

A.K. Joshi, L.S. Levy, and M. Takahashi, Tree Adjunct Grammars, Journal of Computer and System Sciences, 10(1), 1975.

A.K. Joshi, K. Vijayashanker and D. Weir, The Convergence of Mildly Context-Sensitive Grammar Formalisms, Foundational Issues of Natural Language Processing, MIT Press, Cambridge, MA, 1991, pp. 31-81.

A. Kroch and A.K. Joshi, Linguistic Relevance of Tree Adjoining Grammars, Technical Report MS-CS-85-18, Department of Computer and Information Science, University of Pennsylvania, 1985.

M. Palis, S. Shende, and D.S.L. Wei, An Optimal Linear Time Parallel Parser for Tree Adjoining Languages, SIAM Journal on Computing, 1990.

B.H. Partee, A. Ter Meulen, and R.E. Wall, Studies in Linguistics and Philosophy, Vol. 30, Kluwer Academic Publishers, 1990.

S. Rajasekaran, TAL Parsing in o(n^6) Time, to appear in SIAM Journal on Computing, 1995.

G. Satta, Tree Adjoining Grammar Parsing and Boolean Matrix Multiplication, presented at the 31st Meeting of the Association for Computational Linguistics, 1993.

G. Satta, Personal Communication, September 1993.

Y. Schabes and A.K. Joshi, An Earley-Type Parsing Algorithm for Tree Adjoining Grammars, Proc. 26th Meeting of the Association for Computational Linguistics, 1988.

L.G. Valiant, General Context-Free Recognition in Less than Cubic Time, Journal of Computer and System Sciences, 10, 1975, pp. 308-315.

K. Vijayashanker and A.K. Joshi, Some Computational Properties of Tree Adjoining Grammars, Proc. 24th Meeting of the Association for Computational Linguistics, 1986.

| 1995 | 23 |
Extraposition via Complex Domain Formation* Andreas Kathol and Carl Pollard Dept. of Linguistics , 1712 Neil Ave. Ohio State University Columbus, OH 43210, USA {kathol, pollard}©ling, ohio-stat e. edu Abstract We propose a novel approach to extraposi- tion in German within an alternative con- ception of syntax in which syntactic struc- ture and linear order are mediated not via encodings of hierarchical relations but in- stead via order domains. At the heart of our proposal is a new kind of domain for- mation which affords analyses of extrapo- sition constructions that are linguistically more adequate than those previously sug- gested in the literature. 1 Linearization without phrase structure Recent years have seen proposals for the elimina- tion of the phrase structure component in syntax in favor of levels of representation encompassing possi- bly nonconcatenative modes of serialization (Dowty, In press; Reape, 1993; Reape, 1994; Pollard et al., 1993). Instead of deriving the string representation from the yield of the tree encoding the syntactic structure of that sentence (as, for instance in GPSG, LFG, and--as far as the relationship between S- structure and PF, discounting operations at PF, is concerned--GB), these proposals suggest deriving the sentential string via a recursive process that op- erates directly on encodings of the constituent order of the subconstituents of the sentence. In Reape's proposal, which constitutes an extension of HPSG (Pollard and Sag, 1994), this information is con- tained in "(Word) Order Domains". On the other hand, the way that the surface representation is put together, i.e. the categories that have contributed to the ultimate string and the grammatical depen- dency relations (head-argument, head-adjunct, etc.) holding among them, will be called the "composi- tion structure" of that sentence, represented below by means of unordered trees. *Thanks to Bob Kasper for helpful discussions and suggestions. As an example, consider how a German V1 sen- tence, e.g. a question or conditional clause, is derived in such a system. 1 (1) Las Karl dasBuch read Karl the book E.g.: 'Did Karl read the book?' The representation in Figure 1 involves a number of order domains along the head projection of the clause ([1]-[3]). Each time two categories are com- bined, a new domain is formed from the domains of the daughters of that node, given as a list value for the feature DOM. While the nodes in the deriva- tion correspond to signs in the HPSG sort hierarchy (Pollard and Sag, 1994), the elements in the order domains, which we will refer to as domain objects, will minimally contain categorial and phonological information (the latter given in italics within angled brackets). The value of the DOM attribute thus con- sists of a list of domain objects. Ordering is achieved via linear precedence (LP) statements. In Reape's approach, there are in essence two ways in which a sign's DOM value can be integrated into that of its mother. When combining with its ver- bal head, a nominal argument such as das Buch in Figure 1 in general gives rise to a single domain ele- ment, which is "opaque" in the sense that adjacency relations holding within it cannot be disturbed by subsequent intervention of other domain objects. In contrast, some constituents contribute the contents of their order domains wholesale into the mother's domain. Thus, in Figure 1, both elements of the VP ([2]) domain become part of the higher clausal ([1]) domain. 
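To make the two modes of integration concrete, the sketch below renders domain objects as simple (category, phonology) pairs. It is an illustration only: the category labels and helper functions are invented for the example and are not part of the HPSG formalism or of the analysis argued for here.

```python
# Illustration only: domain objects reduced to (category, phonology) pairs.

def compact(cat, dom):
    """Collapse a sign's order domain into one opaque domain object whose
    phonology is the concatenation of its members' phonologies."""
    return (cat, [w for _, words in dom for w in words])

# The NP das Buch enters the VP domain as a single, opaque element ...
np_dom = [("Det", ["das"]), ("N", ["Buch"])]
vp_dom = [("V[+inv]", ["las"]), compact("NP[acc]", np_dom)]

# ... whereas the VP's own domain objects are contributed wholesale to the
# clausal domain, where LP statements (not modelled here) fix their order:
s_dom = [("V[+inv]", ["las"]), ("NP[nom]", ["Karl"]), compact("NP[acc]", np_dom)]
```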
As a result, order domains allow elements that are not sisters in composition structure to be linearly ordered with respect to each other, contrary 1In Kathol and Pollard (1995), we argue for dispens- ing with binary-valued features such as INV(ERTED) or EXTRA(POSED) in favor of a multi-valued single feature TOPO(LOGY) which imposes a partition on the set of do- main elements of a clause according to membership in Topological Fields (see also Kathol (In progress)). Since nothing in the present proposal hinges on this detail, we keep with the more common binary features. 174 [1] I S -- V[SUBCAT O] DOM/ [ (las) ] (Karl) \ Lv[+'NV] ] [ ] ' NP[NOM] ' ( das Buch) rNP[NOM] [4] [DOM ([(KarO]) ] = V[SUBCAT (NP[NOM])] 1 [2] /DOM/ [("as) 1 ru,,., B,.,ch)] \/ L \ Lv[-FINV]j ' t NP[ACC] ] /J ,. . . ~ rVrSUB¢~T~'--T~rNO~I, ',,,,,,c,,,,,l ,3, ]} Figure h Derivation of V1 clause using order domains NP[ACC])] to ordinary HPSG, but in the spirit of "liberation" metarules (Zwicky, 1986). With Reape we assume that one crucial mecha- nism in the second type of order domain formation is the shuffle relation (Reape's sequence union), which holds of n lists L1, ..., L,-1, L,, iff L, consists of the elements of the first n-1 lists interleaved in such a way that the relative order among the original mem- bers of L1 through L,-1, respectively, is preserved in Ln. As a consequence, any precedence (but not ad- jacency) relations holding of domain elements in one domain are also required to hold of those elements in all other order domains that they are members of, which amounts to a monotonicity constraint on deriving linear order. Hence, if [1] in Figure 1 were to be expanded in the subsequent derivation into a larger domain (for instance by the addition of a sentential adverb), the relative order of subject and object in that domain could not be reversed within the new domain. The data structure proposed for domains in Reape (1993) is that of a list of objects of type sign. However, it has been argued (Pollard et al., 1993) that signs contain more information than is desirable for elements of a domain. Thus, a sign encodes its internal composition structure via its DAUGHTERS attribute, while its linear composition is available as the value of DOM. Yet, there are no known LP con- straints in any language that make reference to these types of information. We therefore propose an im- poverished data structure for elements of order do- mains which only consists of categorial and seman- tic information (viz. the value of SYNSEM (Pollard and Sag, 1994)) and a phonological representation. This means that whenever a constituent is addedto a domain as a single element, its information con- tent will be condensed to categorial and phonolog- ical information. 2 The latter is constrained to be the concatenation of the PHONOLOGY values of the domain elements in the corresponding sign's order 2For expository convenience, semantic information is systematically ignored in this paper. domain. We will refer to the relation between a sign S and its representation as a single domain object O as the compaction, given informally in (2): 3 (2) compaction([i-],El ) rsig. ] 53:/sYNSZM if] LD°" ([PHON [ 4.~],...,[PHON[~) [ dom-obj ] A ~: I s,,N~E~,~I~ LPHONI,Io ... 
o r-;-I To express this more formally, let us now define an auxiliary relation, joinF, which holds of two lists L1 and L2 only if L2 is the concatenation of val- ues for the feature F of the elements in L1 in the same order: 4 (3) joinF([Y],[~) -- (V]: 0 A [7]: O) V (cons(IF (El)], [-~-],[~ A joinF([?],[~) A append([';'],r~,[~]) ) This allows us to define compaction more precisely as in (4): (4) compaction([-i-],[~) _--- [sign ] ~:/sYNSEM ~/ LDOM~ J r dom-obj ] ^ ~: I SYNSE~,~-..I~I LPHON sL~ .] A joinp//oN ([7],[~) 3Here, "o" is a convenient functional notation for the append relation. 4Here cons is the relational analogue of the LISP function cons; i.e. cons holds among some element E and two lists L1 and L2 only if the insertion of E at the beginning of L1 yields L2. 175 VP "-" V[SUBCAT (NP)] ] I r/,zasBuch)] >] DoM j L D°M ([(das)], [(Buch)])] [DOM~( [V[+,NV] ] > A compaction([~],[~]) ^ shuffle(q , E], [D Figure 2: Domain formation using compaction and shuffle Given compaction and the earlier shnffle relation, the construction of the intermediate VP domain can be thought of as involving an instance of the Head- Complement Schema (Pollard and Sag, 1994), aug- mented with the relevant relational constraints on domain formation, as shown in Figure 2. 2 Extraposition via Order Domains Order domains provide a natural framework for or- der variation and discontinuous constituency. One of the areas in which this approach has found a natural application is extraposition of various kinds of con- stituents. Reape (1994) proposes the binary-valued feature EXTRA to order an extraposed VP last in the domain of the clause, using the LP statement in (5): (5) [--EXTRA] "~ [+EXTRA] Similarly, Nerbonne (1994) uses this feature to account for instance for extrapositions of relative clauses from NPs such as (6); the composition struc- ture proposed by Nerbonne for (6)is given in Fig- ure 3. (6) einen Hund fiittern [der Hunger hat] a dog feed that hunger has 'feed a dog that is hungry' The structure in Figure 3 also illustrates the fea- ture UNIONED, which Reape and Nerbonne assume to play a role in domain formation process. Thus, a constituent marked [UNIONED -Jr-] requires that the contents of its domain be shuffled into the domain of a higher constituent that it becomes part of (i.e. it is domain-unioned). For instance, in Figure 3, the [UNIONED +] specification on the higher NP occa- sions the VP domain to comprise not only the verb, but also both domain objects of the NP. Conversely, a [UNIONED --] marking in Reape's and Nerbonne's system effects the insertion of a single domain ob- ject, corresponding to the constituent thus specified. Therefore, in Figure 3, the internal structure of the relative clause domain becomes opaque once it be- comes part of the higher NP domain. 3 Shortcomings of Nerbonne's analysis One problematic aspect of Nerbonne's proposal con- cerns the fact that on his account, the extraposabil- ity of relative clauses is directly linked to the Head- Adjunct Schema that inter alia licenses the combi- nation of nominals with relative clauses. However, whether a clause can be extraposed is independent of its adjunct/complement status within the NP. Thus, (7) illdstrates the extraposition of a comple- ment clause (Keller, 1994): (7) Planck hat die Entdeckung gemacht Planck has the discovery made [dab Licht Teilchennatur hat]. that light particle.nature has 'Planck made the discovery that light has a particle nature.' 
The same also holds for other kinds of extraposable constituents, such as VPs and PPs. On Nerbonne's analysis, the extraposability of complements has to be encoded separately in the schema that licenses head-complement structures. This misses the gen- eralization that extraposability of some element is tied directly to the final occurrence within the con- stituent it is dislocated from. s Therefore, extrapos- ability should be tied to the linear properties of the constituent in question, not to its grammatical func- tion. A different kind of problem arises in the case of ex- tractions from prepositional phrases, as for instance in (S): (8) an einen Hund denken [der Hunger hat] of a dog think that hunger has 'think of a dog that is hungry' On the one hand, there has to be a domain object for an einen Hund in the clausal domain because this SNote that final occurrence is a necessary, but not sufficient condition. As is noted for instance in Keller (1994), NP complements (e.g. postnominal geni- tives) cannot be extraposed out of NPs despite their final occurrence. We attribute this fact to a general constraint against extraposed NPs in clauses, except for adverbial accusative NPs denoting time intervals. 176 VP oo.( <.r ], ], )] ~_. I r (einen Hund) ] [ ( der Hunger hat)| ] "REL-S UNIONED - [NP ] EXTRA"~- DoM ([ (eine.)1, [(Hund) ]) ool,,, ( [<Jet)] <Hunger) . ],[<v',O,>]) Figure 3: Extraposition of relative clause in Nerbonne 1994 [VoM element is subject to the same variations in linear order as PPs in general. On the other hand, the attachment site of the preposition will have to be higher than the relative clause because clearly, the relative clause modifies the nominal, but not the PP. As a potential solution one may propose to have the preposition directly be "integrated" (phonologi- cally and in terms of SYNSEM information) into the NP domain object corresponding to einen Hund. However, this would violate an implicit assumption made in order domain-based approaches to lineariza- tion to the effect that domain objects are inalterable. Hence, the only legitimate operations involve adding elements to an order domain or compacting that do- main to form a new domain object, but crucially, op- erations that nonmonotonically change existing do- main objects within a domain are prohibited. 4 Partial compaction In this section, we present an alternative to Ner- bonne's analysis based on an extension of the pos- sibilities for domain formation. In particular, we propose that besides total compaction and domain union, there is a third possibility, which we will call partial compaction. In fact, as will become clear be- low, total compaction and partial compatcion are not distinct possibilities; rather, the former is a sub- case of the latter. Intuitively, partial compaction allows designated domain objects to be "liberated" into a higher do- main, while the remaining elements of the source domain are compacted into a single domain object. To see how this improves the analysis of extraposi- tion, consider the alternative analysis for the exam- ple in (6), given in Figure 4. As shown in Figure 4, we assume that the or- der domain within NPs (or PPs) is essentially flat, and moreover, that domain objects for NP-internal prenominal constituents are prepended to the do- main of the nominal projection so that the linear string is isomorphic to the yield of the usual right- branching analysis trees for NPs. 
Adjuncts and complements, on the other hand, follow the nomi- nal head by virtue of their ["t-EXTRA] specification, which also renders them extraposable. If the NP combines with a verbal head, it may be partially compacted. In that case, the relative clause's do- main object (El) is inserted into the domain of the VP together with the domain object consisting of the same SYNSEM value as the original NP and that NP's phonology minus the phonology of the relative clause ([~]). By virtue of its [EXTRA "~-] marking, the domain object of the relative clause is now ordered last in the higher VP domain, while the remnant NP is ordered along the same lines as NPs in general. One important aspect to note is that on this ap- proach, the inalterability condition on domain ob- jects is not violated. Thus, the domain object of the relative clause ([~ in the NP domain is token- identical to the one in the VP domain. Moreover, the integrity of the remaining NP's domain object is not affected as--unlike in Nerbonne's analysis-- there is no corresponding domain object in the do- main of the NP before the latter is licensed as the complement of the verb fattern. In order to allow for the possibility of partially compacting a domain by replacing the compaction relation of (4) by the p-compaction relation, which is defined as follows: 177 I VP , IZ]/REL-~ L EXTRA -4- [] /r,e,ne.,] ] r' er n,erha"] DET [~oM ([(#,,n~)])] ^ p-compaction(l-i-l,[Z], lID) ^ shume(I[Zl), (~,l'q,~ I REL-S EXTRA + v,.\ LR~.'. J' uP [,:.')])] Figure 4: Extraposition via partial compaction (9) p-compaction ([~],[~],[~) [ sign "1 LDOMIZ] J [ dora-oh1 ] ^ [~: 1~_5]1 [PHON 7LT.J J ^ shume(m,[],~ A joineHoN (~J,[L]) Intuitively, the p-compaction relation holds of a sign S (~]), domain object O ([~, and a list of domain objects L (~]) only if O is t~-e compaction of S with L being a llst of domain objects "liberated" from the S's order domain. This relation is invoked for instance by the schema combining a head (H) with a complement (C): (10) [ i ~ ] [I-I:] [DOMF~ ] [C:] [] A p- compaction ([~],~],[~]) ^ shume(([~,ff],[E,ff]) ^ [B: zist ( [s~NSEM [EXTRA +]]) [ [ HEAD verb ^([]: <> v [ :LS'N EM-Lsu. AT <>]]) The third constraint associated with the Head- Complement Schema ensures that only those ele- ments that are marked as [EXTRA -t-]) within the smaller constituent can be passed into the higher do- main, while the last one prevents extraposition out of clauses (cf. Ross' Right Roof Constraint (Ross, 1967)). This approach is superior to Nerhonne's, as the extraposability of an item is correlated onlywith its linear properties (right-peripheral occurrence in a domain via [EXTRA +]), but not with its sta- tus as adjunct or complement. Our approach also makes the correct prediction that extraposition is only possible if the extraposed element is already final in the extraposition source. 6 In this sense, ex- traposition is subject to a monotonicity condition to the effect that the element in question has to occur in the same linear relationship in the smaller and the larger domains, viz. right-peripherally (modulo other extraposed constituents). This aspect clearly favors our approach over alternative proposals that treat extraposition in terms of a NONLOCAL depen- dency (Keller, 1994). In approaches of that kind, there is nothing, for example, to block extraposition of prenominal elements. Our approach allows an obvious extension to the case of extraposition from PPs which are prob- lematic for Nerbonne's analysis. 
Prepositions are prepended to the domain of NPs in the same way 6It should be pointed out that we do not make the as- sumption, often made in transformational grammar, that cases in which a complement (of a verb) can only occur extraposed necessitates the existence of an underlying non-extraposed structure that is never overtly realized. 178 Iv. )] DOM[5-']i [][ (Neienen Hund der ftunger hai) ], [ (vf£Ltteru) ] ] [ (einen) [] DoM T , A p-compaction(I-F],[], 0) ^ shutne(([~,0 ,[],El) Figure 5: Total compaction as a special case of compaction that determiners are to N domains. Along similar lines, note that extrapositions from topicalized constituents, noted by Nerbonne as a challenge for his proposal, do not pose a problem for our account. (11) Eine Dame ist an der Tiir a lady is at the door [die Sie sprechen will]. who you speak wants 'A lady is at the door who wants to talk to you.' If we assume, following Kathol (In progress), that topicalized constituents are part of the same clausal domain as the rest of the sentence, 7 then an ex- traposed domain object, inherited via partial com- paction from the topic, will automatically have to occur clause-finally, just as in the case of extraposi- tion from regular complements. So far, we have only considered the case in which the extraposed constituent is inherited by the higher order domain. However, the definition of the p- compaction relation in (12) also holds in the case where the list of liberated domain objects is empty, which amounts to the total compaction of the sign in question. As a result, we can regard total com- paction as a special case of the p-compaction relation in general. This means that as an alternative lin- earization of (6), we can also have the extraposition- less analysis in Figure 5. Therefore, there is no longer a need for the UNIONED feature for extraposition. This means that we can have a stronger theory as constraints on ex- traposability will be result of general conditions on the syntactic licensing schema (e.g. the Right Roof Constraint in (10)). But this means that whether or not something can be extraposed has been rendered exempt from lexical variation in principle---unlike in Reape's system where extraposability is a matter of lexical selection. rI.e. the initial placement of a preverbal constituent in a verb-second clause is a consequence of LP constraints within a flat clausal order domain. Moreover, while Reape employs this feature for the linearization of nonfinite complementation, it can be shown that the Argument Composition approach of Hinrichs & Nakazawa (Hinrichs and Nakazawa, 1994), among many others, is linguisti- cally superior (Kathol, In progress). As a result, we can dispense with the UNIONED feature altogether and instead derive linearization conditions from gen- eral principles of syntactic combination that are not subject to lexical variation. 5 Conclusion We have argued for an approach to extraposition from smaller constituents that pays specific atten- tion to the linear properties of the extraposition source, s To this end, we have proposed a more fine- grained typology of ways in which an order domain can be formed from smaller constituents. Crucially, we use relational constraints to define the interde- pendencies; hence our approach fits squarely into the paradigm in which grammars are viewed as sets of relational dependencies that has been advocated for instance in DSrre et al. (1992). 
Since the relational perspective also lies at the heart of computational formalisms such as CUF (DSrre and Eisele, 1991), the ideas presented here are expected to carry over into practical systems rather straightforwardly. We leave this task for future work. References Jochen DSrre and Andreas Eisele. 1991. A Compre- hensive Unification-Based Grammar Formalism. DYANA Deliverable R3.1.B, ESPRIT Basic Ac- tion BR3175. Jochen DSrre, Andreas Eisele, and Roland Seif- fert. 1992. Grammars as Relational Dependen- cies. AIMS Report 7, Institut fiir maschinelle Sprachverarbeitung, Stuttgart. 8 For similar ideas regarding English, see Stucky (1987). 179 David Dowty. In press. Towards a Minimalist The- ory of Syntactic Structure. In Horck and Sijtsma, editors, Discontinuous Constituency. Mouton de Gruyter. Erhard Hinrichs and Tsuneko Nakazawa. 1994. Linearizing finite AUX in German Verbal com- plexes. In John Nerbonne, Klaus Netter, and Carl Pollard, editors, German in Head-Driven Phrase Structure Grammar, pages 11-38. Stanford: CSLI Publications. Andreas Kathol and Carl Pollard. 1995. On the Left Periphery of German Subordinate Clauses. In West Coast Conference on Formal Linguistics, volume 14, Stanford University. CSLI Publica- tions/SLA. Andreas Kathol. In progress. Linearization-Based German Syntax. Ph.D. thesis, Ohio State Univer- sity. Frank Keller. 1994. Extraposition in HPSG. un- publ. ms., IBM Germany, Scientific Center Hei- delberg. John Nerbonne. 1994. Partial verb phrases and spu- rious ambiguities. In John Nerbonne, Klaus Net- ter, and Carl Pollard, editors, German in Head- Driven Phrase Structure Grammar, pages 109- 150. Stanford: CSLI Publications. Carl J. Pollard and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. CSLI Publications and University of Chicago Press. Carl Pollard, Robert Levine, and Robert Kasper. 1993. Studies in Constituent Ordering: Toward a Theory of Linearization in Head-Driven Phrase Structure Grammar. Grant Proposal to the Na- tional Science Foundation, Ohio State University. Mike Reape. 1993. A Formal Theory of Word Or- der: A Case Study in West Germanic. Ph.D. the- sis, University of Edinburgh. Mike Reape. 1994. Domain Union and Word Or- der Variation in German. In John Nerbonne, Klaus Netter, and Carl Pollard, editors, German in Head-Driven Phrase Structure Grammar, pages 151-198. Stanford: CSLI Publications. John Ross. 1967. Constraints on Variables in Syn- tax. Ph.D. thesis, MIT. Susan Stucky. 1987. Configurational Variation in English: A Study of Extraposition and Re- lated Matters. In Discontinuous Constituency, volume 20 of Syntax and Semantics, pages 377- 404. Academic Press, New York. Arnold Zwicky. 1986. Concatenation and libera- tion. In Papers from the 22nd Regional Meeting, Chicago Linguistic Society, pages 65-74. 180 | 1995 | 24 |
Statistical Sense Disambiguation with Relatively Small Corpora Using Dictionary Definitions Microsoft Institute North Ryde, NSW 2113, Australia [email protected] Alpha K. Luk Department of Computing Macquarie University NSW 2109, Australia Abstract Corpus-based sense disambiguation methods, like most other statistical NLP approaches, suffer from the problem of data sparseness. In this paper, we describe an approach which overcomes this problem using dictionary definitions. Using the definition- based conceptual co-occurrence data collected from the relatively small Brown corpus, our sense disambiguation system achieves an average accuracy comparable to human performance given the same contextual information. 1 Introduction Previous corpus-based sense disambiguation methods require substantial amounts of sense-tagged training data (Kelly and Stone, 1975; Black, 1988 and Hearst, 1991) or aligned bilingual corpora (Brown et al., 1991; Dagan, 1991 and Gale et al. 1992). Yarowsky (1992) introduces a thesaurus-based approach to statistical sense disambiguation which works on monolingual corpora without the need for sense-tagged training data. By collecting statistical data of word occurrences in the context of different thesaurus categories from a relatively large corpus (10 million words), the system can identify salient words for each category. Using these salient words, the system is able to disambiguate polysemous words with respect to thesaurus categories. Statistical approaches like these generally suffer from the problem of data sparseness. To estimate the salience of a word with reasonable accuracy, the system needs the word to have a significant number of occurrences in the corpus. Having large corpora will help but some words are simply too infrequent to make a significant statistical contribution even in a rather large corpus. Moreover, huge corpora are not generally available in all domains and storage and processing of very huge corpora can be problematic in some cases.Z In this paper, we describe an approach which attacks the problem of. data sparseness in automatic statistical sense disambiguation. Using definitions from LDOCE (Longman Dictionary of Contemporary English; Procter, 1978), co- occurrence data of concepts, rather than words, is collected from a relatively small corpus, the one million word Brown corpus. Since all the definitions in LDOCE are written using words from the 2000 word controlled vocabulary (or in our terminology, defining concepts), even our small corpus is found to be capable of providing statistically significant co- occurrence data at the level of the defining concepts. This data is then used in a sense disambiguation system. The system is tested on twelve words previously discussed in the sense disambiguation literature. The results are found to be comparable to human performance given the same contextual information. 2 Statistical Sense Disambiguation Using Dictionary Definitions It is well known that some words tend to co-occur with some words more often than with others. Similarly, looking at the meaning of the words, one should find that some concepts co-occur more often with some concepts than with others. For example, the concept crime is found to co-occur frequently with the concept punishment. This kind of conceptual relationship is not always reflected at the lexical level. For instance, in legal reports, the Statistical data is domain dependent. 
Data extracted from a corpus of one particular domain is usually not very useful for processing text of another domain. 181 concept crime will usually be expressed by words like offence or felony, etc., and punishment will be expressed by words such as sentence, fine or penalty, etc. The large number of different words of similar meaning is the major cause of the data sparseness problem. The meaning or underlying concepts of a word are very difficult to capture accurately but dictionary definitions provide a reasonable representation and are readily available. 2 For instance, the LDOCE definitions of both offence and felony contain the word crime, and all of the definitions of sentence, fine and penalty contain the word punishment. To disambiguate a polysemous word, a system can select the sense with a dictionary definition containing defining concepts that co-occur most frequently with the defining concepts in the definitions of the other words in the context. In the current experiment, this conceptual co-occurrence data is collected from the Brown corpus. 2.1 Collecting Conceptual Co-occurrence Data Our system constructs a two-dimensional table which records the frequency of co-occurrence of each pair of defining concepts. The controlled vocabulary provided by Longman is a list of all the words used in the definitions but, in its crude form, it does not suit our purpose. From the controlled vocabulary, we manually constructed a list of 1792 defining concepts. To minimise the size of the table and the processing time, all the closed class words and words which are rarely used in definitions (e.g., the days of the week, the months) are excluded from the list. To strengthen the signals, words which have the same semantic root are combined as one element in the list (e.g., habit and habitual are combined as {habit, habitual}). The whole LDOCE is pre-processed first. For each entry in LDOCE, we construct its corresponding conceptual expansion. The conceptual expansion of an entry whose headword is not a defining concept is a set of conceptual sets. Each conceptual set corresponds to a sense in the entry and contains all the defining concepts which occur in the definition of the sense. The entry of the noun sentence and its corresponding conceptual expansion 2 Manually constructed semantic frames could be more useful computationally but building semantic frames for a huge lexicon is an extremely expensive exercise. are shown in Figure 1. If the headword of an entry is a defining concept DC, the conceptual expansion is given as {{DC}}. The corpus is pre-segrnented into sentences but not pre-processed in any other way (sense-tagged or part-of-speech-tagged). The context of a word is defined to be the current sentence) The system processes the corpus sentence by sentence and collects conceptual co-occurrence data for each defining concept which occurs in the sentence. This allows the whole table to be constructed in a single run through the corpus. Since the training data is not sense tagged, the data collected will contain noise due to spurious senses of polysemous words. Like the thesaurus- based approach of Yarowsky (1992), our approach relies on the dilution of this noise by their distribution through all the 1792 defining concepts. Different words in the corpus have different numbers of senses and different senses have definitions of varying lengths. 
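For concreteness, the following sketch shows how a conceptual expansion of this kind might be assembled from an entry's sense definitions. The tiny concept list and normalisation table are invented stand-ins for the 1792 defining concepts derived from the LDOCE controlled vocabulary, so the sketch illustrates the data structure rather than reproducing the actual system.

```python
# Illustration only: the conceptual expansion of an entry as a list of
# conceptual sets, one per sense.  The concept list and normalisation table
# below are toy stand-ins, not the actual 1792 defining concepts.

DEFINING_CONCEPTS = {"order", "judge", "punish", "crime", "criminal", "court",
                     "group", "word", "statement", "question", "subject", "verb"}
NORMALISE = {"punishment": "punish", "words": "word"}   # cf. {habit, habitual}

def conceptual_set(definition):
    """Map one sense definition to the set of defining concepts it contains."""
    concepts = set()
    for token in definition.lower().split():
        token = NORMALISE.get(token, token)
        if token in DEFINING_CONCEPTS:
            concepts.add(token)
    return concepts

def conceptual_expansion(sense_definitions):
    """One conceptual set per sense of the entry."""
    return [conceptual_set(d) for d in sense_definitions]

sentence_entry = ["a punishment for a criminal found guilty in court",
                  "a group of words that forms a statement or question"]
print(conceptual_expansion(sentence_entry))
# e.g. [{'punish', 'criminal', 'court'}, {'group', 'word', 'statement', 'question'}]
```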
The principle adopted in collecting co-occurrence data is that every pair of content words which co-occur in a sentence should have equal contribution to the conceptual co- occurrence data regardless of the number of definitions (senses) of the words and the lengths of the definitions. In addition, the contribution of a word should be evenly distributed between all the senses of a word and the contribution of a sense should be evenly distributed between all the concepts in a sense. The algorithm for conceptual co- occurrence data collection is shown in Figure 2. 2.2 Using the Conceptual Co-occurrence Data for Sense Disambiguation To disambiguate a polysemous word W in a context C, which is taken to be the sentence containing W, the system scores each sense S of W, as defined in LDOCE, with respect to C using the following equations. score(S, C) = score(CS, C') - score(CS, GlobalCS) [1] where CS is the corresponding conceptual set of S, C' is the set of conceptual expansions of all content words (which are defined in LDOCE) in C and GlobalCS is the conceptual set containing all the 1792 defining concepts. 3 The average sentence length of the Brown corpus is 19.4 words. 182 Entry in LDOCE 1. (an order given by a judge which fixes) a punishment for a criminal found guilty in court 2. a group of words that forms a statement, command, exclamation, or question, usu. contains a subject and a verb, and (in writing) begins with a capital letter and ends with one of the marks. ! ? conceptual expansion { {order, judge, punish, crime, criminal, fred, guilt, court}, {group, word, form, statement, command, question, contain, subject, verb, write, begin, capital, letter, end, mark} } Figure 1. The entry of sentence (n.) in LDOCE and its corresponding conceptual expansion 1. Initialise the Conceptual Co-occurrence Data Table (CCDT) with initial value of 0 for 2. For each sentence S in the corpus, do a. Construct S', the set of conceptual expansions of all content words (which are defined in LDOCE) in S. b. For each unique pair of conceptual expansions (CE~, CEj) in S', do For each defining concept DC~mp in each conceptual set CS~m in CE~, do For each defining concept DCjnq in each conceptual set CSj, in CEj, do increase the values of the cells CCDT(DCimp, DCjnq) and CCDT(DCjnq, Dcirnp) by the product of w(DCimp) and w(DCjnq) where w(DCxyz) is the weight of DCxyz given by ! w(DC~ ) = ICE, I, IC%I each cell. Figure 2. The algorithm for collecting conceptual co-occurrence data score< CS, C'> = ve~S, core< CS, CE'> /I C'] for any concp, set CS and concp, exp. set C' [2] score(CS, CE') = max score(CS,CS') C8'~C£' for any concp, set CSand concp, exp. CE' [31 score( CS, CS') = voe'.es' ~'sc°re( eS'DC') /ICS'[ for any concp, sets CS and CS' [4] score(CS, DC')= ~f~ score(DC, DC') /[CS[ for any concp, set CS and def. concept DC' [5] score( DC, DC' ) = max(0, I ( DC, DC' )) for any def. concepts DC and DC' [6] I(DC, DC') is the mutual information 4 (Fano, 1961) between the 2 defining concepts DC and DC' given by: I(x,y) --- log s P(x,y) P(x). P(y) f(x,y).N I°g2 f(x). f(y) (using the Maximum Likelihood Estimator). f(x,y) is looked up directly from the conceptual co- occurrence data table, fix) and f(y) are looked up from a pre-constructed list off(DC) values, for each defining concept DC: f(OC) = ~_,f(DC, DC') VDC' 4 Church and Hanks (1989) use Mutual Information to measure word association norms. 
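Returning briefly to the collection step of Section 2.1: the even-distribution principle described there amounts to the update sketched below, a simplified rendering of the procedure shown in Figure 2. The dictionary-of-pairs table and the variable names are illustrative choices, not the system's actual data structures.

```python
# Simplified rendering of the co-occurrence collection step (cf. Figure 2):
# the conceptual co-occurrence table is a dict keyed by concept pairs rather
# than a full 1792 x 1792 array.
from collections import defaultdict
from itertools import combinations

ccdt = defaultdict(float)          # conceptual co-occurrence data table

def weight(expansion, sense):
    # a word's contribution is split evenly over its senses, and a sense's
    # contribution evenly over its defining concepts
    return 1.0 / (len(expansion) * len(sense))

def update_sentence(expansions):
    """expansions: conceptual expansions of the content words of one sentence."""
    for ce_i, ce_j in combinations(expansions, 2):
        for cs_i in ce_i:
            for cs_j in ce_j:
                w = weight(ce_i, cs_i) * weight(ce_j, cs_j)
                for dc_i in cs_i:
                    for dc_j in cs_j:
                        ccdt[(dc_i, dc_j)] += w
                        ccdt[(dc_j, dc_i)] += w
```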
183 N is taken to be the total number of pairs of words processed, given by ~ f ( DC)/2 since for each pair of surface words processed, LI( c) V/~C is increased by 2. Our scoring method is based on a probabilistic model at the conceptual level. In a standard model, the logarlthm of the probability of occurrence of a conceptual set {x,, x~ ..... xm} in the context of the conceptual set {y~, y~.....y,} is given by log2 P(xl,x2 ..... x,,lyl,y2 ..... y,) "~ ~=l ( "j~.__ll(x,,Yj)+l°g2 P(xi)) assuming that each P(x~) is independent of each other given y~, y2...., y, and each P(Y.i) is independent of each other given x~, for all x~.S Our scoring method deviates from the standard model in a number of aspects: 1. log 2 P(x~), the term of the occurrence Probability of each of the defining concepts in the sense, is excluded in our scoring method. Since the training data is not sense-tagged, the occurrence probability is highly unreliable. Moreover, the magnitude of mutual information is decreased due to the noise of the spurious senses while the average magnitude of the occurrence probability is unaffected, e Inclusion of the occurrence probability term will lead to the dominance of this term over the mutual information term, resulting in the system flavouring the sense with the more frequently occurring defining concepts most of the time. 2. The score of a sense with respect to the current context is normalised by subtracting the score of the sense calculated with respect to the GlobalCS (which contains all defining concepts) from it (see formula 5 The occurrence probabilities of some defining concepts will not be independent in some contexts. However, modelling the dependency between different concepts in different contexts will lead to an explosion of the complexity of the model. 6 The noise only leads to incorrect distribution of the occurrence probability. [1]). In effect, we are comparing the score between the sense with the current context and the score between the sense and an artificially constructed "average" context. This is needed to rectify the bias towards the sense(s) with defining concepts of higher average mutual information (over the set of all defining concepts), 'which is intensified by the ambiguity of the context words. 3. Negative mutual information score is taken to be 0 ([6]). Negative mutual information is unreliable due to the smaller number of data points. 4. The evidence (mutual information score) from multiple defining concepts/words is averaged rather than summed ([2], [4] & [5]). This is to compensate for the different lengths of definitions of different senses and different lengths of the context. The evidence from a polysemous context word is taken to be the evidence from its sense with the highest mutual information score ([3]). This is due to the fact that only one of the senses is used in the given sentence. 3 Evaluation Our system is tested on the twelve words discussed in Yarowsky (1992) and previous publications on sense disambiguation. Results are shown in Table 1. Our system achieves an average accuracy of 77% on a mean 3-way sense distinction over the twelve words. Numerically, the result is not as good as the 92% as reported in Yarowsky (1992). However, direct comparison between the numerical results can be misleading since the experiments are carried out on two very different corpora both in size and genre. 
Firstly, Yarowsky's system is trained with the 10 million word Grolier's Encyclopedia, which is a magnitude larger than the Brown corpus used by our system. Secondly, and more importantly, the two corpora, which are also the test corpora, are very different in genre. Semantic coherence of text, on which both systems rely, is generally stronger in technical writing than in most other kinds of text. Statistical disambiguation systems which rely on semantic coherence will generally perform better on technical writing, which encyclopedia entry can be regarded as one kind of, than on most other kinds of text. On the other hand, the Brown corpus is a collection of text with all kinds of genre. People make use of syntactic, semantic and pragmatic knowledge in sense disambiguation. It is not very realistic to expect any system which only possesses semantic coherence knowledge (including 184 ours as well as Yarowsky's) to achieve a very high level of accuracy for all words in general text. To provide a better evaluation of our approach, we have conducted an informal experiment aiming at establishing a more reasonable upper bound of the performance of such systems. In the experiment, a human subject is asked to perform the same disambiguation task as our system, given the same contextual information, 7 Since our system only uses semantic coherence information and has no deeper understanding of the meaning of the text, the human subject is asked to disambiguate the target word, given a list of all the content words in the context (sentence) of the target word in random order. The words are put in random order because the system does not make use of syntactic information of the sentence either. The human subject is also allowed access to a copy of LDOCE which the system also uses. The results are listed in Table 1. The actual upper bound of the performance of statistical methods using semantic coherence information only should be slightly better than the performance of human since the human is disadvantaged by a number of factors, including but not limited to: 1. it is unnatural for human to disambiguate in the described manner; 2. the semantic coherence knowledge used by the human is not complete or specific to the current corpusS; 3. human error. However, the results provide a rough approximation of the upper bound of performance of such systems, The human subject achieves an average accuracy of 71% over the twelve words, which is 6% lower than our system. More interestingly, the results of the human subject are found to exhibit a similar pattern to the results of our system - the human subject performs better on words and senses for which our system achieve higher accuracy and less well on words and senses for which our system has a lower accuracy. 4 The Use of Sentence as Local Context Another significant point our experiments have shown is that the sentence can also provide enough contextual information for semantic coherence based 7 The result is less than conclusive since only one human subject is tested. In order to acquire more reliable results, we are currently seeking a few more subjects to repeat the experiment. s The subject has not read through the whole corpus. approaches in a large proportion of cases. 9 The average sentence length in the Brown corpus is 19.41° words which is 5 times smaller than the 100 word window used in Gale et al. (1992) and Yarowsky (1992). 
Our approach works well even with a small "window" because it is based on the identification of salient concepts rather than salient words. In salient word based approaches, due to the problem of data sparseness, many less frequently occurring words which are intuitively salient to a particular word sense will not be identified in practice unless an extremely large corpus is used. Therefore the sentence usually does not contain enough identified salient words to provide enough contextual information. Using conceptual co- occurrence data, contextual information from the salient but less frequently used words in the sentence will also be utilised through the salient concepts in the conceptual expansions of these words. Obviously, there are still cases where the sentence does not provide enough contextual information even using conceptual co-occurrence data, such as when the sentence is too short, and contextual information from a larger context has to be used. However, the ability to make use of information in a smaller context is very important because the smaller context always overrules the larger context if their sense preferences are different. For example, in a legal trial context, the correct sense of sentence in the clause she was asked to repeat the last word of her previous sentence will be its word sense rather than its legal sense which would have been selected if a larger context is used instead. 9 Analysis of the test samples which our system fails to correctly disambiguate also shows that increasing the window size will benefit the disambiguation process only in a very small proportion of these samples. The main cause of errors is the polysemous words in dictionary definitions which we will discuss in Section 6. 1o Based on 1004998 words and 51763 sentences. 185 Table 1. Results of Experiments Sense N i DBCC Human BASS Fish Musical senses BOW bending forward weapon violin part knot front of ship bend in object * CONE shaped object fruit of a plant part of eye * DUTY obligation tax GALLEY ancient ship ship's kitchen printer's tray INTEREST curiosity advantage share money paid ISSUE bringing out important point stock * MOLE skin blemish animal stone wall ** quantity * machine * SENTENCE punishment group of words 1 15 16 1 0 2 4 2 o . 5 0 54 2 56 0 4 0 187 59 8 48 302 36 87 123 11 20 31 i 100% 100% i 93% 100% Thes. 100% 99% i 94% 100% 99% ! 0% 100% i - - 92% i 100% 100% 100% i 100% 100% 25% i 50% 100% 94% - -- 50% i 78% 100% 91% i 100% 100% 61% i . . . . 99% - - 69% i 100% 100% 77% i 57% i 100% j 59% i - ilOO% i -- i 100% i 43% i 42% i 25% 88% i 49% i 64% i 56% 59% 72% 100% 73% 50% 50% 41% 47% 38% 75% 47% 75% 40% 50% 50% 100% 67% 100% 45% 65% 2 i 50% 0 i 1 i 100% 3i 67% i 91% i 80% i 84% 96% 96% 96% 97% 50% 100% 95% 88% 34% 38% 90% 72% 89% 94% 100% 94% 100% 100% 98% 100% 99% 99% 98% 98% Sense N i DBCC Human SLUG animal fake coin type strip bullet mass unit * metallurgy * STAR space object shaped object celebrity TASTE flavour preference 1 i 0% 0 i -- 0 i -- 4 i 100% 5i ao% 4 i 75% 0! -- 11 j 45% 15i 53% 21 i 100% 261 96% 47 i 98% Thes. 0% 100% -- 50% -- 100% 50% 100% -- 100% - 100% 40% 97% 75% 96% - 95% 64% 82% 67% 96% 95% 93% 85% 93% 89% 93% Notes: 1. N marks the column with the number of tcst samples for each sense. DBCC (Defmition-Bascd Conceptual Co- occurrence) and Human mark the columns with the results of our system and the human subject in disambiguating the occurrences of the 12 words in the Brown corpus, respectively. Thes. 
(thesaurus) marks the column with the results of Yarowsky (1992) tested on the Grolier's Encyclopedia. 2. The "correct" sense of each test sample is chosen by hand disambiguation carried out by the author using the sentence as the context. A small proportion of test samples cannot be disambiguated within the given context and are excluded from the experiment. 3. The senses marked with * are used in Yarowsky (1992) but no corresponding sense is found in LDOCE. 4. The sense marked with ** is defined in LDOCE but not used in Yarowsky (1992). 6. In our experiment, the words are disambiguated between all the senses listed except the ones marked with 7. The rare senses listed in LDOCE are not listed here. For some of the words, more than one sense listed in LDOCE corresponds to a sense as used in Yarowsky (1992). In these cases, the senses used by Yarowsky are adopted for easier comparison. 8. All results are based on 100% recall. 186 5 Related Work Previous attempts to tackle the data sparseness problem in general corpus-based work include the class-based approaches and similarity-based approaches. In these approaches, relationships between a given pair of words are modelled by analogy with other words that resemble the given pair in some way. The class-based approaches (Brown et al., 1992; Resnik, 1992; Pereira et al., 1993) calculate co-occurrence data of words belonging to different classes,~ rather than individual words, to enhance the co-occurrence data collected and to cover words which have low occurrence frequencies. Dagan et al. (1993) argue that using a relatively small number of classes to model the similarity between words may lead to substantial loss of information. In the similarity- based approaches (Dagan et al., 1993 & 1994; Grishman et al., 1993), rather than a class, each word is modelled by its own set of similar words derived from statistical data collected from corpora. However, deriving these sets of similar words requires a substantial amount of statistical data and thus these approaches require relatively large corpora to start with.~ 2 Our definition-based approach to statistical sense disambiguation is similar in spirit to the similarity- based approaches, with respect to the "specificity" of modelling individual words. However, using definitions from existing dictionaries rather than derived sets of similar words allows our method to work on corpora of much smaller sizes. In our approach, each word is modelled by its own set of defining concepts. Although only 1792 defining concepts are used, the set of all possible combinations (a power set of the defining concepts) is so huge that it is very unlikely two word senses will have the same combination of defining concepts unless they are almost identical in meaning. On the other hand, the thesaurus-based method of Yarowsky (1992) may suffer from loss of information (since it is semi-class-based) as well as data sparseness (since H Classes used in Resnik (1992) are based on the WordNet taxonomy while classes of Brown et al. (1992) and Pereira et al. (1993) are derived from statistical data collected from corpora. ~2 The corpus used in Dagan et al. (1994) contains 40.5 million words. it is based on salient words) and may not perform as well on general text as our approach. 6 Limitation and Further work Being a dictionary-based method, the natural limitation of our approach is the dictionary. The most serious problem is that many of the words in the controlled vocabulary of LDOCE are polysemous themselves. 
The result is that many of our list of 1792 defining concepts actually stand for a number of distinct concepts. For example, the defining concept point is used in its place sense, idea sense and sharp end sense in different definitions. This affects the accuracy of disambiguating senses which have definitions containing these polysemous words and is found to be the main cause of errors for most of the senses with below-average results. We are currently working on ways to disambiguate the words in the dictionary definitions. One possible way is to apply the current method of disambiguation on the defining text of dictionary itself. The LDOCE defining text has roughly half a million words in its 41000 entries, which is half the size of the Brown corpus used in the current experiment. Although the result on the dictionary cannot be expected to be as good as the result on the Brown corpus due to the smaller size of the dictionary, the reliability of further co-occurrence data collected and, thus, the performance of the disambiguation system can be improved significantly as long as the disambiguation of the dictionary is considerably more accurate than by chance. Our success in using definitions of word senses to overcome the data sparseness problem may also lead to further improvement of sense disambiguation technologies. In many cases, semantic coherence information is not adequate to select the correct sense, and knowledge about local constraints is needed. ~3 For disambiguation of polysemous nouns, these constraints include the modifiers of these nouns and the verbs which take these nouns as objects, etc. This knowledge has been successfully acquired from corpora in manual or semi-automatic approaches such as that described in Hearst (1991). However, fully automatic lexically based approaches 3 Hatzivassiloglou (1994) shows that the introduction of linguistic cues improves the performance of a statistical semantic knowledge acquisition system in the context of word grouping. 187 such as that described in Yarowsky (1992) are very unlikely to be capable of acquiring this finer knowledge because the problem of data sparseness becomes even more serious with the introduction of syntactic constraints. Our approach has overcome the data sparseness problem by using the defining concepts of words. It is found to be effective in acquiring semantic coherence knowledge from a relatively small corpus. It is possible that a similar approach based on dictionary definitions will be successful in acquiring knowledge of local constraints from a reasonably sized corpus. 7 Conclusion We have shown that using definition-based conceptual co-occurrence data collected from a relatively small corpus, our sense disambiguation system has achieved accuracy comparable to human performance given the same amount of contextual information. By overcoming the data sparseness problem, contextual information from a smaller local context becomes sufficient for disambiguation in a large proportion of cases. Acknowledgments t I would like to thank Robert Dale and Vance Gledhill for their helpful comments on earlier drafts of this paper, and Richard Buckland and Mark Dras for their help with the statistics. References Black, E., 1988. An Experiment In Computational Discrimination of English Word Senses. IBM Journal of research and development, vol. 32, pp. 185-194. Brown, P., et al., 1991. Word-sense Disambiguation using Statistical Methods. In Proceedings of 29th annual meeting of ACL, pp.264-270. Brown, P. et al., 1992. 
Class-based n-gram Models of Natural Language. Computational Linguistics, 18(4):467-479. Church, K. and P. Hanks, 1989. Word Association Norms, Mutual Information, and Lexicography. In Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, pp.76- 83. Dagan, I. et al., 1991. Two Languages Are More Informative Than One. In Proceedings of the 29th Annual Meeting of the ACL, pp130-137. Dagan, I. et al., 1993. Contextual Word Similarity and Estimation From Sparse Data. In Proceedings of the 31st Annual Meeting of the ACL. Dagan, I. et al., 1994. Similarity-Based Estimation of Word Cooccurrence Probabilities. In Proceedings of the 32nd Annual Meeting of the ACL, Las Cruces, pp272-278. Fano, R., 1961. Transmission of Information. MIT Press, Cambridge, Mass. Gale, W., et al., 1992. A Method for Disambiguating Word Senses in a Large Corpus. Computer and Humanities, vol. 26 pp.415-439. Grishman, R. and J. Sterling, 1993. Smoothing of automatically generated selectional constraints. In Human Language Technology, pp.254-259, San Francisco, California. Advanced Research Projects Agency, Software and Intelligent Systems Technology Office, Morgan Kanfmann. Hatzivassiloglou, V., 1994. Do We Need Linguistics When We Have Statistics? A Comparative Analysis of the Contributions of Linguistic Cues to a Statistical Word Grouping System. In Proceedings of Workshop The Balancing Act: Combining Symbolic and Statistical Approaches to Language, Las Cruces, New Mexico. Association of Computational Linguistics. Hearst, M., J991. Noun Homograph Disambiguation Using Local Context in Large Text Corpora, Using Corpora, University of Waterloo, Waterloo, Ontario. Kelly, E. and P. Stone, 1975. Computer Recognition of English Word Senses, North-Holland, Amsterdam. Pereira F., et al., 1993. Distributional Clustering of English words. In Proceedings of the 31st Annual Meeting of the ACL. pp183-190. Procter, P., et al. (eds.), 1978. Longman Dictionary of Contemporary English, Longman Group. Resnik, P., 1992. WordNet and distributional analysis: A class-based approach to lexical discovery. In Proceedings of AAAI Workshop on Statistically-based NLP Techniques, San Jose, California. Yarowsky, D., 1992. Word-sense Disambiguation using Statistical Models of Roget's Categories Trained on Large Corpora. In Proceedings of COLING9 2, pp.454-460. 188 | 1995 | 25 |
UNSUPERVISED WORD SENSE DISAMBIGUATION RIVALING SUPERVISED METHODS David Yarowsky Department of Computer and Information Science University of Pennsylvania Philadelphia, PA 19104, USA yarowsky~unagi, ci s. upenn, edu Abstract This paper presents an unsupervised learn- ing algorithm for sense disambiguation that, when trained on unannotated English text, rivals the performance of supervised techniques that require time-consuming hand annotations. The algorithm is based on two powerful constraints - that words tend to have one sense per discourse and one sense per collocation - exploited in an iterative bootstrapping procedure. Tested accuracy exceeds 96%. 1 Introduction This paper presents an unsupervised algorithm that can accurately disambiguate word senses in a large, completely untagged corpus) The algorithm avoids the need for costly hand-tagged training data by ex- ploiting two powerful properties of human language: 1. One sense per collocation: 2 Nearby words provide strong and consistent clues to the sense of a target word, conditional on relative dis- tance, order and syntactic relationship. 2. One sense per discourse: The sense of a tar- get word is highly consistent within any given document. Moreover, language is highly redundant, so that the sense of a word is effectively overdetermined by (1) and (2) above. The algorithm uses these prop- erties to incrementally identify collocations for tar- get senses of a word, given a few seed collocations 1Note that the problem here is sense disambiguation: assigning each instance of a word to established sense definitions (such as in a dictionary). This differs from sense induction: using distributional similarity to parti- tion word instances into clusters that may have no rela- tion to standard sense partitions. 2Here I use the traditional dictionary definition of collocation - "appearing in the same location; a juxta- position of words". No idiomatic or non-compositional interpretation is implied. for each sense, This procedure is robust and self- correcting, and exhibits many strengths of super- vised approaches, including sensitivity to word-order information lost in earlier unsupervised algorithms. 2 One Sense Per Discourse The observation that words strongly tend to exhibit only one sense in a given discourse or document was stated and quantified in Gale, Church and Yarowsky (1992). Yet to date, the full power of this property has not been exploited for sense disambiguation. The work reported here is the first to take advan- tage of this regularity in conjunction with separate models of local context for each word. Importantly, I do not use one-sense-per-discourse as a hard con- straint; it affects the classification probabilistically and can be overridden when local evidence is strong. In this current work, the one-sense-per-discourse hypothesis was tested on a set of 37,232 examples (hand-tagged over a period of 3 years), the same data studied in the disambiguation experiments. For these words, the table below measures the claim's accuracy (when the word occurs more than once in a discourse, how often it takes on the majority sense for the discourse) and applicability (how often the word does occur more than once in a discourse). 
The one-sense-per-discourse hypothesis:

  Word     Senses              Accuracy   Applicability
  plant    living/factory        99.8 %      72.8 %
  tank     vehicle/container     99.6 %      50.5 %
  poach    steal/boil           100.0 %      44.4 %
  palm     tree/hand             99.8 %      38.5 %
  axes     grid/tools           100.0 %      35.5 %
  sake     benefit/drink        100.0 %      33.7 %
  bass     fish/music           100.0 %      58.8 %
  space    volume/outer          99.2 %      67.7 %
  motion   legal/physical        99.9 %      49.8 %
  crane    bird/machine         100.0 %      49.1 %
  Average                        99.8 %      50.1 %

Clearly, the claim holds with very high reliability for these words, and may be confidently exploited as another source of evidence in sense tagging.3

3 It is interesting to speculate on the reasons for this phenomenon. Most of the tendency is statistical: two distinct arbitrary terms of moderate corpus frequency are quite unlikely to co-occur in the same discourse whether they are homographs or not. This is particularly true for content words, which exhibit a "bursty" distribution. However, it appears that human writers also have some active tendency to avoid mixing senses within a discourse. In a small study, homograph pairs were observed to co-occur roughly 5 times less often than arbitrary word pairs of comparable frequency. Regardless of origin, this phenomenon is strong enough to be of significant practical use as an additional probabilistic disambiguation constraint.

3 One Sense Per Collocation
The strong tendency for words to exhibit only one sense in a given collocation was observed and quantified in (Yarowsky, 1993). This effect varies depending on the type of collocation. It is strongest for immediately adjacent collocations, and weakens with distance. It is much stronger for words in a predicate-argument relationship than for arbitrary associations at equivalent distance. It is very much stronger for collocations with content words than those with function words.4 In general, the high reliability of this behavior (in excess of 97% for adjacent content words, for example) makes it an extremely useful property for sense disambiguation.

4 This latter effect is actually a continuous function conditional on the burstiness of the word (the tendency of a word to deviate from a constant Poisson distribution in a corpus).

A supervised algorithm based on this property is given in (Yarowsky, 1994). Using a decision list control structure based on (Rivest, 1987), this algorithm integrates a wide diversity of potential evidence sources (lemmas, inflected forms, parts of speech and arbitrary word classes) in a wide diversity of positional relationships (including local and distant collocations, trigram sequences, and predicate-argument association). The training procedure computes the word-sense probability distributions for all such collocations, and orders them by the log-likelihood ratio

  Log( Pr(Sense-A | Collocation_i) / Pr(Sense-B | Collocation_i) ),5

with optional steps for interpolation and pruning. New data are classified by using the single most predictive piece of disambiguating evidence that appears in the target context. By not combining probabilities, this decision-list approach avoids the problematic complex modeling of statistical dependencies encountered in other frameworks.

5 As most ratios involve a 0 for some observed value, smoothing is crucial. The process employed here is sensitive to variables including the type of collocation (adjacent bigrams or wider context), collocational distance, type of word (content word vs. function word) and the expected amount of noise in the training data. Details are provided in (Yarowsky, to appear).
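To make the decision-list construction concrete, the sketch below ranks collocational features by a smoothed log-likelihood ratio in the spirit of the formula above and classifies new contexts with the single highest-ranked matching feature. It is an illustrative reconstruction rather than the paper's actual implementation: the flat additive smoothing constant, the feature representation, and the function names are assumptions introduced here.

```python
import math
from collections import defaultdict

def train_decision_list(tagged_examples, alpha=0.1):
    """Rank collocational features by a smoothed log-likelihood ratio.

    tagged_examples: iterable of (feature_set, sense) pairs, where sense is
    'A' or 'B' and feature_set holds collocations such as 'plant life'.
    alpha is a simple additive smoothing constant (an assumption; the
    paper's smoothing is more elaborate).
    """
    counts = defaultdict(lambda: {'A': 0.0, 'B': 0.0})
    for features, sense in tagged_examples:
        for f in features:
            counts[f][sense] += 1.0

    decision_list = []
    for f, c in counts.items():
        # Smoothed ratio avoids log(0) when one sense never co-occurs with f.
        ratio = (c['A'] + alpha) / (c['B'] + alpha)
        logl = abs(math.log(ratio))
        sense = 'A' if ratio > 1.0 else 'B'
        decision_list.append((logl, f, sense))

    decision_list.sort(reverse=True)   # most nearly categorical evidence first
    return decision_list

def classify(decision_list, features):
    """Use only the single most predictive matching piece of evidence."""
    for logl, f, sense in decision_list:
        if f in features:
            return sense, logl
    return None, 0.0
```

Because classification stops at the first matching entry, overlapping and highly correlated collocations never have to be combined probabilistically, which is exactly the property the text emphasizes.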
The algorithm is especially well suited for utilizing a large set of highly non-independent evidence such as found here. In general, the decision-list algorithm is well suited for the task of sense disambiguation and will be used as a component of the unsupervised algorithm below.

4 Unsupervised Learning Algorithm
Words not only tend to occur in collocations that reliably indicate their sense, they tend to occur in multiple such collocations. This provides a mechanism for bootstrapping a sense tagger. If one begins with a small set of seed examples representative of two senses of a word, one can incrementally augment these seed examples with additional examples of each sense, using a combination of the one-sense-per-collocation and one-sense-per-discourse tendencies.
Although several algorithms can accomplish similar ends,6 the following approach has the advantages of simplicity and the ability to build on an existing supervised classification algorithm without modification.7 As shown empirically, it also exhibits considerable effectiveness.
The algorithm will be illustrated by the disambiguation of 7538 instances of the polysemous word plant in a previously untagged corpus.

6 Including variants of the EM algorithm (Baum, 1972; Dempster et al., 1977), especially as applied in Gale, Church and Yarowsky (1994).
7 Indeed, any supervised classification algorithm that returns probabilities with its classifications may potentially be used here. These include Bayesian classifiers (Mosteller and Wallace, 1964) and some implementations of neural nets, but not Brill rules (Brill, 1993).

STEP 1:
In a large corpus, identify all examples of the given polysemous word, storing their contexts as lines in an initially untagged training set. For example:

  Sense  Training Examples (Keyword in Context)
  ?      ... company said the plant is still operating ...
  ?      Although thousands of plant and animal species ...
  ?      ... zonal distribution of plant life ...
  ?      ... to strain microscopic plant life from the ...
  ?      ... vinyl chloride monomer plant, which is ...
  ?      ... and Golgi apparatus of plant and animal cells ...
  ?      ... computer disk drive plant located in ...
  ?      ... divide life into plant and animal kingdom ...
  ?      ... close-up studies of plant life and natural ...
  ?      ... Nissan car and truck plant in Japan is ...
  ?      ... keep a manufacturing plant profitable without ...
  ?      ... molecules found in plant and animal tissue ...
  ?      ... union responses to plant closures ...
  ?      ... animal rather than plant tissues can be ...
  ?      ... many dangers to plant and animal life ...
  ?      company manufacturing plant is in Orlando ...
  ?      ... growth of aquatic plant life in water ...
  ?      automated manufacturing plant in Fremont ...
  ?      ... Animal and plant life are delicately ...
  ?      discovered at a St. Louis plant manufacturing ...
  ?      computer manufacturing plant and adjacent ...
  ?      ... the proliferation of plant and animal life ...

STEP 2:
For each possible sense of the word, identify a relatively small number of training examples representative of that sense.8 This could be accomplished by hand tagging a subset of the training sentences. However, I avoid this laborious procedure by identifying a small number of seed collocations representative of each sense and then tagging all training examples containing the seed collocates with the seed's sense label. The remainder of the examples (typically 85-98%) constitute an untagged residual. Several strategies for identifying seeds that require minimal or no human participation are discussed in Section 5.
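As a rough illustration of Steps 1 and 2, the sketch below collects concordance lines for a target word from a raw corpus and labels only those contexts that contain a seed collocate, leaving the rest as the untagged residual. The tokenization, window width, and the particular seed sets (life for SENSE-A and manufacturing for SENSE-B, anticipating the worked example that follows) are simplifying assumptions, not the exact procedure used in the paper.

```python
import re

def extract_contexts(corpus_lines, target="plant", window=10):
    """Step 1: store every occurrence of the target word with its context."""
    contexts = []
    for line in corpus_lines:
        tokens = re.findall(r"[A-Za-z']+", line.lower())
        for i, tok in enumerate(tokens):
            if tok == target:
                contexts.append(tokens[max(0, i - window): i + window + 1])
    return contexts

def seed_tag(contexts, seeds={"A": {"life"}, "B": {"manufacturing"}}):
    """Step 2: tag contexts containing a seed collocate; the rest are residual."""
    tagged, residual = [], []
    for ctx in contexts:
        words = set(ctx)
        hits = [sense for sense, seed_words in seeds.items() if words & seed_words]
        if len(hits) == 1:        # unambiguous seed match
            tagged.append((ctx, hits[0]))
        else:                     # no seed, or conflicting seeds
            residual.append(ctx)
    return tagged, residual
```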
In the example below, the words life and manufacturing are used as seed collocations for the two major senses of plant (labeled A and B respectively). This partitions the training set into 82 examples of living plants (1%), 106 examples of manufacturing plants (1%), and 7350 residual examples (98%).

  Sense  Training Examples (Keyword in Context)
  A      used to strain microscopic plant life from the ...
  A      ... zonal distribution of plant life ...
  A      close-up studies of plant life and natural ...
  A      too rapid growth of aquatic plant life in water ...
  A      ... the proliferation of plant and animal life ...
  A      establishment phase of the plant virus life cycle ...
  A      ... that divide life into plant and animal kingdom ...
  A      ... many dangers to plant and animal life ...
  A      mammals . Animal and plant life are delicately ...
  A      beds too salty to support plant life . River ...
  A      heavy seas , damage , and plant life growing on ...
  ?      ... vinyl chloride monomer plant , which is ...
  ?      ... molecules found in plant and animal tissue ...
  ?      ... Nissan car and truck plant in Japan is ...
  ?      ... and Golgi apparatus of plant and animal cells ...
  ?      ... union responses to plant closures ...
  ?      ... cell types found in the plant kingdom are ...
  ?      ... company said the plant is still operating ...
  ?      ... Although thousands of plant and animal species ...
  ?      ... animal rather than plant tissues can be ...
  ?      ... computer disk drive plant located in ...
  B      automated manufacturing plant in Fremont ...
  B      ... vast manufacturing plant and distribution ...
  B      chemical manufacturing plant , producing viscose ...
  B      ... keep a manufacturing plant profitable without ...
  B      computer manufacturing plant and adjacent ...
  B      discovered at a St. Louis plant manufacturing ...
  B      ... copper manufacturing plant found that they ...
  B      copper wire manufacturing plant , for example ...
  B      's cement manufacturing plant in Alpena ...
  B      polystyrene manufacturing plant at its Dew ...
  B      company manufacturing plant is in Orlando ...

It is useful to visualize the process of seed development graphically. The following figure illustrates this sample initial state. Circled regions are the training examples that contain either an A or B seed collocate. The bulk of the sample points "?" constitute the untagged residual.8

8 For the purposes of exposition, I will assume a binary sense partition. It is straightforward to extend this to k senses using k sets of seeds.

[Figure 1: Sample Initial State. Legend: A = SENSE-A training example; B = SENSE-B training example; ? = currently unclassified training example; [Life] = set of training examples containing the collocation "life".]
STEP 3a:
Train the supervised classification algorithm on the SENSE-A/SENSE-B seed sets. The decision-list algorithm used here (Yarowsky, 1994) identifies other collocations that reliably partition the seed training data, ranked by the purity of the distribution. Below is an abbreviated example of the decision list trained on the plant seed data.9

  Initial decision list for plant (abbreviated)
  LogL   Collocation                          Sense
  8.10   plant life                           => A
  7.58   manufacturing plant                  => B
  7.39   life (within +/-2-10 words)          => A
  7.20   manufacturing (in +/-2-10 words)     => B
  6.27   animal (within +/-2-10 words)        => A
  4.70   equipment (within +/-2-10 words)     => B
  4.39   employee (within +/-2-10 words)      => B
  4.30   assembly plant                       => B
  4.10   plant closure                        => B
  3.52   plant species                        => A
  3.48   automate (within +/-2-10 words)      => B
  3.45   microscopic plant                    => A

9 Note that a given collocate such as life may appear multiple times in the list in different collocational relationships, including left-adjacent, right-adjacent, co-occurrence at other positions in a +/-k-word window and various other syntactic associations. Different positions often yield substantially different likelihood ratios and in cases such as pesticide plant vs. plant pesticide indicate entirely different classifications.

STEP 3b:
Apply the resulting classifier to the entire sample set. Take those members in the residual that are tagged as SENSE-A or SENSE-B with probability above a certain threshold, and add those examples to the growing seed sets. Using the decision-list algorithm, these additions will contain newly-learned collocations that are reliably indicative of the previously-trained seed sets. The acquisition of additional partitioning collocations from co-occurrence with previously-identified ones is illustrated in the lower portion of Figure 2.

STEP 3c:
Optionally, the one-sense-per-discourse constraint is then used both to filter and augment this addition. The details of this process are discussed in Section 7. In brief, if several instances of the polysemous word in a discourse have already been assigned SENSE-A, this sense tag may be extended to all examples in the discourse, conditional on the relative numbers and the probabilities associated with the tagged examples.

  Labeling previously untagged contexts using the one-sense-per-discourse property
  Change in tag   Disc. Numb.   Training Examples (from same discourse)
  ? -> A          724           ... the existence of plant and animal life ...
  A -> A          724           ... classified as either plant or animal ...
  ? -> A          724           Although bacterial and plant cells are enclosed
  A -> A          348           ... the life of the plant , producing stem
  A -> A          348           ... an aspect of plant life , for example
  ? -> A          348           ... tissues ; because plant egg cells have
  ? -> A          348           photosynthesis , and so plant growth is attuned

This augmentation of the training data can often form a bridge to new collocations that may not otherwise co-occur in the same nearby context with previously identified collocations. Such a bridge to the SENSE-A collocate "cell" is illustrated graphically in the upper half of Figure 2.
Similarly, the one-sense-per-discourse constraint may also be used to correct erroneously labeled examples. For example:
  Error Correction using the one-sense-per-discourse property
  Change in tag   Disc. Numb.   Training Examples (from same discourse)
  A -> A          525           contains a varied plant and animal life
  A -> A          525           the most common plant life , the ...
  A -> A          525           slight within Arctic plant species ...
  B -> A          525           are protected by plant parts remaining from

[Figure 2: Sample Intermediate State (following Steps 3b and 3c).]

STEP 3d:
Repeat Step 3 iteratively. The training sets (e.g. SENSE-A seeds plus newly added examples) will tend to grow, while the residual will tend to shrink. Additional details aimed at correcting and avoiding misclassifications will be discussed in Section 6.

STEP 4:
Stop. When the training parameters are held constant, the algorithm will converge on a stable residual set.
Note that most training examples will exhibit multiple collocations indicative of the same sense (as illustrated in Figure 3). The decision list algorithm resolves any conflicts by using only the single most reliable piece of evidence, not a combination of all matching collocations. This circumvents many of the problems associated with non-independent evidence sources.

[Figure 3: Sample Final State.]

STEP 5:
The classification procedure learned from the final supervised training step may now be applied to new data, and used to annotate the original untagged corpus with sense tags and probabilities.
An abbreviated sample of the final decision list for plant is given below. Note that the original seed words are no longer at the top of the list. They have been displaced by more broadly applicable collocations that better partition the newly learned classes. In cases where there are multiple seeds, it is even possible for an original seed for SENSE-A to become an indicator for SENSE-B if the collocate is more compatible with this second class. Thus the noise introduced by a few irrelevant or misleading seed words is not fatal. It may be corrected if the majority of the seeds forms a coherent collocation space.

  Final decision list for plant (abbreviated)
  LogL    Collocation                       Sense
  10.12   plant growth                      => A
   9.68   car (within +/-k words)           => B
   9.64   plant height                      => A
   9.61   union (within +/-k words)         => B
   9.54   equipment (within +/-k words)     => B
   9.51   assembly plant                    => B
   9.50   nuclear plant                     => B
   9.31   flower (within +/-k words)        => A
   9.24   job (within +/-k words)           => B
   9.03   fruit (within +/-k words)         => A
   9.02   plant species                     => A

When this decision list is applied to a new test sentence,
  ... the loss of animal and plant species through extinction ...,
the highest ranking collocation found in the target context (species) is used to classify the example as SENSE-A (a living plant). If available, information from other occurrences of "plant" in the discourse may override this classification, as described in Section 7.
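The whole bootstrapping cycle (Steps 3a-3d and the stopping condition of Step 4) can be summarized in a short loop. The sketch below is a simplified reading of the procedure, assuming the train_decision_list, classify, and seed_tag helpers sketched earlier; the log-likelihood cutoff stands in for the paper's probability threshold, and the window widening, threshold perturbation, and discourse filtering refinements of Sections 6 and 7 are omitted.

```python
def bootstrap(contexts, seeds, threshold=2.0, max_iters=20):
    """Iterate Steps 3a-3d until the partition of tagged vs. residual stabilizes.

    threshold and max_iters are assumed values; the paper's stopping
    criterion is convergence under fixed training parameters.
    """
    tagged, residual = seed_tag(contexts, seeds)
    dlist = []
    for _ in range(max_iters):
        # Step 3a: train a decision list on the current seed sets.
        dlist = train_decision_list(
            [(set(ctx), sense) for ctx, sense in tagged])

        # Step 3b: re-tag everything; keep only confident decisions, so that
        # examples whose evidence is discredited drop back into the residual.
        new_tagged, new_residual = [], []
        for ctx in [c for c, _ in tagged] + residual:
            sense, score = classify(dlist, set(ctx))
            if sense is not None and score >= threshold:
                new_tagged.append((ctx, sense))
            else:
                new_residual.append(ctx)

        # Step 4: stop once the labeled set and residual no longer change.
        if new_tagged == tagged and len(new_residual) == len(residual):
            break
        tagged, residual = new_tagged, new_residual
    return dlist, tagged, residual
```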
5 Options for Training Seeds
The algorithm should begin with seed words that accurately and productively distinguish the possible senses. Such seed words can be selected by any of the following strategies:
• Use words in dictionary definitions
Extract seed words from a dictionary's entry for the target sense. This can be done automatically, using words that occur with significantly greater frequency in the entry relative to the entire dictionary. Words in the entry appearing in the most reliable collocational relationships with the target word are given the most weight, based on the criteria given in Yarowsky (1993).
• Use a single defining collocate for each class
Remarkably good performance may be achieved by identifying a single defining collocate for each class (e.g. bird and machine for the word crane), and using for seeds only those contexts containing one of these words. WordNet (Miller, 1990) is an automatic source for such defining terms.
• Label salient corpus collocates
Words that co-occur with the target word in unusually great frequency, especially in certain collocational relationships, will tend to be reliable indicators of one of the target word's senses (e.g. flock and bulldozer for "crane"). A human judge must decide which one, but this can be done very quickly (typically under 2 minutes for a full list of 30-60 such words). Co-occurrence analysis selects collocates that span the space with minimal overlap, optimizing the efforts of the human assistant. While not fully automatic, this approach yields rich and highly reliable seed sets with minimal work.
6 Escaping from Initial Misclassifications
Unlike many previous bootstrapping approaches, the present algorithm can escape from initial misclassification. Examples added to the growing seed sets remain there only as long as the probability of the classification stays above the threshold. If their classification begins to waver because new examples have discredited the crucial collocate, they are returned to the residual and may later be classified differently. Thus contexts that are added to the wrong seed set because of a misleading word in a dictionary definition may be (and typically are) correctly reclassified as iterative training proceeds. The redundancy of language with respect to collocation makes the process primarily self-correcting. However, certain strong collocates may become entrenched as indicators for the wrong class. We discourage such behavior in the training algorithm by two techniques: 1) incrementally increasing the width of the context window after intermediate convergence (which periodically adds new feature values to shake up the system) and 2) randomly perturbing the class-inclusion threshold, similar to simulated annealing.
7 Using the One-sense-per-discourse Property
The algorithm performs well using only local collocational information, treating each token of the target word independently. However, accuracy can be improved by also exploiting the fact that all occurrences of a word in the discourse are likely to exhibit the same sense. This property may be utilized in two places, either once at the end of Step 4 after the algorithm has converged, or in Step 3c after each iteration.
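A minimal sketch of the post-hoc use of this property (the details are spelled out in the next paragraphs) might look as follows. The margin value, the data layout, and the function name are assumptions introduced for illustration; the paper conditions its probability differentials on the number of occurrences n in the discourse, which is simplified away here.

```python
from collections import defaultdict

def apply_discourse_constraint(tagged, margin=5.0):
    """Relabel low-confidence tokens with the dominant sense of their discourse.

    tagged: list of (discourse_id, sense, score) tuples, where score is the
    log-likelihood of the local classification.  margin is an assumed fixed
    threshold standing in for the empirically set probability differentials.
    """
    by_disc = defaultdict(list)
    for i, (disc, sense, score) in enumerate(tagged):
        by_disc[disc].append(i)

    relabeled = list(tagged)
    for disc, idxs in by_disc.items():
        if len(idxs) < 2:
            continue                      # no corroborating evidence in discourse
        totals = defaultdict(float)
        for i in idxs:
            totals[tagged[i][1]] += tagged[i][2]
        majority = max(totals, key=totals.get)
        minority_sum = sum(v for s, v in totals.items() if s != majority)
        if totals[majority] - minority_sum >= margin:
            for i in idxs:
                _, sense, score = relabeled[i]
                relabeled[i] = (disc, majority, score)
    return relabeled
```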
At the end of Step 4, this property is used for error correction. When a polysemous word such as plant occurs multiple times in a discourse, tokens that were tagged by the algorithm with low confidence using local collocation information may be overridden by the dominant tag for the discourse. The probability differentials necessary for such a reclassification were determined empirically in an early pilot study. The variables in this decision are the total number of occurrences of plant in the discourse (n), the number of occurrences assigned to the majority and minor senses for the discourse, and the cumulative scores for both (a sum of log-likelihood ratios). If cumulative evidence for the majority sense exceeds that of the minority by a threshold (conditional on n), the minority cases are relabeled. The case n = 2 does not admit much reclassification because it is unclear which sense is dominant. But for n > 4, all but the most confident local classifications tend to be overridden by the dominant tag, because of the overwhelming strength of the one-sense-per-discourse tendency.
The use of this property after each iteration is similar to the final post-hoc application, but helps prevent initially mistagged collocates from gaining a foothold. The major difference is that in discourses where there is substantial disagreement concerning which is the dominant sense, all instances in the discourse are returned to the residual rather than merely leaving their current tags unchanged. This helps improve the purity of the training data.
The fundamental limitation of this property is coverage. As noted in Section 2, half of the examples occur in a discourse where there are no other instances of the same word to provide corroborating evidence for a sense or to protect against misclassification. There is additional hope for these cases, however, as such isolated tokens tend to strongly favor a particular sense (the less "bursty" one). We have yet to use this additional information.

  (1)      (2)                  (3)     (4)      (5)      (6)     (7)     (8)      (9)     (10)    (11)
                                Samp.   % Major  Supvsd   ----- Seed Training -----  -- (7) + OSPD --  Schütze
  Word     Senses               Size    Sense    Algrtm   Two     Dict.   Top       End     Each     Algrthm
                                                          Words   Defn.   Colls.    only    Iter.
  plant    living/factory       7538    53.1     97.7     97.1    97.3    97.6      98.3    98.6     92
  space    volume/outer         5745    50.7     93.9     89.1    92.3    93.5      93.3    93.6     90
  tank     vehicle/container    11420   58.2     97.1     94.2    94.6    95.8      96.1    96.5     95
  motion   legal/physical       11968   57.5     98.0     93.5    97.4    97.4      97.8    97.9     92
  bass     fish/music           1859    56.1     97.8     96.6    97.2    97.7      98.5    98.8     -
  palm     tree/hand            1572    74.9     96.5     93.9    94.7    95.8      95.5    95.9     -
  poach    steal/boil           585     84.6     97.1     96.6    97.2    97.7      98.4    98.5     -
  axes     grid/tools           1344    71.8     95.5     94.0    94.3    94.7      96.8    97.0     -
  duty     tax/obligation       1280    50.0     93.7     90.4    92.1    93.2      93.9    94.1     -
  drug     medicine/narcotic    1380    50.0     93.0     90.4    91.4    92.6      93.3    93.9     -
  sake     benefit/drink        407     82.8     96.3     59.6    95.8    96.1      96.1    97.5     -
  crane    bird/machine         2145    78.0     96.6     92.3    93.6    94.2      95.4    95.5     -
  AVG                           3936    63.9     96.1     90.6    94.8    95.5      96.1    96.5     92.2

8 Evaluation
The words used in this evaluation were randomly selected from those previously studied in the literature.
They include words where sense differences are realized as differences in French translation (drug --* drogue/m~dicament, and duty --~ devoir/droit), a verb (poach) and words used in Schiitze's 1992 disambiguation experiments (tank, space, motion, plant) J ° The data were extracted from a 460 million word corpus containing news articles, scientific abstracts, spoken transcripts, and novels, and almost certainly constitute the largest training/testing sets used in the sense-disambiguation literature. Columns 6-8 illustrate differences in seed training options. Using only two words as seeds does surpris- ingly well (90.6 %). This approach is least success- ful for senses with a complex concept space, which cannot be adequately represented by single words. Using the salient words of a dictionary definition as seeds increases the coverage of the concept space, im- proving accuracy (94.8%). However, spurious words in example sentences can be a source of noise. Quick hand tagging of a list of algorithmically-identified salient collocates appears to be worth the effort, due to the increa3ed accuracy (95.5%) and minimal cost. Columns 9 and 10 illustrate the effect of adding the probabilistic one-sense-per-discourse constraint to collocation-based models using dictionary entries as training seeds. Column 9 shows its effectiveness 1°The number of words studied has been limited here by the highly time-consuming constraint that full hand tagging is necessary for direct comparison with super- vised training. 194 as a post-hoc constraint. Although apparently small in absolute terms, on average this represents a 27% reduction in error rate. 11 When applied at each iter- ation, this process reduces the training noise, yield- ing the optimal observed accuracy in column 10. Comparative performance: Column 5 shows the relative performance of su- pervised training using the decision list algorithm, applied to the same data and not using any discourse information. Unsupervised training using the addi- tional one-sense-per-discourse constraint frequently exceeds this value. Column 11 shows the perfor- mance of Schiitze's unsupervised algorithm applied to some of these words, trained on a New York Times News Service corpus. Our algorithm exceeds this ac- curacy on each word, with an average relative per- formance of 97% vs. 92%. 1~ 9 Comparison with Previous Work This algorithm exhibits a fundamental advantage over supervised learning algorithms (including Black (1988), Hearst (1991), Gale et al. (1992), Yarowsky (1993, 1994), Leacock et al. (1993), Bruce and Wiebe (1994), and Lehman (1994)), as it does not re- quire costly hand-tagged training sets. It thrives on raw, unannotated monolingual corpora - the more the merrier. Although there is some hope from using aligned bilingual corpora as training data for super- vised algorithms (Brown et al., 1991), this approach suffers from both the limited availability of such cor- pora, and the frequent failure of bilingual translation differences to model monolingual sense differences. The use of dictionary definitions as an optional seed for the unsupervised algorithm stems from a long history of dictionary-based approaches, includ- ing Lesk (1986), Guthrie et al. (1991), Veronis and Ide (1990), and Slator (1991). 
Although these ear- lier approaches have used often sophisticated mea- sures of overlap with dictionary definitions, they have not realized the potential for combining the rel- atively limited seed information in such definitions with the nearly unlimited co-occurrence information extractable from text corpora. Other unsupervised methods have shown great promise. Dagan and Itai (1994) have proposed a method using co-occurrence statistics in indepen- dent monolingual corpora of two languages to guide lexical choice in machine translation. Translation of a Hebrew verb-object pair such as lahtom (sign or seal) and h. oze (contract or treaty) is determined using the most probable combination of words in an English monolingual corpus. This work shows 11The maximum possible error rate reduction is 50.1%, or the mean applicability discussed in Section 2. 12This difference is even more striking given that Schiitze's data exhibit a higher baseline probability (65% vs. 55%) for these words, and hence constitute an easier task. that leveraging bilingual lexicons and monolingual language models can overcome the need for aligned bilingual corpora. Hearst (1991) proposed an early application of bootstrapping to augment training sets for a su- pervised sense tagger. She trained her fully super- vised algorithm on hand-labelled sentences, applied the result to new data and added the most con- fidently tagged examples to the training set. Re- grettably, this algorithm was only described in two sentences and was not developed further. Our cur- rent work differs by eliminating the need for hand- labelled training data entirely and by the joint use of collocation and discourse constraints to accomplish this. Schiitze (1992) has pioneered work in the hier- archical clustering of word senses. In his disam- biguation experiments, Schiitze used post-hoc align- ment of clusters to word senses. Because the top- level cluster partitions based purely on distributional information do not necessarily align with standard sense distinctions, he generated up to 10 sense clus- ters and manually assigned each to a fixed sense label (based on the hand-inspection of 10-20 sentences per cluster). In contrast, our algorithm uses automati- cally acquired seeds to tie the sense partitions to the desired standard at the beginning, where it can be most useful as an anchor and guide. In addition, Schiitze performs his classifications by treating documents as a large unordered bag of words. By doing so he loses many important dis- tinctions, such as collocational distance, word se- quence and the existence of predicate-argument rela- tionships between words. In contrast, our algorithm models these properties carefully, adding consider- able discriminating power lost in other relatively im- poverished models of language. 10 Conclusion In essence, our algorithm works by harnessing sev- eral powerful, empirically-observed properties of lan- guage, namely the strong tendency for words to ex- hibit only one sense per collocation and per dis- course. It attempts to derive maximal leverage from these properties by modeling a rich diversity of collo- cational relationships. It thus uses more discriminat- ing information than available to algorithms treating documents as bags of words, ignoring relative posi- tion and sequence. Indeed, one of the strengths of this work is that it is sensitive to a wider range of language detail than typically captured in statistical sense-disambiguation algorithms. 
Also, for an unsupervised algorithm it works sur- prisingly well, directly outperforming Schiitze's un- supervised algorithm 96.7 % to 92.2 %, on a test of the same 4 words. More impressively, it achieves nearly the same performance as the supervised al- gorithm given identical training contexts (95.5 % 195 vs. 96.1%) , and in some cases actually achieves superior performance when using the one-sense-per- discourse constraint (96.5 % vs. 96.1%). This would indicate that the cost of a large sense-tagged train- ing corpus may not be necessary to achieve accurate word-sense disambiguation. Acknowledgements This work was partially supported by an NDSEG Fel- lowship, ARPA grant N00014-90-J-1863 and ARO grant DAAL 03-89-C0031 PRI. The author is also affiliated with the Information Principles Research Center AT&T Bell Laboratories, and greatly appreciates the use of its resources in support of this work. He would like to thank Jason Eisner, Mitch Marcus, Mark Liberman, Alison Mackey, Dan Melamed and Lyle Ungar for their valu- able comments. References Baum, L.E., "An Inequality and Associated Maximiza- tion Technique in Statistical Estimation of Probabilis- tic Functions of a Markov Process," Inequalities, v 3, pp 1-8, 1972. Black, Ezra, "An Experiment in Computational Discrim- ination of English Word Senses," in IBM Journal of Research and Development, v 232, pp 185-194, 1988. BriU, Eric, "A Corpus-Based Approach to Language Learning," Ph.D. Thesis, University of Pennsylvania, 1993. Brown, Peter, Stephen Della Pietra, Vincent Della Pietra, and Robert Mercer, "Word Sense Disambigua- tion using Statistical Methods," Proceedings of the 29th Annual Meeting of the Association for Compu- tational Linguistics, pp 264-270, 1991. Bruce, Rebecca and Janyce Wiebe, "Word-Sense Disam- biguation Using Decomposable Models," in Proceed- ings of the 32nd Annual Meeting of the Association for Computational Linguistics, Las Cruces, NM, 1994. Church, K.W., "A Stochastic Parts Program an Noun Phrase Parser for Unrestricted Text," in Proceeding, IEEE International Conference on Acoustics, Speech and Signal Processing, Glasgow, 1989. Dagan, Ido and Alon Itai, "Word Sense Disambiguation Using a Second Language Monolingual Corpus", Com- putational Linguistics, v 20, pp 563-596, 1994. Dempster, A.P., Laird, N.M, and Rubin, D.B., "Maxi- mum Likelihood From Incomplete Data via the EM Algorithm," Journal of the Royal Statistical Society, v 39, pp 1-38, 1977. Gale, W., K. Church, and D. Yarowsky, "A Method for Disambiguating Word Senses in a Large Corpus," Computers and the Humanities, 26, pp 415-439, 1992. Gale, W., K. Church, and D. Yarowsky. "Discrimina- tion Decisions for 100,000-Dimensional Spaces." In A. Zampoli, N. Calzolari and M. Palmer (eds.), Current Issues in Computational Linguistics: In Honour of Don Walker, Kluwer Academic Publishers, pp. 429- 450, 1994. Guthrie, J., L. Guthrie, Y. Wilks and H. Aidinejad, "Subject Dependent Co-occurrence and Word Sense Disambiguation," in Proceedings of the 29th Annual Meeting of the Association for Computational Linguis- tics, pp 146-152, 1991. Hearst, Marti, "Noun Homograph Disambiguation Us- ing Local Context in Large Text Corpora," in Using Corpora, University of Waterloo, Waterloo, Ontario, 1991. Leacock, Claudia, Geoffrey Towell and Ellen Voorhees "Corpus-Based Statistical Sense Resolution," in Pro- ceedings, ARPA Human Language Technology Work- shop, 1993. 
Lehman, Jill Fain, "Toward the Essential Nature of Sta- tistical Knowledge in Sense Resolution", in Proceed- ings of the Twelfth National Conference on Artificial Intelligence, pp 734-471, 1994. Lesk, Michael, "Automatic Sense Disambiguation: How to tell a Pine Cone from an Ice Cream Cone," Pro- ceeding of the 1986 SIGDOC Conference, Association for Computing Machinery, New York, 1986. Miller, George, "WordNet: An On-Line Lexical Database," International Journal of Lexicography, 3, 4, 1990. Mosteller, Frederick, and David Wallace, Inference and Disputed Authorship: The Federalist, Addison-Wesley, Reading, Massachusetts, 1964. Rivest, R. L., "Learning Decision Lists," in Machine Learning, 2, pp 229-246, 1987. Schiitze, Hinrich, "Dimensions of Meaning," in Proceed- ings of Supercomputing '92, 1992. Slator, Brian, "Using Context for Sense Preference," in Text-Based Intelligent Systems: Current Research in Text Analysis, Information Extraction and Retrieval, P.S. Jacobs, ed., GE Research and Development Cen- ter, Schenectady, New York, 1990. Veronis, Jean and Nancy Ide, "Word Sense Disam- biguation with Very Large Neural Networks Extracted from Machine Readable Dictionaries," in Proceedings, COLING-90, pp 389-394, 1990. Yarowsky, David "Word-Sense Disambiguation Using Statistical Models of Roget's Categories Trained on Large Corpora," in Proceedings, COLING-92, Nantes, France, 1992. Yaxowsky, David, "One Sense Per Collocation," in Pro- ceedings, ARPA Human Language Technology Work- shop, Princeton, 1993. Yarowsky, David, "Decision Lists for Lexical Ambigu- ity Resolution: Application to Accent Restoration in Spanish and French," in Proceedings of the 32nd An- nual Meeting of the Association .for Computational Linguistics, Las Cruces, NM, 1994. Yarowsky, David. "Homograph Disambiguation in Speech Synthesis." In J. Hirschberg, R. Sproat and J. van Santen (eds.), Progress in Speech Synthesis, Springer-Verlag, to appear. 196 | 1995 | 26 |
A Quantitative Evaluation of Linguistic Tests for the Automatic Prediction of Semantic Markedness Vasileios Hatzivassiloglou and Kathleen McKeown Department of Computer Science 450 Computer Science Building Columbia University New York, N.Y. 10027 {vh, kathy}~cs, columbia, edu Abstract We present a corpus-based study of methods that have been proposed in the linguistics liter- ature for selecting the semantically unmarked term out of a pair of antonymous adjectives. Solutions to this problem are applicable to the more general task of selecting the positive term from the pair. Using automatically collected data, the accuracy and applicability of each method is quantified, and a statistical analysis of the significance of the results is performed. We show that some simple methods are indeed good indicators for the answer to the problem while other proposed methods fail to perform better than would be attributable to chance. In addition, one of the simplest methods, text frequency, dominates all others. We also ap- ply two generic statistical learning methods for combining the indications of the individual methods, and compare their performance to the simple methods. The most sophisticated complex learning method offers a small, but statistically significant, improvement over the original tests. 1 Introduction The concept of markedness originated in the work of Prague School linguists (Jakobson, 1984a) and refers to relationships between two complementary or antonymous terms which can be distinguished by the presence or absence of a feature (+A versus --A). Such an opposition can occur at various linguistic levels. For example, a markedness contrast can arise at the morphology level, when one of the two words is derived from the other and therefore contains an explicit formal marker such as a prefix; e.g., prof- itable-unprofitable. Markedness contrasts also ap- pear at the semantic level in many pairs of grad- able antonymous adjectives, especially scalar ones (Levinson, 1983), such as tall-short. The marked and unmarked elements of such pairs function in dif- ferent ways. The unmarked adjective (e.g., tall) can be used in how-questions to refer to the property de- scribed by both adjectives in the pair (e.g., height), but without any implication about the modified item relative to the norm for the property. For exam- ple, the question How tall is Jack? can be answered equally well by four or seven feet. In contrast, the marked element of the opposition cannot be used generically; when used in a how-question, it implies a presupposition of the speaker regarding the rela- tive position of the modified item on the adjectival scale. Thus, the corresponding question using the marked term of the opposition (How short is Jack?) conveys an implication on the part of the speaker that Jack is indeed short; the distinguishing feature A expresses this presupposition. While markedness has been described in terms of a distinguishing feature A, its definition does not specify the type of this feature. Consequently, sev- eral different types of features have been employed, which has led into some confusion about the meaning of the term markedness. Following Lyons (1977), we distinguish between formal markedness where the opposition occurs at the morphology level (i.e., one of the two terms is derived from the other through inflection or affixation) and semantic markedness where the opposition occurs at the semantic level as in the example above. 
When two antonymous terms are also morphologically related, the formally unmarked term is usually also the semantically un- marked one (for example, clear-unclear). However, this correlation is not universal; consider the exam- ples unbiased-biased and independent-dependent. In any case, semantic markedness is the more in- teresting of the two and the harder to determine, both for humans and computers. Various tests for determining markedness in gen- eral have been proposed by linguists (see Section 3). However, although potentially automatic versions of some of these have been successfully applied to the problem at the phonology level (Trubetzkoy, 1939; Greenberg, 1966), little work has been done on the empirical validation or the automatic application of those tests at higher levels (but see (Ku~era, 1982) for an empirical analysis of a proposed markedness test at the syntactic level; some more narrowly fo- cused empirical work has also been done on marked- ness in second language acquisition). In this paper 197 we analyze the performance of several linguistic tests for the selection of the semantically unmarked term out of a pair of gradable antonymous adjectives. We describe a system that automatically extracts the relevant data for these tests from text corpora and corpora-based databases, and use this system to measure the applicability and accuracy of each method. We apply statistical tests to determine the significance of the results, and then discuss the per- formance of complex predictors that combine the an- swers of the linguistic tests according to two general statistical learning methods, decision trees and log- linear regression models. 2 Motivation The goal of our work is twofold: First, we are inter- ested in providing hard, quantitative evidence on the performance of markedness tests already proposed in the linguistics literature. Such tests are based on intuitive observations and/or particular theories of semantics, but their accuracy has not been mea- sured on actual data. The results of our analysis can be used to substantiate theories which are com- patible with the empirical evidence, and thus offer insight into the complex linguistic phenomenon of antonymy. The second purpose of our work is practical appli- cations. The semantically unmarked term is almost always the positive term of the opposition (Boucher and Osgood, 1969); e.g., high is positive, while low is negative. Therefore, an automatic method for deter- mining markedness values can also be used to deter- mine the polarity of antonyms. The work reported in this paper helps clarify which types of data and tests are useful for such a method and which are not. The need for an automatic corpus-based method for the identification of markedness becomes appar- ent when we consider the high number of adjectives in unrestricted text and the domain-dependence of markedness values. In the MRC Psycholinguis- tic Database (Coltheart, 1981), a large machine- readable annotated word list, 25,547 of the 150,837 entries (16.94%) are classified as adjectives, not in- cluding past participles; if we only consider regularly used grammatical categories for each word, the per- centage of adjectives rises to 22.97%. For compar- ison, nouns (the largest class) account for 51.28% and 57.47% of the words under the two criteria. In addition, while adjectives tend to have prevalent markedness and polarity values in the language at large, frequently these values are negated in spe- cific domains or contexts. 
For example, healthy is in most contexts the unmarked member of the opposition healthy:sick; but in a hospital setting, sickness rather than health is expected, so sick becomes the unmarked term. The methods we describe are based on the form of the words and their overall statistical properties, and thus cannot predict specific occurrences of markedness reversals. But they can predict the prevalent markedness value for each adjective in a given domain, something which is impractical to do by hand separately for each domain.
We have built a large system for the automatic, domain-dependent classification of adjectives according to semantic criteria. The first phase of our system (Hatzivassiloglou and McKeown, 1993) separates adjectives into groups of semantically related ones. We extract markedness values according to the methods described in this paper and use them in subsequent phases of the system that further analyze these groups and determine their scalar structure.
An automatic method for extracting polarity information would also be useful for the augmentation of lexico-semantic databases such as WordNet (Miller et al., 1990), particularly when the method accounts for the specificities of the domain sublanguage; an increasing number of NLP systems rely on such databases (e.g., (Resnik, 1993; Knight and Luk, 1994)). Finally, knowledge of polarity can be combined with corpus-based collocation extraction methods (Smadja, 1993) to automatically produce entries for the lexical functions used in Meaning-Text Theory (Mel'čuk and Pertsov, 1987) for text generation. For example, knowing that hearty is a positive term enables the assignment of the collocation hearty eater to the lexical function entry MAGN(eater) = hearty.1

1 MAGN stands for magnify.

3 Tests for Semantic Markedness
Markedness in general and semantic markedness in particular have received considerable attention in the linguistics literature. Consequently, several tests for determining markedness have been proposed by linguists. Most of these tests involve human judgments (Greenberg, 1966; Lyons, 1977; Waugh, 1982; Lehrer, 1985; Ross, 1987; Lakoff, 1987) and are not suitable for computer implementation. However, some proposed tests refer to comparisons between measurable properties of the words in question and are amenable to full automation. These tests are:
1. Text frequency. Since the unmarked term can appear in more contexts than the marked one, and it has both general and specific senses, it should appear more frequently in text than the marked term (Greenberg, 1966).
2. Formal markedness. A formal markedness relationship (i.e., a morphology relationship between the two words), whenever it exists, should be an excellent predictor for semantic markedness (Greenberg, 1966; Zwicky, 1978).
3. Formal complexity. Since the unmarked word is the more general one, it should also be morphologically the simpler (Jakobson, 1962; Battistella, 1990). The "economy of language" principle (Zipf, 1949) supports this claim. Note that this test subsumes test (2).
4. Morphological productivity. Unmarked words, being more general and frequently used to describe the whole scale, should be freer to combine with other linguistic elements (Winters, 1990; Battistella, 1990).
5. Differentiation.
Unmarked terms should ex- hibit higher differentiation with more subdis- tinetions (Jakobson, 1984b) (e.g., the present tense (unmarked) appears in a greater variety of forms than the past), or, equivalently, the marked term should lack some subcategories (Greenberg, 1966). The first of the above tests compares the text fre- quencies of the two words, which are clearly mea- surable and easily retrievable from a corpus. We use the one-million word Brown corpus of written American English (Ku~era and Francis, 1967) for this purpose. The mapping of the remaining tests to quantifiable variables is not as immediate. We use the length of a word in characters, which is a rea- sonable indirect index of morphological complexity, for tests (2) and (3). This indicator is exact for the case of test (2), since the formally marked word is derived from the unmarked one through the addition of an affix (which for adjectives is always a prefix). The number of syllables in a word is another rea- sonable indicator of morphological complexity that we consider, although it is much harder to compute automatically than word length. For morphological productivity (test (4)), we mea- sure several variables related to the freedom of the word to receive affixes and to participate in com- pounds. Several distinctions exist for the definition of a variable that measures the number of words that are morphologically derived from a given word. These distinctions involve: Q Whether to consider the number of distinct words in this category (types) or the total fre- quency of these words (tokens). • Whether to separate words derived through affixation from compounds or combine these types of morphological relationships. • If word types (rather than word frequencies) are measured, we can select to count homographs (words identical in form but with different parts of speech, e.g., light as an adjective and light as a verb) as distinct types or map all homographs of the same word form to the same word type. Finally, the differentiation test (5) is the one gen- eral markedness test that cannot be easily mapped into observable properties of adjectives. Somewhat arbitrarily, we mapped this test to the number of grammatical categories (parts of speech) that each word can appear under, postulating that the un- marked term should have a higher such number. The various ways of measuring the quantities com- pared by the tests discussed above lead to the consid- eration of 32 variables. Since some of these variables are closely related and their number is so high that it impedes the task of modeling semantic marked- ness in terms of them, we combined several of them, keeping 14 variables for the statistical analysis. 4 Data Collection In order to measure the performance of the marked- ness tests discussed in the previous section, we collected a fairly large sample of pairs of antony- mous gradable adjectives that can appear in how- questions. The Deese antonyms (Deese, 1964) is the prototypical collection of pairs of antonymous adjec- tives that have been used for similar analyses in the past (Deese, 1964; Justeson and Katz, 1991; Grefen- stette, 1992). However, this collection contains only 75 adjectives in 40 pairs, some of which cannot be used in our study either because they are primar- ily adverbials (e.g., inside-outside) or not gradable (e.g., alive-dead). Unlike previous studies, the na- ture of the statistical analysis reported in this paper requires a higher number of pairs. 
Consequently, we augmented the Deese set with the set of pairs used in the largest manual previous study of markedness in adjective pairs (Lehrer, 1985). In addition, we included all gradable adjectives which appear 50 times or more in the Brown corpus and have at least one gradable antonym; the antonyms were not restricted to belong to this set of frequent adjectives. For each adjective collected according to this last criterion, we included all the antonyms (frequent or not) that were explicitly listed in the Collins COBUILD dictionary (Sinclair, 1987) for each of its senses. This process gave us a sample of 449 adjectives (both frequent and infrequent ones) in 344 pairs.2
We separated the pairs on the basis of the how-test into those that contain one semantically unmarked and one marked term and those that contain two marked terms (e.g., fat-thin), removing the latter. For the remaining pairs, we identified the unmarked member, using existing designations (Lehrer, 1985) whenever that was possible; when in doubt, the pair was dropped from further consideration. We also separated the pairs into two groups according to whether the two adjectives in each pair were morphologically related or not. This allowed us to study the different behavior of the tests for the two groups separately. Table 1 shows the results of this cross-classification of the adjective pairs.

2 The collection method is similar to Deese's: He also started from frequent adjectives but used human subjects to elicit antonyms instead of a dictionary.

                               One unmarked   Both marked   Total
  Morphologically unrelated         211            54        265
  Morphologically related            68             3         71
  Total                             279            57        336

  Table 1: Cross-classification of adjective pairs according to morphological relationship and markedness status.

Our next step was to measure the variables described in Section 3 which are used in the various tests for semantic markedness. For these measurements, we used the MRC Psycholinguistic Database (Coltheart, 1981) which contains a variety of measures for 150,837 entries counting different parts of speech or inflected forms as different words (115,331 distinct words). We implemented an extractor program to collect the relevant measurements for the adjectives in our sample, namely text frequency, number of syllables, word length, and number of parts of speech. All this information except the number of syllables can also be automatically extracted from the corpus. The extractor program also computes information that is not directly stored in the MRC database. Affixation rules from (Quirk et al., 1985) are recursively employed to check whether each word in the database can be derived from each adjective, and counts and frequencies of such derived words and compounds are collected. Overall, 32 measurements are computed for each adjective, and are subsequently combined into the 14 variables used in our study.
Finally, the variables for the pairs are computed as the differences between the corresponding variables for the adjectives in each pair. The output of this stage is a table, with two strata corresponding to the two groups, and containing measurements on 14 variables for the 279 pairs with a semantically unmarked member.
5 Evaluation of Linguistic Tests
For each of the variables, we measured how many pairs in each group it classified correctly.
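A small sketch of this per-variable evaluation is given below; the sign convention and the random treatment of ties are spelled out in the next paragraph. It assumes each pair has been reduced to a signed difference for the variable under test, and the use of scipy's binomial test and the one-sided alternative are illustrative choices rather than necessarily the authors' implementation.

```python
import random
from scipy.stats import binomtest

def evaluate_test(differences, gold_first_unmarked):
    """Accuracy, applicability, and a binomial p-value for one indicator.

    differences[i] > 0 predicts the first adjective of pair i as unmarked,
    < 0 predicts the second, and 0 makes no prediction (such cases are
    assigned randomly).  For variables like word length, where smaller
    values indicate the unmarked term, the sign should be flipped first.
    gold_first_unmarked[i] is True when the first adjective is the
    semantically unmarked one.
    """
    n = len(differences)
    decided = sum(1 for d in differences if d != 0)
    applicability = decided / n

    correct = 0
    for d, gold in zip(differences, gold_first_unmarked):
        pred_first = (d > 0) if d != 0 else (random.random() < 0.5)
        correct += (pred_first == gold)
    accuracy = correct / n

    # Null hypothesis: the indicator is a random binary classifier (p = 0.5).
    p_value = binomtest(correct, n, 0.5, alternative='greater').pvalue
    return applicability, accuracy, p_value
```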
A positive (negative) value indicates that the first (second) adjective is the unmarked one, except for two variables (word length and number of syllables) where the opposite is true. When the difference is zero, the variable selects neither the first nor the second adjective as unmarked. The percentage of nonzero differences, which correspond to cases where the test actually suggests a choice, is reported as the applicability of the variable. For the purpose of evaluating the accuracy of the variable, we assign such cases randomly to one of the two possible outcomes, in accordance with common practice in classification (Duda and Hart, 1973).

For each variable and each of the two groups, we also performed a statistical test of the null hypothesis that its true accuracy is 50%, i.e., equal to the expected accuracy of a random binary classifier. Under the null hypothesis, the number of correct responses follows a binomial distribution with parameter p = 0.5. Since all obtained measurements of accuracy were higher than 50%, any rejection of the null hypothesis implies that the corresponding test is significantly better than chance.

Table 2 summarizes the values obtained for some of the 14 variables in our data and reveals some surprising facts about their performance. The frequency of the adjectives is the best predictor in both groups, achieving an overall accuracy of 80.64% with high applicability (98.5-99%). This is all the more remarkable in the case of the morphologically related adjectives, where frequency outperforms length of the words; recall that the latter directly encodes the formal markedness relationship, so frequency is able to correctly classify some of the cases where formal and semantic markedness values disagree. On the other hand, tests based on the "economy of language" principle, such as word length and number of syllables, perform badly when formal markedness relationships do not exist, with lower applicability and very low accuracy scores. The same can be said about the test based on the differentiation properties of the words (number of different parts of speech). In fact, for these three variables, the hypothesis of random performance cannot be rejected even at the 5% level. Tests based on the productivity of the words, as measured through affixation and compounding, tend to fall in-between: their accuracy is generally significant, but their applicability is sometimes low, particularly for compounds.

6 Predictions Based on More than One Test

While the frequency of the adjectives is the best single predictor, we would expect to gain accuracy by combining the answers of several simple tests. We consider the problem of determining semantic markedness as a classification problem with two possible outcomes ("the first adjective is unmarked" and "the second adjective is unmarked"). To design an appropriate classifier, we employed two general statistical supervised learning methods, which we briefly describe in this section.

Decision tree learning (Quinlan, 1986) is the first statistical supervised learning paradigm that we explored. A popular method for the automatic construction of such trees is binary recursive partitioning, which constructs a binary tree in a top-down fashion.
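The per-variable evaluation described earlier in this section, applicability as the share of nonzero differences, accuracy with random assignment of ties, and an exact binomial test against chance-level accuracy, can be sketched as follows. The code is illustrative rather than the paper's own; the toy input of 68 pairs with 54 correct decisions is chosen only to show the computation, and the actual results appear in Table 2 below.

```python
import random
from math import comb

def evaluate_test(differences, seed=0):
    """Applicability, accuracy, and a two-sided binomial P-value for one
    difference variable, oriented so that positive values mark the
    semantically unmarked member.  Zero differences (no choice made) are
    resolved at random, as described above."""
    rng = random.Random(seed)
    n = len(differences)
    nonzero = sum(1 for d in differences if d != 0)
    applicability = nonzero / n

    correct = 0
    for d in differences:
        if d == 0:
            d = rng.choice((-1, +1))        # random tie-breaking
        if d > 0:
            correct += 1
    accuracy = correct / n

    # Exact two-sided binomial test of H0: true accuracy = 50%.
    k = max(correct, n - correct)
    p_value = min(1.0, 2 * sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n)
    return applicability, accuracy, p_value

# Toy example: 68 pairs, 54 classified correctly, all applicable.
diffs = [+1] * 54 + [-1] * 14
print(evaluate_test(diffs))   # accuracy about 0.794, P-value about 1.1e-06
```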
                                       Morphologically Unrelated                  Morphologically Related
Test                                 Applicability  Accuracy   P-Value       Applicability  Accuracy   P-Value
Frequency                                99.05%      75.36%    8.4·10^-14        98.53%      97.06%    < 10^-16
Number of syllables                      58.29%      55.92%    0.098             95.59%      92.65%    7.7·10^-14
Word length                              83.41%      52.13%    0.582            100.00%      95.59%    4.4·10^-16
Number of homographs                     71.09%      56.87%    0.054             66.18%      79.41%    1.1·10^-6
Total number of compounds                64.45%      61.14%    0.0015            14.71%      60.29%    0.114
Unique words derived by affixation       95.26%      66.35%    2.3·10^-6         98.53%      94.12%    5.8·10^-15
Total frequency of derived words         82.46%      66.35%    2.3·10^-6         83.82%      91.18%    8.2·10^-13

Table 2: Evaluation of simple markedness tests. The probability of obtaining by chance performance equal to or better than the observed one is listed in the P-Value column for each test.

Starting from the root, the variable X which better discriminates among the possible outcomes is selected and a test of the form X < constant is associated with the root node of the tree. All training cases for which this test succeeds (fails) belong to the left (right) subtree of the decision tree. The method proceeds recursively, by selecting a new variable (possibly the same as in the parent node) and a new cutting point for each subtree, until all the cases in one subtree belong to the same category or the data becomes too sparse. When a node cannot be split further, it is labeled with the locally most probable category. During prediction, a path is traced from the root of the tree to a leaf, and the category of the leaf is the category reported.

If the tree is left to grow uncontrolled, it will exactly represent the training set (including its peculiarities and random variations), and will not be very useful for prediction on new cases. Consequently, the growing phase is terminated before the training samples assigned to the leaf nodes are entirely homogeneous. A technique that improves the quality of the induced tree is to grow a larger than optimal tree and then shrink it by pruning subtrees (Breiman et al., 1984). In order to select the nodes to shrink, we normally need to use new data that has not been used for the construction of the original tree.

In our classifier, we employ a maximum likelihood estimator based on the binomial distribution to select the optimal split at each node. During the shrinking phase, we optimally regress the probabilities of children nodes to their parent according to a shrinking parameter α (Hastie and Pregibon, 1990), instead of pruning entire subtrees. To select the optimal value for α, we initially held out a part of the training data. In a later version of the classifier, we employed cross-validation, separating our training data in 10 equally sized subsets and repeatedly training on 9 of them and validating on the other.

Log-linear regression (Santner and Duffy, 1989) is the second general supervised learning method that we explored. In classical linear modeling, the response variable y is modeled as y = bᵀx + e, where b is a vector of weights, x is the vector of the values of the predictor variables, and e is an error term which is assumed to be normally distributed with zero mean and constant variance, independent of the mean of y. The log-linear regression model generalizes this setting to binomial sampling where the response variable follows a Bernoulli distribution (corresponding to a two-category outcome); note that the variance of the error term is not independent of the mean of y any more.
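Before turning to the details of the regression model, the tree construction described above can be approximated with an off-the-shelf recursive-partitioning learner. The sketch below uses scikit-learn's CART trees on random stand-in data; it is only an approximation, since the paper's classifier selects splits with a binomial maximum-likelihood criterion and shrinks node probabilities rather than pruning with the cost-complexity penalty used here.

```python
# Stand-in sketch only: scikit-learn CART in place of the binomial
# maximum-likelihood splits and probability shrinking described above,
# on randomly generated data of the same shape (279 pairs, 14 variables).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(279, 14))                                   # 14 difference variables per pair
y = (X[:, 0] + 0.3 * rng.normal(size=279) > 0).astype(int)       # which adjective is unmarked (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# Grow a tree and then shrink it; here cost-complexity pruning (ccp_alpha)
# plays the role that the shrinking parameter plays in the paper's classifier.
tree = DecisionTreeClassifier(criterion="entropy", ccp_alpha=0.01, random_state=0)
tree.fit(X_train, y_train)
print("held-out accuracy:", tree.score(X_test, y_test))
```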
The resulting generalized linear model (McCullagh and Nelder, 1989) employs a linear predictor η = bᵀx + e as before, but the response variable y is non-linearly related to η through the inverse logit function, y = e^η / (1 + e^η). Note that y ∈ (0, 1); each of the two ends of that interval is associated with one of the possible choices. We employ the iterative reweighted least squares algorithm (Baker and Nelder, 1989) to approximate the maximum likelihood estimate of the vector b, but first we explicitly drop the constant term (intercept) and most of the variables. The intercept is dropped because the prior probabilities of the two outcomes are known to be equal. 3 Several of the variables are dropped to avoid overfitting (Duda and Hart, 1973); otherwise the regression model will use all available variables, unless some of them are linearly dependent. To identify which variables we should keep in the model, we use the analysis of deviance method with iterative stepwise refinement of the model, iteratively adding or dropping one term if the reduction (increase) in the deviance compares favorably with the resulting loss (gain) in residual degrees of freedom. Using a fixed training set, six of the fourteen variables were selected for modeling the morphologically unrelated adjectives. Frequency was selected as the only component of the model for the morphologically related ones.

3 The order of the adjectives in the pairs is randomized before training the model, to ensure that both outcomes are equiprobable.

We also examined the possibility of replacing some variables in these models by smoothing cubic B-splines (Wahba, 1990). The analysis of deviance for this model indicated that for the morphologically unrelated adjectives, one of the six selected variables should be removed altogether and another should be replaced by a smoothing spline.

7 Evaluation of the Complex Predictors

For both decision trees and log-linear regression, we repeatedly partitioned the data in each of the two groups into equally sized training and testing sets, constructed the predictors using the training sets, and evaluated them on the testing sets. This process was repeated 200 times, giving vectors of estimates for the performance of the various methods. The simple frequency test was also evaluated in each testing set for comparison purposes. From these vectors, we estimate the density of the distribution of the scores for each method; Figure 1 gives these densities for the frequency test and the log-linear model with smoothing splines on the most difficult case, the morphologically unrelated adjectives.

Figure 1: Probability densities for the accuracy of the frequency method (dotted line) and the smoothed log-linear model (solid line) on the morphologically unrelated adjectives. (Accuracy, 40% to 90%, is plotted on the horizontal axis.)

Table 3 summarizes the performance of the methods on the two groups of adjective pairs. 4 In order to assess the significance of the differences between the scores, we performed a nonparametric sign test (Gibbons and Chakraborti, 1992) for each complex predictor against the simple frequency variable. The test statistic is the number of runs where the score of one predictor is higher than the other's; as is common in statistical practice, ties are broken by assigning half of them to each category.

4 The applicability of all complex methods was 100% in both groups.
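The repeated-partition evaluation and the paired sign test just described can be sketched as follows. The code is illustrative only: the two score vectors are randomly generated stand-ins rather than the scores obtained in the experiments, and the even split of ties described above is rounded to an integer before taking the exact binomial tail.

```python
import random
from math import comb

def sign_test(scores_a, scores_b):
    """Two-sided sign test over paired scores from repeated runs, with ties
    split evenly between the two predictors (rounded here for simplicity)."""
    wins_a = sum(a > b for a, b in zip(scores_a, scores_b))
    ties = sum(a == b for a, b in zip(scores_a, scores_b))
    n = len(scores_a)
    stat = wins_a + ties / 2.0
    k = int(round(max(stat, n - stat)))
    return min(1.0, 2 * sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n)

# Invented accuracy scores for 200 random training/testing partitions.
random.seed(0)
freq_scores  = [0.75 + 0.02 * random.random() for _ in range(200)]
model_scores = [0.76 + 0.02 * random.random() for _ in range(200)]
print("sign-test P-value:", sign_test(model_scores, freq_scores))
```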
Under the null hypothesis of equal performance of the two methods that are contrasted, this test statistic follows the binomial distribution with p = 0.5. Table 3 includes the exact probabilities for obtaining the observed (or more extreme) values of the test statistic.

                                       Morphologically unrelated   Morphologically related        Overall
Predictor tested                        Accuracy     P-Value        Accuracy     P-Value      Accuracy    P-Value
Frequency                                75.87%         -             97.15%        -           81.07%       -
Decision tree (no cross-validation)      64.99%     8.2·10^-53        94.40%    1.5·10^-10      72.05%    1.7·10^-??
Decision tree (cross validated)          69.13%     ??·10^-40         94.40%    1.5·10^-10      75.19%    7.2·10^-47
Log-linear model (no smoothing)          76.52%     0.0281            97.17%    1.00            81.55%    0.0228
Log-linear model (with smoothing)        76.82%     0.0008            97.17%    1.00            81.77%    0.0008

Table 3: Evaluation of the complex predictors. The probability of obtaining by chance a difference in performance relative to the simple frequency test equal to or larger than the observed one is listed in the P-Value column for each complex predictor.

From the table, we observe that the tree-based methods perform considerably worse than frequency (significant at any conceivable level), even when cross-validation is employed. Both the standard and smoothed log-linear models outperform the frequency test on the morphologically unrelated adjectives (significant at the 5% and 0.1% levels respectively), while the log-linear model's performance is comparable to the frequency test's on the morphologically related adjectives. The best predictor overall is the smoothed log-linear model. 5

5 It should be noted here that the independence assumption of the sign test is mildly violated in these repeated runs, since the scores depend on collections of independent samples from a finite population. This mild dependence will increase somewhat the probabilities under the true null distribution, but we can be confident that probabilities such as 0.08% will remain significant.

The above results indicate that the frequency test essentially contains almost all the information that can be extracted collectively from all linguistic tests. Consequently, even very sophisticated methods for combining the tests can offer only small improvement. Furthermore, the prominence of one variable can easily lead to overfitting the training data in the remaining variables. This causes the decision tree models to perform badly.

8 Conclusions and Future Work

We have presented a quantitative analysis of the performance of measurable linguistic tests for the selection of the semantically unmarked term out of a pair of antonymous adjectives. The analysis shows that a simple test, word frequency, outperforms more complicated tests, and also dominates them in terms of information content. Some of the tests that have been proposed in the linguistics literature, notably tests that are based on the formal complexity and differentiation properties of the words, fail to give any useful information at all, at least with the approximations we used for them (Section 3). On the other hand, tests based on morphological productivity are valid, although not as accurate as frequency.

Naturally, the validity of our results depends on the quality of our measurements. While for most of the variables our measurements are necessarily ap-
proximate, we believe that they are nevertheless of acceptable accuracy since (1) we used a representa- tive corpus; (2) we selected both a large sample of adjective pairs and a large number of frequent ad- jectives to avoid sparse data problems; (3) the pro- cedure of identifying secondary words for indirect measurements based on morphological productivity operates with high recall and precision; and (4) the mapping of the linguistic tests to comparisons of quantitative variables was in most cases straightfor- ward, and always at least plausible. The analysis of the linguistic tests and their com- binations has also led to a computational method for the determination of semantic markedness. The method is completely automatic and produces ac- curate results at 82% of the cases. We consider this performance reasonably good, especially since no previous automatic method for the task has been proposed. While we used a fixed set of 449 adjec- tives for our analysis, the number of adjectives in unrestricted text is much higher, as we noted in Sec- tion 2. This multitude of adjectives, combined with the dependence of semantic markedness on the do- main, makes the manual identification of markedness values impractical. In the future, we plan to expand our analy- sis to other classes of antonymous words, particu- larly verbs which are notoriously difficult to ana- lyze semantically (Levin, 1993). A similar method- ology can be applied to identify unmarked (posi- tive) versus marked (negative) terms in pairs such as agree: dissent. Acknowledgements This work was supported jointly by the Advanced Research Projects Agency and the Office of Naval Research under contract N00014-89-J-1782, and by the National Science Foundation under contract GER-90-24069. It was conducted under the auspices of the Columbia University CAT in High Perfor- mance Computing and Communications in Health- care, a New York State Center for Advanced Tech- nology supported by the New York State Science and Technology Foundation. We wish to thank Judith Klavans, Rebecca Passonneau, and the anonymous reviewers for providing us with useful comments on earlier versions of the paper. References R. J. Baker and J. A. Nelder. 1989. The GLIM System, Release 3: Generalized Linear Interactive Modeling. Numerical Algorithms Group, Oxford. Edwin L. Battistella. 1990. Markedness: The Eval- uative Superstructure of Language. State Univer- sity of New York Press, Albany, NY. T. Boucher and C. E. Osgood. 1969. The Polyanna hypothesis. Journal of Verbal Learning and Verbal Behavior, 8:1-8. Leo Breiman, J. H. Friedman, R. Olshen, and C. J. Stone. 1984. Classification and Regression Trees. Wadsworth International Group, Belmont, CA. M. Coltheart. 1981. The MRC Psycholinguis- tic Database. Quarterly Journal of Experimental Psychology, 33A:497-505. James Deese. 1964. The associative structure of some common English adjectives. Journal of Ver- bal Learning and Verbal Behavior, 3(5):347-357. Richard O. Duda and Peter E. Hart. 1973. Pattern Classification and Scene Analysis. Wiley, New York. Jean Dickinson Gibbons and Subhabrata Chak- raborti. 1992. Nonparametric Statistical Infer- ence. Marcel Dekker, New York, 3rd edition. 203 Joseph H. Greenberg. 1966. Language Universals. Mouton, The Hague. Gregory Grefenstette. 1992. Finding semantic simi- larity in raw text: The Deese antonyms. In Prob- abilistic Approaches to Natural Language: Papers from the 1992 Fall Symposium. AAAI. T. Hastie and D. Pregibon. 1990. Shrinking trees. 
Technical report, AT&T Bell Laboratories. Vasileios Hatzivassiloglou and Kathleen McKeown. 1993. Towards the automatic identification of ad- jectival scales: Clustering adjectives according to meaning. In Proceedings of the 31st Annual Meet- ing of the ACL, pages 172-182, Columbus, Ohio. Roman Jakobson. 1962. Phonological Studies, vol- ume 1 of Selected Writings. Mouton, The Hague. Roman Jakobson. 1984a. The structure of the Rus- sian verb (1932). In Russian and Slavic Grammar Studies 1931-1981, pages 1-14. Mouton, Berlin. Roman Jakobson. 1984b. Zero sign (1939). In Russian and Slavic Grammar Studies 1931-1981, pages 151-160. Mouton, Berlin. John S. Justeson and Slava M. Katz. 1991. Co- occurrences of antonymous adjectives and their contexts. Computational Linguistics, 17(1):1-19. Kevin Knight and Steve K. Luk. 1994. Building a large-scale knowledge base for machine transla- tion. In Proceedings of the 12th National Confer- ence on Artificial Intelligence (AAAI-94). AAAI. Henry KuSera and Winthrop N. Francis. 1967. Computational Analysis of Present-Day American English. Brown University Press, Providence, RI. Henry Ku6era. 1982. Markedness and frequency: A computational analysis. In Jan Horecky, edi- tor, Proceedings of the Ninth International Con- ference on Computational Linguistics (COLING- 82), pages 167-173, Prague. North-Holland. George Lakoff. 1987. Women, Fire, and Dangerous Things. University of Chicago Press, Chicago. Adrienne Lehrer. 1985. Markedness and antonymy. Journal of Linguistics, 31(3):397-429, September. Beth Levin. 1993. English Verb Classes and Alter- nations: A Preliminary Investigation. University of Chicago Press, Chicago. Stephen C. Levinson. 1983. Pragmatics. Cambridge University Press, Cambridge, England. John Lyons. 1977. Semantics, volume 1. Cambridge University Press, Cambridge, England. Peter McCullagh and John A. Nelder. 1989. Gen- eralized Linear Models. Chapman and Hall, Lon- don, 2nd edition. Igor A. Mel'~uk and Nikolaj V. Pertsov. 1987. Sur- face Syntax of English: a Formal Model within the Meaning-Text Framework. Benjamins, Ams- terdam and Philadelphia. George A. Miller, R. Beckwith, C. Fellbaum, D. Gross, and K. J. Miller. 1990. WordNet: An on-line lexical database. International Journal of Lexicography (special issue), 3(4):235-312. John R. Quinlan. 1986. Induction of decision trees. Machine Learning, 1(1):81-106. Randolph Quirk, Sidney Greenbaum, Geoffrey Leech, and Jan Svartvik. 1985. A Comprehensive Grammar of the English Language. Longman, London and New York. Philip Resnik. 1993. Semantic classes and syntactic ambiguity. In Proceedings of the ARPA Workshop on Human Language Technology. ARPA Informa- tion Science and Technology Office. John R. Ross. 1987. Islands and syntactic pro- totypes. In B. Need et ah, editors, Papers from the 23rd Annual Regional Meeting of the Chicago Linguistic Society (Part I: The General Session), pages 309-320. Chicago Linguistic Soci- ety, Chicago. Thomas J. Santner and Diane E. Duffy. 1989. The Statistical Analysis of Discrete Data. Springer- Verlag, New York. John Sinclair (editor in chief). 1987. Collins COBUILD English Language Dictionary. Collins, London. Frank Smadja. 1993. Retrieving collocations from text: XTRACT. Computational Linguistics, 19(1):143-177, March. Nikolai S. Trubetzkoy. 1939. Grundzuger der Phonologic. Travaux du Cercle Linguistique de Prague 7, Prague. English translation in (Trubet- zkoy, 1969). Nikolai S. Trubetzkoy. 1969. Principles of Phonol- ogy. 
University of California Press, Berkeley and Los Angeles, California. Translated into English from (Trubetzkoy, 1939). Grace Wahba. 1990. Spline Models for Observa- tional Data. CBMS-NSF Regional Conference se- ries in Applied Mathematics. Society for Indus- trial and Applied Mathematics (SIAM), Philadel- phia, PA. Linda R. Waugh. 1982. Marked and unmarked: A choice between unequals. Semiotica, 38:299-318. Margaret Winters. 1990. Toward a theory of syn- tactic prototypes. In Savas L. Tsohatzidis, editor, Meanings and Prototypes: Studies in Linguistic Categorization, pages 285-307. Routledge, Lon- don. George K. Zipf. 1949. Human Behavior and the Principle of Least Effort: An Introduction to Hu- man Ecology. Addison-Wesley, Reading, MA. A. Zwicky. 1978. On markedness in morphology. Die Spra'che, 24:129-142. 204 | 1995 | 27 |
Quantifier Scope and Constituency Jong C. Park Computer and Information Science University of Pennsylvania 200 South 33rd Street, Philadelphia, PA 19104-6389, USA park@line, cis. upenn, edu Abstract Traditional approaches to quantifier scope typically need stipulation to exclude rea- dings that are unavailable to human under- standers. This paper shows that quantifier scope phenomena can be precisely charac- terized by a semantic representation cons- trained by surhce constituency, if the di- stinction between referential and quantifi- cational NPs is properly observed. A CCG implementation is described and compared to other approaches. 1 Introduction It is generally assumed that sentences with multi- ple quantified NPs are to be interpreted by one or more unambiguous logical forms in which the scope of traditional logical quantifiers determines the rea- ding or readings. There are two problems with this assumption: (a) without further stipulation there is a tendency to allow too many readings and (b) there is considerable confusion as to how many readings should be allowed arising from contamination of the semantics of many NL quantifiers by referentiality. There are two well-known techniques for redis- tributing quantifiers in quantification structures: quantifying-in (Montague, 1974; Cooper, 1983; Kel- ler, 1988; Carpenter, 1994) and quantifier raising (May, 1985). The former provides a compositio- nal way of putting possibly embedded quantifiers to the scope-taking positions, and the latter utili- zes a syntactic movement operation at the level of semantics for quantifier placement. There are also approaches that put more emphasis on utilizing con- textual information in restricting the generation of semantic forms by choosing a scope-neutral repre- sentation augmented with ordering constraints to capture linguistic judgments (Webber, 1979; Kamp, 1981; Helm, 1983; Poesio, 1991; Reyle, 1993). And there are computational approaches that screen una- vailable and/or redundant semantic forms (Hobbs Shieber, 1987; Moran, 1988; Vestre, 1991). This pa- per will show that these approaches allow unavaila- ble readings, and thereby miss an important gene- ralization concerning the readings that actually are available. This paper examines English constructions that allow multiple occurrences of quantified NPs: NP modifications, transitive or ditransitive verbs, that complements, and coordinate structures. Based on a critical analysis of readings that are available from these data, the claim is that scope phenomena can be characterized by a combination of syntactic sur- face adjacency and semantic function-argument re- lationship. This characterization will draw upon the old distinction between referential and quantificatio- nal NP-semantics (Fodor & Sag, 1982). We choose to use Combinatory Categorial Grammar to show how surface adjacency affects semantic function- argument relationship, since CCG has the flexibility of composing almost any pair of adjacent constitu- ents with a precise notion of syntactic grammatica- lity (Steedman, 1990; 1993). z The rest of the paper is organized as follows. First, we discuss in §2 how traditional techniques address availability of readings and note some residual pro- blems. Then we give a brief analysis of available readings (§3), a generalization of the analysis (§4), and finally describe a computational implementation in Prolog (~5). 
2 Traditional Approaches All three paradigms of grammar formalisms intro- duced earlier share similar linguistic judgments for their grammaticality analyses. This section exami- nes quantifying-in to show (a) that quantifying- in is a powerful device that allows referential NP- interpretations and (b) that quantifying-in is not suf- ficiently restricted to account for the available rea- dings for quantificational NP-interpretations. Quantifying-in is a technique originally introdu- ced to produce appropriate semantic forms for de re interpretations of NPs inside opaque operators 1 For instance, the result would transfer to Synchro- nous "I~ee Adjoining Grammar (Shieber & Schabes, 1990) without much change. 205 (Montague, 1974). For example, (a) below has two readings, de re and de dicto, depending on the rela- tivity of the existence of such an individual. They are roughly interpretable as (b) and (@2 (1) (a) John believes that a Republican will win. (b) 3r.repub(r) A bel(john, uill(uin(r))) (C) bel(john, 3r.repub(r) A uill(uin(r))) (b) has a binder 3 that is quaati.fving a variable r inside an opaque operator bel, hence the name for the technique. (c) does not have such an interven- ing operator. Although it is beyond the scope of the present paper to discuss further details of intensio- nality, it is clear that de re interpretations of NPs are strongly related to referential NP-semantics, in the sense that the de re reading of (a) is about a referred individual and not about an arbitrary such individual. Quantifying-in is designed to make any (possibly embedded) NP take the matrix scope, by leaving a scoped variable in the argument position of the original NP. This would be acceptable for re- ferential NP-semantics. Montague also proposed to capture purely exten- sional scope ambiguities using quantifying-in. For example, wide scope reading of a woman in (a) below is accounted for by quantifying-in (with a meaning postulate), patterned after one for (b). (2) (a) Every man loves a woman. (b) Every man seeks a white unicorn. His suggestion is adopted with various subsequent revisions cited earlier. Since any NP, referential or quantificational, requires quantifying-in to outscope another, quantifying-in consequently confounds re- ferential and quantificational NP-semantics. This causes a problem when there is a distributional dif- ference between referential NPs and non-referential NPs, as Fodor & Sag (1982) have argued, a view which has been followed by the approaches to dy- namic interpretation of indefinite NPs cited earlier. It seems hard to reconcile quantifying-in with these observations. 3 Availability of Readings This section proposes a way of sharpening our intui- tion on available readings and re-examines traditio- nal linguistic judgments on grammatical readings. While there are undoubted differences in degree of availability among readings dependent upon se- mantics or discourse preference (Bunt, 1985; Moran, 1988), we will focus on all-or-none structural possi- bilities afforded by competence grammar. 3 2In this simplistic notation, we gloss over tense ana- lysis, among others. 3Moran's preference-based algorithm treats certain readings as "highly unpreferred," effectively making them structurally unavailable, from those possible sco- Consider the following unambiguous quantifica- tion structure in a generalized quantifier format (hereafter oq, Barwise & Cooper, 1981), where quantifier outscopes any quantifiers that may oc- cur in either restriction or body. 
(3) quantifier(variable, restriction, body) Logical forms as notated this way make explicit the functional dependency between the denotations of two ordered quantificational NPs. For example~ con- sider (4) (a) (Partee, 1975). (b) shows one way of representing it in a GQ format. (4) (a) Three Frenchmen visited five Russians. (b) three(f, frenchmen(f), five(r, russians (r), visited(f, r) ) ) We can always argue, by enriching the notation, that (4) (b) represents at least four different readings, de- pending on the particular sense of each involved NP, i.e., group- vs individual-denoting. In every such reading, however, the truth of (4) (b) depends upon finding appropriate individuals (or the group) for f such that each of those individuals (or the group itself) gets associated with appropriate individuals (or a group of individuals) for r via the relation visil;ed. 4 Notice that there is always a functional dependency of individuals denoted by r upon indi- viduals denoted by f. We claim that this explicit functional dependency can be utilized to test availa- bility of readings. 5 First, consider the following sentences without coordination. (5) (a) Two representatives of three companies saw most samples. (b) Every dealer shows most customers at most three cars. (c) Most boys think that every man danced with two women. (a) has three quantifiers, and there are 6 different ways of ordering them. Hobbs & Shieber (1987) show that among these, the reading in which two re- presentatives outscopes most samples which in turn outscopes three companies is not available from the sentence. They attribute the reason to the logical structure of English as in (3), as it is considered unable to afford an unbound variable, a constraint known as the unbound variable constraint (uvc). 6 We should note, however, that there is one reading pings generated by a scheme similar to Hobbs & Shieber (1887). We clash that competence grammax makes even fewer readings available in the first place. 4Without losing generality, therefore, we will consider only individual-denoting NPs in this paper. SSingular NPs such as a company are not helpful to this task since their denotations do not involve multi- ple individuals which explicitly induce this functional dependency. eThe reading would be represented as follows, which has the first occurrence of the variable c left unbound. 206 among the remaining five that the uvc allows which in fact does not appear to be available. This is the one in which three companies outscopes most samp- les which in turn outscopes two representatives (cf. Horn (1972), Fodor (1982)). 7 This suggests that the uvc may not be the only principle under which Hobbs & Shieber's reading is excluded, s The other four readings of (a) are self-evidently available. If we generalize over available readings, they are only those that have no quantifiers which intercalate over NP boundaries. 9 (5) (b) has three quantifiers too, but unlike (5) (a), all the six ways of ordering the quantifiers are available. (5) (c) has only four available readings, where most boys does not intercalate every man and two women. 1° Consider now sentences including coordination. (6) (a) Every girl admired, but most boys dete- sted, one of the saxophonists. (b) Most boys think that every man danced with, but doubt that a few boys talked to, more than two women. As Geach (1970) pointed out, (a) has only two gram- matical readings, though it has three quantifiers. 
In reading 1, the same saxophonist was admired and detested at the same time. In reading 2, every girl admired an arbitrary saxophonist and most boys also detested an arbitrary saxophonist. In particu- lar, missing readings include the one in which every girl admired the same saxophonist and most boys detested the same but another saxophonist. (6) (b) rio(r, rep(r) It of(r,c), most(a, samp(s), three(c, comp(c), sag(r,s)))) 7To paraphrase this impossible reading, it is true of a situation under which there were three companies such that there were four samples for each such company such that each such sample was seen by two representatives of that company. Crucially, samples seen by representatives of different companies were not necessarily the same. SThis should not be taken as denying the reality of the uvc itself. For example, as one of the referees pointed out, the uvc is required to explain why, in (a) below, every professor must outscope a friend so as to bind the pronoun his. (a) Most students talked to a friend of every pro- fessor about his work. 9One can replace most samples with other complex NP such as most samples of at least five products to see this. Certain sentences that apparently escape this ge- nerafization will be discussed in the next section. 1°To see why they are available, it is enough to see that (a) and (b) below have two readings each. (a) 3ohn thinks that every man danced with two women. (b) Most boys think that Bill danced with two women. also has only two grammatical readings. In one, most boys outscopes every man and a few boys which together outscope more than two women. In the other, more than two women outscopes every man and a few boys, which together outscope most boys. 4 An Account of Availability This section proposes a generalization at the level of semantics for the phenomena described earlier and considers its apparent counterexamples. Consider a language £ for natural language se- mantics that explicitly represents function-argument relationships (Jackendoff, 1972). Suppose that in £: the semantic form of a quantified NP is a syntactic argument of the semantic form of a verb or a pre- position. (7) through (10) below show well-formed expressions in £.11 (7) visitld(five(rulsiim) ,thrse(frencluiin)) (8) saw(most (sanp) ,of (thres(cmap) ,two(rap))) (9) show (three(car) ,most (cstmr), every(dlr)) (10) think(Adlmced(two(woman) ,every(nan)), most (boy)) For instance, of has two arguments three(comp) and two(rep), and show has three arguments. /: gives rise to a natural generalization of available readings as summarized below. 12 (11) For a function with n arguments, there are n! ways of successively providing all the ar- guments to the function. This generalization captures the earlier observations about availability of readings. (7), for (4) (a), has two (2!) readings, as viaited has two arguments. (8) is an abstraction for four (2!x2!) readings, as both of and maw have two arguments each. (9) is an abstraction for six (3!) readings, as show has three arguments. Likewise, (10) is an abstraction for four readings. Coordination gives an interesting constraint on availability of readings. Geach's observation that (6) (a) has two readings suggests that the scope of the object must be determined before it reduces with the coordinate fragment. Suppose that the non- standard constituent for one of the conjuncts in (6) (a) has a semantic representation shown below. 
(12) ~z adnired(z,svery(girl)) Geach's observation implies that (12) is ambiguous, so that every(girl) can still take wide (or narrow) scope with respect to the unknown argument. A 11The up-operator ^ in (10) takes a term of type t to a term of type e, but a further description of £ is not relevant to the present discussion. 12Nan (1991)'s work is based on a related observation, though he does not make use of the distinction between referential and quantificational NP-semantics. 207 theory of CCG will be described in the next sec- tion to show how to derive scoped logical forms for available readings only. But first we must consider some apparent coun- terexamples to the generalization, (13) (a) Three hunters shot at five tigers. (b) Every representative of a company saw most samples. The obvious reading for (a) is called conjunctive or cumulative (Partee, 1975; Webber 1979). In this reading, there are three hunters and five tigers such that shooting events happened between the two par- ties. Here, arguments are not presented in succes- sion to their function, contrary to the present gene- ralization. Notice, however, that the reading must have two (or more) referential NPs (Higginbotham, 1987). 13 The question is whether our theory should predict this possibility as well. For a precise notion of availability, we claim that we must appeal to the distinction between referential and quantificational NP-semantics, since almost any referential NP can have the appearance of taking the matrix scope, wi- thout affecting the rest of scope phenomena. A re- lated example is (b), where in one reading a referen- tial NP a company arguably outscopes most samples which in turn outscopes every representative (Hobbs & Shieber, 1987). As we have pointed out earlier, the reading does not generalize to quantified NPs in general. (14) (a) Some student will investigate two dia- lects of every language. (b) Some student will investigate two dia- lects of, and collect all interesting examp- les of coordination in, every language. (c) * Two representative of at least three companies touched, but of few universi- ties saw, most samples. (a) has a reading in which every language outscopes some student which in turn outscopes two dialects (May, 1985). In a sense, this has intercalating NP quantifiers, an apparent problem to our generaliza- tion. However, the grammaticality of (b) opens up the possibility that the two conjuncts can be repre- sented grammatically as functions of arity two, si- milar to normal transitive verbs. Notice that the generalization is not at work for the fragment of at least three companies touched in (c), since the con- junct is syntactically ungrammatical. At the end of next section, we show how these finer distinctions are made under the CCG framework (See discussion of Figure 5). IZFor example, (a) below lacks such a reading. (a) Several men danced with few women. 5 A CCG Implementation This section describes a CCG approach to deriving scoped logical forms so that they range over only grammatical readings. We will not discuss details of how CCG charac- terizes natural language syntactically, and refer the interested reader to Steedman (1993). CCGs make use of a limited set of combinators, type raising (T), function composition (B), and function substitution (S), with directionality of combination for syntac- tic grammaticality. For the examples in this pa- per, we only need type raising and function composi- tion, along with function application. 
The following shows rules of derivation that we use. Each rule is associated with a label, such as > or <B etc, shown at the end. (15) (a) x/v ~ => x (>) (b) Y x\~ => x (<) (c) x/v Y/Z => x/z (>a) (d) Y\z x\Y ffi> x\z (<e) (e) np => T/(T\np) (>T) (f) np => T\(T/np) (<T) The mapping from syntax to semantics is usually defined in two different ways. One is to use ele- mentary categories, such as np or s, in encoding both syntactic types and logical forms (Jowsey, 1990; Steedman, 1990; Park, 1992). The other is to asso- ciate the entire lexical category with a higher-order expression (Kulick, 1995). In this paper, we take the former alternative to describe a first-order rendering of CCG. Some lexical entries for every are shown below. (16) (s :q-every (X, N, S)/(s : S\np:I) )/n:X'N (17) (s : S/(a : Sknp: s-every(1) ) )/n:W The information (s/(s\np))/n encodes the syntac- tic fact that every is a constituent which, when a constituent of category n is provided on its right, returns a constituent of category s/(s\np). q-every(X,li,S) is a term for scoped logical forms. We are using different lexical items, for instance q-every and e-every for every, in order to signify their semantic differences. 14 These lexical entries are just two instances of a general schema for type- raised categories of quantifiers shown below, where T is an arbitrary category. (18) (T/(T\np))/na~d (T\(T/np))/n And the semantic part of (16) and (17) is first-order encoding of (19) (a) and (b), respectively. 15 14q-every represents every as a quantifier, and s-every, as a set denoting property. We will use s-every(l^man(X)) and its ~-reduced equivalent s-every(man) interchangeably. 1as-quantifier(noun) denotes an arbitrary set N of individuals d such that d has the property noun and that the cardinality of N is determined by quantifier (and 208 (19) (a) ~n.AP.Vz E s-every(n).P(=) (b) (a) encodes wide scope type raising and (b), narrow. With standard entries for verbs as in (20), logical forms such as (21) and (22) are po ible. (20) saw :- (s:sav(I,Y)\np:X)/np:¥ (21) q-two (X, rep (X), aaw(X, s-f ottr (samp)) ) (22) q-two(X,rep(X) ,q-four(Y,samp(Y),aaw(][,¥))) Figure 1 shows different ways of deriving scoped logical forms. In (a), n:I'! unifies with n:X'girl(X), so that Ii gets the value girl(X). This value of !1 is transferred to the expression s:evory(X,li,S) by partial execution (Pereira Shieber, 1987; Steedman, 1990; Park, 1992). (a) shows a derivation for a reading in which object NP takes wide scope and (b) shows a derivation for a rea- ding in which subject NP takes wide scope. There are also other derivations. Figure 2 shows logical forms that can be derived in the present framework from Geach's sentence. No- tice that the conjunction forces subject NP to be first composed with the verb, so that subject NP must be type-raised and be combined with the semantics of the transitive verb. As noted earlier, the two catego- ries for the object still make both scope possibilities available, as desired. The following category is used for but. (23) ((s : and(P ,1~)/np:][)\ (s:P/np:][))/(s :Q/np :][) Readings that involve intercalating quantifiers, such as the one where every girl outscopes one sazopho- nist, which in turn outscopes most bogs, are correctly excluded. Figure 3 shows two different derivations of logi- cal forms for the complex NP two representatives of three companies. (a) shows a derivation for a rea- ding in which the modifying NP takes wide scope and (b) shows the other case. 
In combination with derivations involving transitive verbs with subject and object NPs, such as ones in Figure 1, this cor- rectly accounts for four grammatical readings for (5) (a). 16 Figure 4 shows a derivation for a reading, among six, in which most customers outscopes every dealer which in turn outscopes three cars. Some of these readings become unavailable when the sentence con- tains coordinate structure, such as one below. (24) Every dealer shows most customers (at most) three cars but most mechanics every car. noun). We conjecture that this can also be made to cap- ture several related NP-semantics, such as collective NP- semantics and/or referential NP-semantics, though we can not discuss further details here. lSAs we can see in Figure 3 (a) (b), there m no way quantifiers inside $ can be placed between the two quantifiers two & three, correctly excluding the other two readings. In particular, (24) does not have those two readings in which every dealer intercalates most customers and three cars. This is exactly predicted by the pre- sent CCG framework, extending Geach's observa- tion regarding (6) (a), since the coordination forces the two NPs, most customers and three cars, to be composed first (Dowty, 1988; Steedman 1990; Park 1992). (25) through (27) show one such derivation, which results in readings where three cars outscopes most customers but every dealer must take either wide or narrow scope with respect to both most cu- stomers and three cars. (25) -oat cuato.ers (26) (2T) ((s:q-most(Z,catm'(g),S)~p:g)/np:Y) \(((s:S\np:X)/np:T)/np:Z) three cars (e:q-three(Y,car(Y),S)\np:l) \((s:$\np:X)/n]p:f) ao|t custoaera three cars see above see above <B (s:q-three(¥,car(Y),q-ttost(Z,catmr(Z),S)) \np:X)\(((e:S\np:X)/np:T)/np:g) Figure 5 shows the relevant derivation for the frag- ment investigate two dialects of discussed at end of previous section. It is a conjoinable constituent, but since there is no way of using type-raised category for two for a successful derivation, two dialects can not outscope any other NPs, such as subject NP or the modifying NP (Steedman, 1992). This correctly accounts for our intuition that (14) (a) has an ap- parently intercalating reading and that (14) (b) has only two readings. However, there is no similar deri- vation for the fragment of three companies touched, as shown below. (28) of three companies touched (n\n)/np T\(T/np) (e\np)/np < n\n (with T =' n\n) 6 Concluding Remarks We have shown that the range of grammatical rea- dings allowed by sentences with multiple quantified NPs can be characterized by abstraction at function- argument structure constrained by syntactic adja- cency. This result is in principle available to other paradigms that invoke operations like QR at LF or type-lifting, which are essentially equivalent to ab- straction. The advantage of CCG's very free notion 209 (a) every girl admired one saxophonist s:q-every(X,l.S) n:X'girl(X) (s:adaired(X.Y)~np:X) s:q-one(Y,sax(Y),S)\(s:S/np:Y) /(s:S\np:X)/n:X'i /np:¥ s:q-every(X,girl(X),S)/(s:S\np:X) >B =:q-every(X.girl(X).adaired(X,Y))/np:Y (b) s:q-one(Y,sax(Y).q-every(X,girl(X,adaired(X.Y)))) every girl admired s:q-every(X.girl(X).S)/(s:S\np:X) (s:adaired(X.Y)~np:l) /np:Y s:q-every(X,girl(X).adaired(X,Y))/np:Y one saxophonist s:S\(s:S/np:s-one(sax)) s:q-every(X.girl(X).adaired(X.s-one(sax))) Figure 1: Every girl admired one sazophonist: Two sample derivations (a) every girl admired but most boys detested one saxophonist s:q-every(X,girl(l).adaired(l.Y))/np:Y ........................ 
> s:S\(s:S/np:s-one(sax)) < s:and(q-every(X,girl(1),~l~-~l(l,Y)),q-most(l,boy(1),detested(X,Y)))/np:Y (b) •:and(q-every(x••irl(•)•ad•ired(••s-•ne(•ax)))•q-•••t(X•b•y(X)•detested(••s-•ne(sax)))) every girl admired but most boys detested one saxophonist s:adaired(s-every(girl),Y)/np:Y ....................... ~ s:q-one(Y,sax(Y),S)\(s:S/np:¥) s:and(admired(s-every(girl),Y),detested(s-most(boy),W))/np:Y s:q-one(Y,sax(Y),and(adaired(s-every(girl),Y),detested(s-most(boy),Y))) Figure 2: Every girl admire~ but most boys detested, one sazophonist: Two sample derivations (a) two representatives of three companies (s:q-teo(X.|.S) /(s:S~np:l))/n:l'l n:X'and(rep(X),of(X.Y))/np:Y >B .(s:q-tvo(l,and(rep(l),of(X,Y)),S)/(s:S\np:X))/np:¥ (s:q-three(C.comp(C),S2)/(s:St\np:l)) \((s:S2/(s:Sl~np:l))/np:C) (b) a:q-three(C,comp(C).q-two(X.and(rep(X),of(X.C)),S))/(s:S\np:X) two representatives of three companies (s:q-twoCX,l,s) n:X'and(rep(i).of(X,Y))/np:Y (s:S2/(s:St\np:X)) /(s:S\np:i))/n:g'N \((s:S2/(s:St\np:X))/np:s-three(coap)) >B (s:q-two(X.and(rep(X),of(X,Y)),S)/(s:S\np:X))/np:Y s :q-tgo (X, and(rep(l) ,of (X,s-three (¢oap))) ,S)/(s:S\np:I) Figure 3: two representatives o/three companies: Two sample derivations 210 every dealer shows host custoners s:q-every(X,dlr(X),S) (s:ehow(X,Y,g)\np:I) (s:q-nost(Y,cstnr(Y),S) /(s:S\np:l) /np:g/np:Y /np:g)\(s:S/np:g)/np:Y ................. >B s:q-every(X,dlr(X),shog(X,Y,g)/np:Z/np:Y s:q-nost(Y,cstaw(Y),q-every(X,dlr(X),show(X,Y,Z)))/np:g three cars s:S\(s:S /np:s-three(car)) s:q-nost(Y,cstnr(Y),q-every(X,dlr(X),show(X,Y,s-three(car)))) Figure 4: Every dealer shows most customers three cars: One sample derivation investigate two dialects of (s:investigate(X,g)~ap:X) /np:Y np:s-two(l) n:lt/(n:il (n:Y'tnd(l,of(l,Z))~n:I1) /n:i \n:Y'dialect(Y)) /np:g . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ~B n: Y'and(dialect (g) ,of (g,z))/np:Z .......................... >B rip: s-two(Y'and(dialect (¥), of (Y,Z)))/rip: Z ................................. ~B (s:investigate(g,s-tuo(Y'and(dialect(Y),of(Y,Z)))\np:X)/np:Z Figure 5: investigate two dialects of. One derivation of surface structure is that it ties abstraction or the equivalent as closely as possible to derivation. Ap- parent counterexamples to the generalization can be explained by the well-known distinction between re- ferential and quantificational NP-semantics. An im- plementation of the theory for an English fragment has been written in Prolog, simulating the 2nd order properties. There is a question of how the non-standard sur- face structures of CCG are compatible with well- known conditions on binding and control (including crossover). These conditions are typically stated on standard syntactic dominance relations, but these relations are no longer uniquely derivable once CCG allows non-standard surface structures. We can show, however, that by making use of the obliquen- ess hierarchy (of. Jackendoff (1972) and much sub- sequent work) at the level of LF, rather than sur- face structure, it is possible to state such conditions (Steedman, 1993). Acknowledgements Special thanks to Mark Steedman. Thanks also to Janet Fodor, Beryl Hoffman, Aravind Joshi, Nobo Komagata, Anthony Kroch, Michael Niv, Charles L. Ortiz, Jinah Park, Scott Prevost, Matthew Stone, Bonnie Webber, and Michael White for their help and criticism at various stages of the presented idea. Thanks are also due to the anonymous referees who made valuable suggestions to clarify the paper. Standard disclaimers apply. 
The work is supported in part by NSF grant nos. IRI91-17110, and CISE IIP, CDA 88-22719, DARPA grant no. N660001-94- C-6043, and ARO grant no. DAAH04-94-G0426. References Jon Barwise and Robin Cooper. 1981. Generalized quantifiers and natural language. Linguistics Philosophy, 5:159- 219. Harry C. Bunt. 1985. Mass Terms and Model- Theoretic Semantics. Cambridge University Press. Bob Carpenter. 1994. A Deductive Account of Scope. The Proceedings of the 13th West Coast Conference on Formal Linguistics. Robin Cooper. 1983. Quantification and Syntactic Theory. D. Reidel. David Dowty. 1988. Type Raising, Functional Composition, and Non-Constituent Conjunction. In Richard T. Oehrle et. el. editors, Categorial Grammars and Natural Language Structures, pa- ges 153 - 197. D. Reidel. Janet D. Fodor and Ivan A. Sag. 1982. Referen- tial and quantificational indefinites. Linguistics Philosophy, 5:355 - 398. Janet Dean Fodor. 1982. The mental representation of quantifiers. In S. Peters and E. Saarinen, edi- tors, Processes, Beliefs, and Questions, pages 129 - 164. D. Reidel. Paul T. Geach. 1970. A program for syntax. Syn- these, 22:3- 17. 211 Irene Helm. 1983. File change semantics and the fa- miliarity theory of definiteness. In Ruiner B~iuerle et al., editors, Meaning, Use, and the Interpreta- tion of Language. Berlin: de Gruyter. James Higginbotham. 1987. Indefiniteness and predication. In Eric J. Reuland and Alice G. B. tee Meulen, editors, The Representation of (In)definiteness, pages 43 - 70. MIT Press. Jerry R. Hobbs and Stuart M. Shieber. 1987. An al- gorithm for generating quantifier Scopings. Com- putational Linguistics, 13:47- 63. G. M. Horn. 1974. The Noun Phrase Constraint. Ph.D. thesis, University of Massachusetts, Am- herst, MA. Ray S Jackendoff. 1972. Semantic Interpretation in generative grammar. MIT Press. Einar Jowsey. 1990. Constraining Montague Gram- mar for Computational Applications. Ph.D. the- sis, Department of AI, University of Edinburgh. Hans Kamp. 1981. A theory of truth and semantic representation. In J. Groenendijk et. al., editor, Formal Methods in the Study of Language. Mathe- matical Centre, Amsterdam. William R. Keller. 19881 Nested cooper storage: The proper treatment of quantification in ordinary noun phrases. In E. U. Reyle and E. C. Rohrer, editors, Natural Language Parsing and Linguistic Theories, pages 432 - 447. D. Reidel. Seth Kulick. 1995. Using Higher-Order Logic Pro- gramming for Semantic Interpretation of Coordi- nate Constructs. The Proceedings of the 33rd An- nual Meeting of the Association for Computatio- nal Linguistics (ACL-95). Robert May. 1985. Logical Form: Its Structure and Derivation. MIT Press. Richard Montague. 1974. The proper treatment of quantification in ordinary English. In Rich- mond H. Thomason, editor, Formal Philosophy, pages 247 - 270. Yale University Press. Douglas B. Moran. 1988. Quantifier scoping in the SRI Core Language Engine. The Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics (ACL-88), pages 33- 40. Seungho Nam. 1991. Scope Interpretation in Non- constituent Coordination. The Proceedings of the Tenth West Coast Conference on Formal Lingui- stics, pages 337 - 348. Jong C. Park. 1992. A Unification-Based Seman- tic Interpretation for Coordinate Constructs. The Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics (ACL- 92), pages 209 - 215. Barbara Partee. 1975. Comments on C. J. Fill- more's and N. Chomsky's papers. 
In Robert Au- sterlitz, editor, The Scope of American Lingui- stics: papers of the first Golden Anniversary Sym- posium of the Linguistic Society of America. Lisse: Peter de Ridder Press. Fernando C.N. Pereira and Stuart M. Shieber. 1987. Proiog and Natural-Language Analysis. CSLI Lec- ture Notes Number 10. Massimo Poesio. 1991. Scope Ambiguity and Infe- rence. University of Rochester, CS TR-389. Uwe Reyle. 1993. Dealing with ambiguities by underspecification: Construction, representation and deduction. Journal of Semantics, 10:123 - 179. Stuart M. Shieber and Yves Schabes. 1990. Syn- chronous tree-adjoining grammars. The Procee- dings of the 13th International Conference on Computational Linguistics, pages 253 - 258. Mark J. Steedman. 1990. Gapping as constituent coordination. Linguistics ~ Philosophy, 13:207 - 263. Mark Steedman. 1992. Surface Structure. Univer- sity of Pennsylvania, Technical Report MS-CIS- 92-51 (LINC LAB 229). Mark Steedman. 1993. Categorial grarnmar: Tuto- rial overview. Lingua, 90:221 - 258. Espen J. Vestre. 1991. An algorithm for generating non-redundant quantifier scopings. The Procee- dings of the Conference of the European Chapter of the Association for Computational Linguistics, pages 251 - 256. Bonnie Lynn Webber. 1979. A Formal Approach to Discourse Anaphora. Garland Pub. New York. 212 | 1995 | 28 |
Using Higher-Order Logic Programming for Semantic Interpretation of Coordinate Constructs Seth Kulick University of Pennsylvania Computer and Information Science 200 South 33rd Street Philadelphia, PA 19104-6389 USA skulick@linc, cis. upenn, edu Abstract Many theories of semantic interpretation use A-term manipulation to composition- ally compute the meaning of a sentence. These theories are usually implemented in a language such as Prolog that can simulate A-term operations with first-order unifica- tion. However, for some interesting cases, such as a Combinatory Categorial Gram- mar account of coordination constructs, this can only be done by obscuring the un- derlying linguistic theory with the "tricks" needed for implementation. This paper shows how the use of abstract syntax per- mitted by higher-order logic programming allows an elegant implementation of the se- mantics of Combinatory Categorial Gram- mar, including its handling of coordination constructs. 1 Introduction Many theories of semantic interpretation use A-term manipulation to compositionally compute the mean- ing of a sentence. These theories are usually imple- mented in a language such as Prolog that can sim- ulate A-term operations with first-order unification. However, there are cases in which this can only be done by obscuring the underlying linguistic theory with the "tricks" needed for implementation. For example, Combinatory Categorial Grammar (CCG) (Steedman, 1990) is a theory of syntax and seman- tic interpretation that has the attractive character- istic of handling many coordination constructs that other theories cannot. While many aspects of CCG semantics can be reasonably simulated in first-order unification, the simulation breaks down on some of the most interesting cases that CCG can theoreti- cally handle. The problem in general, and for CCG in particular, is that the implementation language does not have sufficient expressive power to allow a more direct encoding. The solution given in this pa- per is to show how advances in logic programming allow the implementation of semantic theories in a very direct and natural way, using CCG as a case study. We begin by briefly illustrating why first-order unification is inadequate for some coordination con- structs, and then review two proposed solutions. The sentence in (la) usually has the logical form (LF) in (lb). (la) John and Bill run. (15) (and (run John) (run Bill)) CCG is one of several theories in which (lb) gets derived by raising John to be the LF AP.(P john), where P is a predicate that takes a NP as an argu- ment to return a sentence. Likewise, Bill gets the LF AP.(P bill), and coordination results in the fol- lowing LF for John and Bill: (2) AP.(and (P john) (P bill)) When (2) is applied to the predicate, (15) will re- sult after 13-reduction. However, under first-order unification, this needs to simulated by having the variable z in Az.run(z) unify both with Bill and John, and this is not possible. See (Jowsey, 1990) and (Moore, 1989) for a thorough discussion. (Moore, 1989) suggests that the way to overcome this problem is to use explicit A-terms and encode /~-reduction to perform the needed reduction. For example, the logical form in (3) would be produced, where X\rtm(X) is the representation of Az.run (z). (3) and (apply (I\run(X), j ohn). apply (l\run(l), bill) ) This would then be reduced by the clauses for apply to result in (lb). For this small example, writing such an apply predicate is not difficult. 
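As a minimal illustration of the contrast being drawn (not code from this paper), the fragment below uses the host language's own functions as the NP and predicate meanings. When abstraction and application are native, the type-raised semantics of John and Bill applies directly to the predicate and the desired logical form falls out; a first-order encoding instead has to reimplement application and beta-reduction through an explicit apply predicate, as discussed above.

```python
# Illustration only: logical forms are plain nested tuples, and Python's
# own functions stand in for lambda abstraction in the semantics.

def conj(p, q):
    # stands for the logical form and(p, q)
    return ("and", p, q)

def run(x):
    # stands for the predicate lambda x. run(x)
    return ("run", x)

def raised_np(p):
    # the coordinated, type-raised subject: lambda P. and(P(john), P(bill))
    return conj(p("john"), p("bill"))

# Applying the raised NP to the predicate performs the beta-reduction that
# first-order unification cannot simulate directly.
print(raised_np(run))   # ('and', ('run', 'john'), ('run', 'bill'))
```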
However, as the semantic terms become more complex, it is no trivial matter to write ~-reduction that will cor- rectly handle variable capture. Also, if at some point it was desired to determine if the semantic forms of two different sentences were the same, a predicate would be needed to compare two lambda forms for a-equivalence, which again is not a simple task. Es- sentially, the logic variable X is meant to be inter- preted as a bound variable, which requires an addi- tional layer of programming. 213 (Park, 1992) proposes a solution within first-order unification that can handle not only sentence (la), but also more complex examples with determiners. The method used is to introduce spurious bindings that subsequently get removed. For example, the semantics of (4a) would be (4b), which would then get simplified to (4c). (4a) A farmer and every senator talk (4b) exists(X1 ,fanaer(I1) a( exists (x2, (x2=xl) ataZk (X2)) ) ) &f orallCl3, senat or (X3) => (exists (X2, (12=13) &talk (X2)) ) ) (4c) exists (Xl,fanaerCXl)ktalk(Xl)) &forall (13, senator (13) =>talk (13)) While this pushes first-order unification beyond what it had been previously shown capable of, there are two disadvantages to this technique: (1) For ev- ery possible category that can be conjoined, a sepa- rate lexical entry for and is required, and (2) As the conjoinable categories become more complex, the and entries become correspondingly more complex and greatly obscure the theoretical background of the grammar formalism. The fundamental problem in both cases is that the concept of free and bound occurrences of variables is not supported by Prolog, but instead needs to be implemented by additional programming. While theoretically possible, it becomes quite problematic to actually implement. The solution given in this paper is to use a higher-order logic programming language, AProlog, that already implements these concepts, called "abstract syntax" in (Miller, 1991) and "higher-order abstract syntax" in (Pfenning and Elliot, 1988). This allows a natural and elegant im- plementation of the grammatical theory, with only one lexical entry for and. This paper is meant to be viewed as furthering the exploration of the utility of higher-order logic programming for computational linguistics - see, for example, (Miller & Nadathur, 1986), (Pareschi, 1989), and (Pereira, 1990). 2 CCG CCG is a grammatical formalism in which there is a one-to-one correspondence between the rules of composition 1 at the level of syntax and logical form. Each word is (perhaps ambiguously) assigned a cat- egory and LF, and when the syntactical operations assign a new category to a constituent, the corre- sponding semantic operations produce a new LF for that constituent as well. The CCG rules shown in Figure 1 are implemented in the system described 1In the genera] sense, not specifically the CCG rule for function composition. Function Application (>): I/Y:F Y:y =>g:Fy Function Application (<): Y:y I\Y:F=>I:Fy Function Composition (> X/Y:F Y/Z:G=>X/Z: Function Composition (< Y\Z:G X\Y:F=>X\Z: Type Raising (> T): np:x => ./(s\np) : ~F.Fx Type Raising (< T): np:x =>e\(s/np):AF.Vx B): )tx.F(Gx) B): ~x.F(Gx) Figure 1: CCG rules harry found ........ ~ .......... >T S:s/(S:s\NP:harry') (S:found ~ npl np2\NP:npl)/NP:np2 >B S:found' harry' np2/NP:np2 Figure 2: CCG derivation of harry found simulated by first-order unification in this paper. 2 3 Each of the three operations have both a forward and backward variant. 
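The semantic columns of these rules can be read directly as higher-order function manipulation. The Python sketch below is our own illustration (function names are invented and syntactic categories are ignored); it mirrors the LF side of Figure 1, and the last two lines carry out the harry found combination (type raising followed by forward composition), whose first-order simulation is taken up below.

def apply_(F, y):            # Function Application:  X/Y:F  Y:y  =>  X: F y
    return F(y)

def compose(F, G):           # Function Composition:  X/Y:F  Y/Z:G  =>  X/Z: lambda x. F(G x)
    return lambda x: F(G(x))

def type_raise(x):           # Type Raising:  np:x  =>  lambda F. F x
    return lambda F: F(x)

# found':  lambda obj. lambda subj. (found' subj obj)
found = lambda obj: lambda subj: ('found', subj, obj)

# harry found  (>T then >B):  lambda z. (found' harry' z)
harry_found = compose(type_raise('harry'), found)
print(apply_(harry_found, 'it'))   # ('found', 'harry', 'it')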
As an illustration of how the semantic rules can be simulated in first-order unification, consider the derivation of the constituent harry found, where harry has the category np with LF harry' and found is a transitive verb of category (s\np)/np with LF (5) Aobject.Asubject.(found' subject object) In the CCG formalism, the derivation is as fol- lows: harry gets raised with the > T rule, and then forward composed by the > B rule with found, and the result is a category of type s/rip with LF Az.(found' harry' z). In section 3 it will be seen how the use of abstract syntax allows this to be ex- pressed directly. In first-order unification, it is sim- ulated as shown in Figure 2. 4 The final CCG rule to be considered is the coor- dination rule that specifies that only like categories can coordinate: 2The type-raising rules shown are actually a simplifi- cation of what has been implemented. In order to handle determiners, a system similar to NP-complement cate- gories as discussed in (Dowty, 1988) is used. Although a worthwhile further demonstration of the use of ab- stract syntax, it has been left out of this paper for space reasons. 3The \ for a backward-looking category should not be confused with the \ for A-abstraction. *example adapted from (Steedman, 1990, p. 220). 214 (6) X ¢on3 X => x This is actually a schema for a family of rules, col- lectively called "generalized coordination", since the semantic rule is different for each case. 5 For exam- ple, if X is a unary function, then the semantic rule is (Ta), and if the functions have two arguments, then the rule is (7b). s (7a) @FGH = Az.F(Gz)(Hz) (7b) @~FGH = Az.Ay.F(Gzy)(Hzy) For example, when processing (la), rule (Ta) would be used with: • F = Az.Ay.(~md' z y) • G = AP.(P john') , H = AP.(P bill') with the result c~FGH = Az.(and' (z john') (z bill')) which is c=-equivalent to (2). 3 ~PROLOG and Abstract Syntax AProlog is a logic programming language based on higher-order hereditary Harrop formulae (Miller et al., 1991). It differs from Prolog in that first-order terms and unification are replaced with simply-typed A-terms and higher-order unification 7, respectively. It also permits universal quantification and implica- tion in the goals of clauses. The crucial aspect for this paper is that together these features permits the usage of abstract syntax to express the logical forms terms computed by CCG. The built-in A-term ma- nipulation is used as a "meta-language" in which the "object-language" of CCG logical forms is expressed, and variables in the object-language are mapped to variables in the meta-language. The AProlog code fragment shown in Figure 3 de- clares how the CCG logical forms are represented. Each CCG LF is represented as an untyped A-term, namely type t=. abe represents object-level abstrac- tion Az.M by the meta-level expression (abe I), sit is not established if this schema should actually produce an unbounded family of rules. See (Weir, 1988) and (Weir and Joshi, 1988) for a discussion of the im- plications for automata-theoretic power of generalized coordination and composition, and (Gazda~, 1988) for linguistic axguments that languages like Dutch may re- quire this power, and (Steedman, 1990) for some further discussion of the issue. In this paper we use the general- ized rule to illustrate the elegance of the representation, but it is an easy change to implement a bounded coor- dination rule. eThe ,I~ notation is used because of the combina- tory logic background of CCG. See (Steedman, 1990) for details. 
7defined as the unification of simply typed A-terms, modulo ~,/conversion. kind tat type. type abe (tat -> tat) -> tat. type app tat -> tat -> tat. type forall (tat -> tat) -> tat. type exists (tat -> tat) -> tat. type >> tat -> tm -> tat. type ,t tat -> ta -> tat. Figure 3: Declarations for AProlog representation of CCG logical forms where N is a meta-level function of type ta ---* tat. A meta-level A-abstraction Ay.P is written y\p.S Thus, if waZked' has type tat --* tat, then y\(walked' y) is a AProlog (meta, level) function with type ta -* tat, and (abe y\(walked' y)) is the object-level representation, with type tat. The LF for found shown in (5) would be represented as Cabs obj\(abs sub\(found' sub obj))), app en- codes application, and so in the derivation of harry found, the type-raised harry has the AProlog value (abe p\(app p harry')). 9 The second part of Figure 3 shows declares how quantifiers are represented, which are required since the sentences to be processed may have determiners. forall and exists are encoded similarly to abstrac- tion, in that they take a functional argument and so object-level binding of variables by quantifiers is handled by meta-hvel A-abstraction. >> and tt are simple constructors for implication and conjunction, to be used with forall and exists respectively, in the typical manner (Pereira and Shieber, 1987). For example, the sentence every man found a bone has as a possible LF (8a), with the AProlog representation (8b)10: SThis is the same syntax for ~-abstraction as in (3). (Moore, 1989) in fact borrows the notation for A- abstraction from AProlog. The difference, of course, is that here the abstraction is a meta-level, built-in con- struct, while in (3) the interpretation is dependent on an extra layer of programming. Bound variables in AProlog can be either upper or lower case, since they axe not logic vaxlables, and will be written in lower case in this paper. 9It is possible to represent the logical forms at the object-level without using abs and app, so that harry could be simply p\(p harry'). The original implemen- tation of this system was in fact done in this manner. Space prohibits a full explanation, but essentially the fact that AProlog is a typed language leads to a good deal of formal clutter if this method is used. 1°The LF for the determiner has the form of a Mon- tagovian generalized quantifier, giving rise to one fully scoped logical form for the sentence. It should be stressed that this particular kind of LF is assumed here purely for the sake of illustration, to make the point that composition at the level of derivation and LF are one- to-one. Section 4 contains an example for which such a 215 type apply tm -> tm -> tm -> o. type compose tm -> tm -> tm -> o. type raise tm -> tm -> o. apply (abs R) S (R S). compose (abs F) (abs G) (abs x\(F (G x))). raise Tn (abe P\(app P Tm)). Figure 4: ~Prolog implementation of CCG logical form operations (8a) 3=.((bone' =) A y) ( ound' =))) (8b) (exists x\ ((bone' x) it& (forall xl\ (CLan' xl) >> (found' xl x))))) Figure 4 illustrates how directly the CCG opera- tions can be encoded 11. o is the type of a meta-level proposition, and so the intended usage of apply is to take three arguments of type tm, where the first should be an object-level )~-abstraction, and set the third equal to the application of the first to the sec- ond. Thus, for the query ?- apply (abe sub\(walked' sub)) harry' N. 
It unifies with the ta -~ ta function sub\(walked ~ sub), S with harry' and M with (It S), the recta-level application of R to S, which by the built-in fi-reduction is (walked' harry' ). In other words, object-level function application is handled simply by the meta-level function application. Function composition is similar. Consider again the derivation of harry found by type- raising and forward composition, harry would get type-raised by the raise clause to produce (abe p\(app p haxry~)), and then composed with found, with the result shown in the following query: ?- compose (abe p\(app p harry')) (abe obj\ (abe sub\ (found' sub obj))) M. M = (abe x\ (app (abs sub\(found ~ sub x)) harry' )). derivation fails to yield all available quantifier scopings. We do not address here the further question of how the remaining scoped readings axe derived. Alternatives that appear compatible with the present approach are quanti- tier movement (Hobbs & Shieber, 1987), type-ralsing at LF (Paxtee & Rooth, 1983), or the use of disambiguated quantifers in the derivation itself (Park, 1995). 11There are other clauses, not shown here, that deter- mine the direction of the CCG rule. For either direction, however, the semantics axe the same and both directiona.I rules call these clauses for the semantic computation. kind cat type fs type bs type np type s type conj type noun type. cat -> cat -> cat. cat -> cat -> cat. cat. cat. cat. cat. type atomic-~ype cat -> o. atomic-type rip. atomic-type s. atomic-type conj. atomic-type noun. Figure 5: Implementation of the CCG category sys- tem At this point a further/~-reduction is needed. Note however this is not at all the same problem of writing a /~-reducer in Prolog. Instead it is a simple matter of using the meta-level ~-reduction to eliminate ~-redexes to produce the final result (abe x\(found I harry x)). We won't show the complete declaration of the/~-reducer, but the key clause is simply: red (app (abe N) N) (N N). Thus, using the abstract syntax capabilities of ~Prolog, we can have a direct implementation of the underlying linguistic formalism, in stark contrast to the first-order simulation shown in Figure 2. 4 Implementation of Coordination A primary goal of abstract-syntax is to support re- cursion through abstractions with bound variables. This leads to the interpretation of a bound variable as a "scoped constant" - it acts like a constant that is not visible from the top of the term, but which becomes visible during the descent through the ab- straction. See (Miller, 1991) for a discussion of how this may be used for evaluation of functional pro- grams by "pushing" the evaluation through abstrac- tions to reduce redexes that are not at the top-level. This technique is also used in the fl-reducer briefly mentioned at the end of the previous section, and a similar technique will be used here to implement coordination by recursively descending through the two arguments to be coordinated. Before describing the implementation of coordi- nation, it is first necessary to mention how CCG categories are represented in the ~Prolog code. As shown in Figure 5, cat is declared to be a primi- tive type, and np, s, conj, noun are the categories used in this implementation, fs and bs are declared 216 type coord cat -> tm -> tm -> tm -> o. coord (fs • B) (abs It) (abs S) (abs T) "- pi x\ (coord B (~ x) (S x) (T x)). cooed (be i B) (abe R) (abe S) (abe T) "- pi x\ (coord B (R x) (S x) (T x)). coord B R S (and' E S) :- atomic-type B. 
Figure 6: Implementation of coordination to be constructors for forward and backward slash. For example, the CCG category for a transitive verb (s\np)/np would be represented as (fs np (bs np s)). Also, the predicate atomic-type is declared to be true for the four atomic categories. This will be used in the implementation of coordination as a test for termination of the recursion. The implementation of coordination crucially uses the capability of AProlog for universal quantification in the goal of a clause, pi is the meta-level operator for V, and Vz.M is written as pi x\l|. The oper- ational semantics for AProlog state that pi x\G is provable if and only if [c/z]G is provable, where c is a new variable of the same type as z that does not otherwise occur in the current signature. In other words, c is a scoped constant and the current signa- ture gets expanded with c for the proof of [c/z]G. Since e is meant to be treated as a generic place- holder for any arbitrary z of the proper type, c must not appear in any terms instantiated for logic vari- ables during the proof of [c/z]G. The significance of this restriction will be illustrated shortly. The code for coordination is shown in Figure 6. The four arguments to cooed are a category and three terms that are the object-level LF rep- resentations of constituents of that category. The last argument will result from the coordination of the second and third arguments. Consider again the earlier problematic example (la) of coordina- tion. Recall that after john is type-raised, its LF will be (abs p\(app p john')) and similarly for bill. They will both have the category (fs (bs np s) s). Thus, to obtain the LF for John and Bill, the following query would be made: ?- coord (fs (bs np s) s) (abs p\(app p john')) Cabs pkCapp p bill')) M. This will match with the first clause for coord, with • t instantiated to (be np s) • Btos • It to (p\(app p john')) • S to (p\(app p bill')) • and T a logic variable waiting instantiation. Then, after the meta-level/~-reduction using the new scoped constant c, the following goal is called: ?- coord s (app ¢ john') (app c bill') II. where II = (T c). Since s is an atomic type, the third coord clause matches with • B instantiated to s • R to (app c john') • S to (app c bill') • II to (and' (app c john') (app c bill')) Since I = (T c), higher-order unification is used by AProlog to instantiate T by extracting c from II with the result T = x\(and' (app x john') (app x bill')) and so H from the original query is (abe x\(and' (app • john') (app • bill'))) Note that since c is a scoped constant arising from the proof of an universal quantification, the instan- tiation T = x\(and' (app ¢ john') (app • bill')) is prohibited, along with the other extractions that do not remove c from the body of the abstraction. This use of universal quantification to extract out c from a term containing c in this case gives the same result as a direct implementation of the rule for coo- ordination of unary functions (7a) would. However, this same process of recursive descent via scoped constants will work for any member of the conj rule family. For example, the following query ?- coord (~s np (be np s)) Cabs obj\(abs sub\(like' sub obj))) (abs obj\(abs sub\(hate' sub obj))) M. 14 = (abe x\ (abe xl\ (and' (like' xl x) (hate' xl x)))). corresponds to rule (7b). 
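The effect of this recursion can also be pictured with ordinary closures. The sketch below is our own Python analogue, not a λProlog implementation: a category is reduced to a list of its argument slots, the function descends through those slots, and the conjunction is built only at the atomic result type, just as the two recursive coord clauses and the atomic-type base case do above.

def coord(category, R, S):
    # category: remaining argument slots; R, S: the two conjuncts' LFs
    if not category:                       # atomic result type: conjoin
        return ('and', R, S)
    # one argument slot left: abstract over it on both conjuncts
    return lambda x: coord(category[1:], R(x), S(x))

# Unary case, cf. the John-and-Bill query (one argument slot).
tr = lambda np: (lambda P: ('app', P, np))          # type-raised NP
lf = coord(['pred'], tr('john'), tr('bill'))
print(lf('run'))      # ('and', ('app', 'run', 'john'), ('app', 'run', 'bill'))

# Binary case, cf. rule (7b): like' and hate' coordinated under two arguments.
like = lambda obj: lambda subj: ('like', subj, obj)
hate = lambda obj: lambda subj: ('hate', subj, obj)
lf2 = coord(['obj', 'subj'], like, hate)
print(lf2('x')('y'))  # ('and', ('like', 'y', 'x'), ('hate', 'y', 'x'))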
Note also that the use of the same bound variable names obj and sub causes no difficulty since the use of scoped-constants, meta-level H-reduction, and higher-order unification is used to access and manipulate the inner terms. Also, whereas (Park, 1992) requires careful consider- ation of handling of determiners with coordination, here such sentences are handled just like any others. For example, the sentence Mary gave every dog a bone and some policeman a flower results in the LF 12. 12This is a case in which the paxticulax LF assumed here fails to yield another available scoping. See foot- note 10. 217 (and' (exists x\C(bone' x) Itlt (fore11 xl\((dog' xl) >> (gave' aaxy' x xl))))) (exists x\((flover J x) 11 (existu xl\((poiiceman' xl) IU~ (gave' =axy' x xl)))))) Thus, "generalized coordination", instead of being a family of separate rules, can be expressed as a sin- gle rule on recursive descent through logical forms. (Steedman, 1990) also discusses "generalized com- position", and it may well be that a similar imple- mentation is possible for that family of rules as well. 5 Conclusion We have shown how higher-order logic programming can be used to elegantly implement the semantic the- ory of CCG, including the previously difficult case of its handling of coordination constructs. The tech- niques used here should allow similar advantages for a variety of such theories. An argument can be made that the approach taken here relies on a formalism that entails im- plementation issues that are more difficult than for the other solutions and inherently not as efficient. However, the implementation issues, although more complex, are also well-understood and it can be ex- pected that future work will bring further improve- ments. For example, it is a straightforward matter to transform the ,XProlog code into a logic called L~ (Miller, 1990) which requires only a restricted form of unification that is decidable in linear time and space. Also, the declarative nature of ~Prolog pro- grams opens up the possibility for applications of program transformations such as partial evaluation. 6 Acknowledgments This work is supported by ARC) grant DAAL03-89- 0031, DARPA grant N00014-90-J-1863, and ARO grant DAAH04-94-G-0426. I would like to thank Aravind Joshi, Dale Miller, Jong Park, and Mark Steedman for valuable discussions and comments on earlier drafts. References David Dowty. 1988. Type raising, functional com- position, and non-constituent conjunction. In Richard T. Oehrle, Emmon Bach, and Deirdre Wheeler, editors, Categorial Grammars and Natu- ral Language Structures. Reidel, Dordrecht, pages 153-198. Gerald Gazdar. 1988. Applicability of indexed grammars to natural languages. In U. Reyle and C. Rohrer, editors, Natural language parsing and linguistic theories. Reidel, Dordrecht, pages 69-94. Jerry R. Hobbs and Stuart M. Shieber. 1987. An al- gorithm for generating quantifier scopings. Com- putational Linguistics, 13:47-63. Einar Jowsey. 1990. Constraining Montague Gram- mar for Computational Applications. PhD thesis, University of Edinburgh. Dale Miller. 1990. A logic programming language with lambda abstraction, function variables and simple unification. In P. Schroeder-Heister, ed- itor, Eztensions of Logic Programming, Lecture Notes in Artifical Intelligence, Springer-Verlag, 1990. Dale Miller. 1991. Abstract syntax and logic pro- gramming. In Proceedings of the Second Rus- sian Conference on Logic Programming, Septem- ber 1991. Dale Miller and Gopalan Nadathur. 1986. 
Some uses of higher-order logic in computational linguistics. In 24th Annual Meeting of the Association for Computational Linguistics, pages 247-255. Dale Miller, Gopalan Nadathur, Frank Pfenning, and Andre Scedrov. 1991. Uniform proofs as a foundation for logic programming. In Annals of Pure and Applied Logic, 51:125-157. Robert C. Moore. 1989. Unification-based semantic interpretation. In 27th Annual Meeting of the Association for Computational Linguistics, pages 33-41. Remo Pareschi. 1989. Type-Driven Natural Language Analysis. PhD thesis, University of Edinburgh. Jong C. Park. 1992. A unification-based semantic interpretation for coordinate constructs. In 30th Annual Meeting of the Association for Computational Linguistics, pages 209-215. Jong C. Park. 1995. Quantifier scope and constituency. In 33rd Annual Meeting of the Association for Computational Linguistics (this volume). Barbara Partee and Mats Rooth. 1983. Generalized conjunction and type ambiguity. In Rainer Bäuerle, Christoph Schwarze, and Arnim von Stechow, editors, Meaning, Use, and Interpretation of Language. W. de Gruyter, Berlin, pages 361-383. Fernando C.N. Pereira. 1990. Semantic interpretation as higher-order deduction. In Jan van Eijck, editor, Logics in AI: European Workshop JELIA '90, Lecture Notes in Artificial Intelligence number 478, pages 78-96. Springer-Verlag, Berlin, Germany. Fernando C.N. Pereira and Stuart M. Shieber. 1987. Prolog and Natural-Language Analysis. Number 10 in CSLI Lecture Notes. Center for the Study of Language and Information, Stanford, California, 1985. Distributed by the University of Chicago Press. Frank Pfenning and Conal Elliot. 1988. Higher-order abstract syntax. In Proceedings of the ACM-SIGPLAN Conference on Programming Language Design and Implementation, 1988. Mark J. Steedman. 1990. Gapping as constituent coordination. In Linguistics and Philosophy 13, pages 207-263. David Weir. 1988. Characterizing Mildly Context-sensitive Grammar Formalisms. CIS-88-74, PhD thesis, University of Pennsylvania. David Weir and Aravind Joshi. 1988. Combinatory categorial grammars: generative power and relation to linear CF rewriting systems. In 26th Annual Meeting of the Association for Computational Linguistics, pages 278-285. 219 | 1995 | 29 |
The Replace Operator Lauri Karttunen Rank Xerox Research Centre 6, chemin de Maupertuis F-38240 Meylan, France lauri, karttunen@xerox, fr Abstract This paper introduces to the calculus of regular expressions a replace operator and defines a set of replacement expressions that concisely encode alternate variations of the operation. Replace expressions de- note regular relations, defined in terms of other regular expression operators. The basic case is unconditional obligatory re- placement. We develop several versions of conditional replacement that allow the op- eration to be constrained by context O. Introduction Linguistic descriptions in phonology, morphology, and syntax typically make use of an operation that replaces some symbol or sequence of symbols by another sequence or symbol. We consider here the replacement operation in the context of finite-state grammars. Our purpose in this paper is twofold. One is to define replacement in a very general way, explicitly allowing replacement to be constrained by input and output contexts, as in two-level rules (Koskenniemi 1983), but without the restriction of only single-symbol replacements. The second ob- jective is to define replacement within a general calculus of regular expressions so that replace- ments can be conveniently combined with other kinds of operations, such as composition and un- ion, to form complex expressions. Our replacement operators are close relatives of the rewrite-operator defined in Kaplan and Kay 1994, but they are not identical to it. We discuss their relationship in a section at the end of the paper. 0. 1. Simple regular expressions The replacement operators are defined by means of regular expressions. Some of the operators we use to define them are specific to Xerox implementa- tions of the finite-state calculus, but equivalent formulations could easily be found in other nota- tions. The table below describes the types of expressions and special symbols that are used to define the replacement operators. [1] (A) option (union of A with the empty string) ~A complement (negation) \A term complement (any symbol other than A) $A contains (all strings containing at least one A) A* Kleene star A+ Kleene plus A/B ignore (A interspersed with strings from B) A B concatenation A [ B union A & B intersection A - B relative complement (minus) A . x. B crossproduct (Cartesian product) A . o. B composition Square brackets, [ l, are used for grouping expres- sions. Thus [AI is equivalent to A while (A) is not. The order in the above table corresponds to the precedence of the operations. The prefix operators (-, \, and $) bind more tightly than the postfix operators (*, +, and/), which in turn rank above concatenation. Union, intersection, and relative complement are considered weaker than concate- nation but stronger than crossproduct and compo- sition. Operators sharing the same precedence are interpreted left-to-right. Our new replacement operator goes in a class between the Boolean op- erators and composition. Taking advantage of all these conventions, the fully bracketed expression [2] [[[~[all* [[b]/x]] I el .x. d ; 16 can be rewritten more concisely as ~a* b/x I c .x. d [31 Expressions that contain the crossproduct (. x.) or the composition (. o.) operator describe regular relations rather than regular languages. A regular relation is a mapping from one regular language to another one. Regular languages correspond to simple finite-state automata; regular relations are modeled by finite-state transducers. 
In the relation a . x. B, we call the first member, A, the upper language and the second member, B, the lower lan- guage. To make the notation less cumbersome, we sys- tematically ignore the distinction between the lan- guage A and the identity relation that maps every string of A to itself. Correspondingly, a simple au- tomaton may be thought of as representing a lan- guage or as a transducer for its identity relation. For the sake of convenience, we also equate a lan- guage consisting of a single string with the string itself. Thus the expression abc may denote, de- pending on the context, (i) the string abc, (ii) the language consisting of the string abc, and (iii) the identity relation on that language. We recognize two kinds of symbols: simple sym- bols (a, b, c, etc.) and fst pairs (a : b, y : z, etc.). An fst pair a : b can be thought of as the crossproduct of a and b, the minimal relation consisting of a (the upper symbol) and b (the lower symbol). Because we regard the identity relation on A as equivalent to A, we write a : a as just a. There are two special symbols [4] 0 epsilon (the empty string). ? any symbol in the known alphabet and its extensions. The escape character, %, allows letters that have a special meaning in the calculus to be used as ordi- nary symbols. Thus %& denotes a literal ampersand as opposed to &, the intersection operator; %0 is the ordinary zero symbol. The following simple expressions appear fre- quently in our formulas: [5] [ ] the empty string language. ~ $ [ ] the null set. ?* the universal ("sigma-star") language: all possible strings of any length including the empty string. 1. Unconditional replacement To the regular-expression language described above, we add the new replacement operator. The unconditional replacement of UPPER by LOWER is written [6] UPPER -> LOWER Here UPPER and LOWER are any regular expres- sions that describe simple regular languages. We define this replacement expression as [71 [ NO UPPER [UPPER .x. LOWER] ] * NO UPPER ; where NO UPPER abbreviates ~$ [UPPER - [] ]. The defi~ion describes a regular relation whose members contain any number (including zero) of iterations of [UPPER . x. LOWER], possibly alter- nating with strings not containing UPPER that are mapped to themselves. 1.1. Examples We illustrate the meaning of the replacement op- erator with a few simple examples. The regular expression [8] a b I c -> x ; (same as [[a b] [ c] -> x) describes a relation consisting of an infinite set of pairs such as [9] a b a c a x a x a where all occurrences of ab and c are mapped to x interspersed with unchanging pairings. It also in- dudes all possible pairs like [101 x a x a x a x a that do not contain either ab or c anywhere. Figure 1 shows the state diagram of a transducer that encodes this relation. The transducer consists of states and arcs that indicate a transition from 17 state to state over a given pair of symbols. For con- venience we represent identity pairs by a single symbol; for example, we write a : a as a. The sym- bol ? represents here the identity pairs of symbols that are not explicitly present in the network. In this case, ? stands for any identity pair other than a : a, b : b, c : c, and x : x. Transitions that differ only with respect to the label are collapsed into a single multiply labelled arc. The state labeled 0 is the start state. Final states are distinguished by a double circle. ? 
C : ~ a C:X -- Figure 1: a b I c -> x Every pair of strings in the relation corresponds to a path from the initial 0 state of the transducer to a final state. The abaca to xaxa path is 0-1-0-2- 0-2, where the 2-0 transition is over a c : x arc. In case a given input string matches the replace- ment relation in two ways, two outputs are pro- duced. For example, [111 a b ] b c -> x ; c ? Figure 2: a b [ b c -> x maps abc to both ax and xc: a b c , a b c a x x c [121 The corresponding transducer paths in Figure 2 are 0-1-3-0 and 0-2-0-0, where the last 0-0 transi- tion is over a c arc. If this ambiguity is not desirable, we may write two replacement expressions and compose them to indicate which replacement should be preferred if a choice has to be made. For example, if the ab match should have precedence, we write [13] a b - > x o0o b c -> x ; a:x X X Figure3: a b -> x .o. b c -> x This composite relation produces the same output as the previous one except for strings like abc where it unambiguously makes only the first re- placement, giving xc as the output. The abe to xc path in Figure 3 is 0-2-0-0. 1.2. Special cases Let us illustrate the meaning of the replacement operator by considering what our definition im- plies in a few spedal cases. If UPPER is the empty set, as in [] -> a [ b [141 the expression compiles to a transducer that freely inserts as and bs in the input string. If UPPER describes the null set, as in, ~$[] -> a [ b ; [151 18 the LOWER part is irrelevant because there is no replacement. This expression is a description of the sigma-star language. If LOWER describes the empty set, replacement be- comes deletion. For example, [16] a I b-> [] removes all as and bs from the input. If LOWER describes the null set, as in a [ b -> ~$[] ; [17] all strings containing UPPER, here a or b, are ex- cluded from the upper side language. Everything else is mapped to iiself. An equivalent expression is ~$ [a [ b]. 1.3. Inverse replacement The inverse replacement operator. UPPER <- LOWER [18] is defined as the inverse of the relation LOWER -> UPPER. 1.4. Optional replacement An optional version of unconditional replacement is derived simply by augmenting LOWER with UP- PER in the replacement relation. [19] UPPER (->) LOWER is defined as UPPER -> [LOWER [ UPPER] [20] The optional replacement relation maps UPPER to both LOWER and UPPER. The optional version of <- is defined in the same way. 2. Conditional replacement We now extend the notion of simple replacement by allowing the operation to be constrained by a left and a right context. A conditional replacement expression has four components: UPPER, LOWER, LEFT, and RIGHT. They must all be regular expres- sions that describe a simple language. We write the replacement part UPPER -> LOWER, as before, and the context part as LEFT _ RIGHT, where the underscore indicates where the replacement takes place. In addition, we need a separator between the re- placement and the context part. We use four alter- nate separators, [ I, //, \ \ and \/, which gives rise to four types of conditional replacement expres- sions: [21l (1) Upward-oriented: UPPER -> LOWER J[ LEFT RIGHT ; (2) Right-oriented: UPPER-> LOWER // LEFT RIGHT ; (3) Left-oriented: UPPER -> LOWER \\ LEFT RIGHT ; (4) Downward-oriented: UPPER -> LOWER \/ LEFT RIGHT ; All four kinds of replacement expressions describe a relation that maps UPPER to LOWER between LEFT and RIGHT leaving everything else un- changed. The difference is in the intelpretation of '%etween LEFT and RIGHT." 2.1. 
Overview: divide and conquer We define UPPER-> LOWER l[ LEFT RIGHT and the other versions of conditional replacement in terms of expressions that are already in our regu- lar expression language, including the uncondi- tional version just defined. Our general intention is to make the conditional replacement behave ex- actly like unconditional replacement except that the operation does not take place unless the specified context is present. This may seem a simple matter but it is not, as Kaplan and Kay 1994 show. There are several sources of complexity. One is that the part that is being replaced may at the same time serve as the context of another adjacent replacement. Another complication is the fact just mentioned: there are several ways to constrain a replacement by a con- text. We solve both problems using a technique that was originally invented for the implementation of phonological rewrite rules (Kaplan and Kay 1981, 1994) and later adapted for two-level rules (Kaplan, Karttunen, Koskenniemi 1987; Karttunen and 19 Beesley 1992). The strategy is first to decompose the complex relation into a set of relatively simple components, define the components independently of one another, and then define the whole opera- tion as a composition of these auxiliary relations. We need six intermediate relations, to be defined shortly: [22] (1) InsertBrackets (2) ConstrainBrackets (3) LeftContext (4) RightContext (5) Replace (6) RemoveBrackets Relations (1), (5), and (6) involve the unconditional replacement operator defined in the previous sec- tion. Two auxiliary symbols, < and >, are introduced in (1) and (6). The left bracket, <, indicates the end of a left context. The right bracket, >, marks the begin- ning of a complete right context. The distribution of the auxiliary brackets is controlled by (2), (3), and (4). The relations (1) and (6) that introduce the brackets internal to the composition at the same time remove them from the result. 2.2. Basic definition The full spedfication of the six component relations is given below. Here UPPER, LOWER, LEFT, and RIGHT are placeholders for regular expressions of any complexity. In each case we give a regular expression that pre- cisely defines the component followed by an Eng- lish sentence describing the same language or rela- tion. In our regular expression language, we have to prefix the auxiliary context markers with the escape symbol % to distinguish them from other uses of < and >. [23] (1) InsertBrackets [] <- %< 1%> ; The relation that eliminates from the upper side lan- guage all context markers that appear on the lower side. [24] (2) ConstrainBrackets ~$ [%< %>] ; The language consisting of strings that do not contain <> anywhere. [2s] (3) LeftContext -[-[...LEFT] [<...]] & ~[ [...LEFT] ~[<...]] ; The language in which any instance of < is immedi- ately preceded by LEFT, and every LEFT is ii~iedi- ately followed by <, ignoring irrelevant brackets. Here [...LEFT] is an abbreviation for [ [?* LEFT/[%<I%>]] - [2" %<] ], that is, anystring ending in LEFT, ignoring all brackets except for a final <. Similarly, [%<... ] stands for [%</%> ? * ], any string beginning with <, ignoring the other bracket. [26] (4) RightContext ~[ [...>] -[RIGHT...] & ~[~[...>] [RIGHT...] ; The language in which any instance of > is immedi- ately followed by RIGHT, and any RIGHT is immedi- ately preceded by >, ignoring irrelevant brackets. Here [...>] abbreviates [?* %>/%<], and RIGHT... stands for [RIGHT/ [%< 1%>] - [%> ? 
* ] ], that is, any string beginning with RIGHT, ignoring all brackets except for an initial >. [27] (5) Replace %< UPPER/[%<I %>] %> -> %< LOWER/ [%< I %>] %> ; The unconditional replacement of <UPPER> by <LOWER>, ignoring irrelevant brackets. The redundant brackets on the lower side are im- portant for the other versions of the operation. [28] (6) RemoveBrackets %< t %>-> [] ; 20 The relation that maps the strings of the upper lan- guage to the same strings without any context mark- ers. The upper side brackets are eliminated by the in- verse replacement defined in (1). 2.3. Four ways of using contexts The complete definition of the first version of con- ditional replacement is the composition of these six relations: [29] UPPER -> LOWER [l LEFT RIGHT ; InsertBrackets oO. ConstrainBrackets oO. LeftContext °O. RightContext .Oo Replace oO. RemoveBrackets ; The composition with the left and right context constraints prior to the replacement means that any instance of UPPER that is subject to replacement is surrounded by the proper context on the upper side. Within this region, replacement operates just as it does in the unconditional case. Three other versions of conditional replacement can be defined by applying one, or the other, or both context constraints on the lower side of the relation. It is done by varying the order of the three middle relations in the composition. In the right- oriented version (//), the left context is checked on the lower side of replacement: [30] UPPER -> LOWER // LEFT RIGHT ; ° o . RightContext °Oo Replace oOo LeftContext °.o The left-oriented version applies the constraints in the opposite order: UPPER -> LOWER \\ LEFT RIGHT [31] . ° ° LeftContext .O. Replace .o. RightContext ° ° ° The first three versions roughly correspond to the three alternative interpretations of phonological rewrite rules discussed in Kaplan and Kay 1994. The upward-oriented version corresponds to si- multaneous rule application; the right- and left- oriented versions can model rightward or leftward iterating processes, such as vowel harmony and assimilation. The fourth logical possibility is that the replace- ment operation is constrained by the lower context. [32] UPPER -> LOWER \/ LEFT RIGHT ; ° o o Replace .O. LeftContext oOo RightContext . • ° When the component relations are composed to- gether in this manner, UPPER gets mapped to LOWER just in case it ends up between LEFT and RIGHT in the output string. 2.4. Examples Let us illustrate the consequences of these defini- tions with a few examples. We consider four ver- sions of the same replacement expression, starting with the upward-oriented version [331 a b-> x II a b a ; applied to the string abababa. The resulting rela- tion is ab ab a b a a b x x a The second and the third occurrence of ab are re- placed by x here because they are between ab and 21 x on the upper side language of the relation• A transducer for the relation is shown in Figure 4. • x b ?l x '<!/ Figure4: a b -> x II a b _ a The path through the network that maps abababa to abxxa is 0-1-2-5-7-5-6-3. The right-oriented version, a b -> x // a b a; ? 9 b X O--G Cr Figure5: a b -> x // a b _ a givesusadifferentresult: a b a b a b a ab x aba b ? b ? ( a:x Figure6: a b -> x \\ a b _ a With abababa composed on the upper side, it yields [38] a b a b a b a a b a b x a [35] by the path 0-1-2-3-4-5-6-3. [36] following the path 0-1- 2- 5- 6-1- 2- 3. The last occurrence of ab must remain unchanged because it does not have the required left context on the lower side. 
The left-oriented version of the rule shows the opposite behavior because it constrains the left context on the upper side of the replacement re- lation and the right context on the lower side. [37] a b -> x \\ a b a ; The first two occurrences of ab remain unchanged because neither one has the proper right context on the lower side to be replaced by x. Finally, the downward-oriented fourth version: [39] a b -> x \/ a b a ; a:x Figure7: a b -> x \/ a b _ a This time, surprisingly, we get two outputs from the same input: [40] ab a b a b a , ab ab aba a b x a b a a b a b x a Path 0-1-2-5-6-1-2-3 yields abxaba, path 0- 1-2-3-4-5-6-1 gives us ababxa It is easy to see that if the constraint for the re- placement pertains to the lower side, then in this case it can be satisfied in two ways. 22 3. Comparisons 3.1. Phonological rewrite rules Our definition of replacement is in its technical aspects very closely related to the way phonologi- cal rewrite-rules are defined in Kaplan and Kay 1994 but there are important differences. The initial motivation in their original 1981 presentation was to model a left-to-right deterministic process of rule application. In the course of exploring the issues, Kaplan and Kay developed a more abstract notion of rewrite rules, which we exploit here, but their 1994 paper retains the procedural point of view. Our paper has a very different starting point. The basic case for us is unconditional obligatory re- placement, defined in a purely relational way without any consideration of how it might be ap- plied. By starting with obligatory replacement, we can easily define an optional version of the opera- tor. For Kaplan and Kay, the primary notion is op- tional rewriting. It is quite cumbersome for them to provide an obligatory version. The results are not equivalent. Although people may agree, in the case of simple phonological rewrite rules, what the outcome of a deterministic rewrite operation should be, it is not clear that this is the case for replacement expres- sions that involve arbitrary regular languages. For that reason, we prefer to define the replacement operator in relational terms without relying on an uncertain intuition about a particular procedure. 3.2. Two-level rules Our definition of replacement also has a close con- nection to two-level rules. A two-level rule always specifies whether a context element belongs to the input (= lexical) or the output (= surface) context of the rule. The two-level model also shares our pure relational view of replacement as it is not con- cerned about the application procedure. But the two-level formalism is only defined for symbol-to- symbol replacements. 4. Conclusion The goal of this paper has been to introduce to the calculus of regular expressions a replace operator, ->, with a set of associated replacement expressions that concisely encode alternate variations of the operation. We defined unconditional and conditional re- placement, taking the unconditional obligatory replacement as the basic case. We provide a simple declarative definition for it, easily expressed in terms of the other regular expression operators, and extend it to the conditional case providing four ways to constrain replacement by a context. These definitions have already been implemented. The figures in this paper correspond exactly to the output of the regular expression compiler in the Xerox finite-state calculus. Acknowledgments This work is based on many years of productive collaboration with Ronald M. Kaplan and Martin Kay. 
I am particularly indebted to Kaplan for writing a very helpful critique, even though he strongly prefers the approach of Kaplan and Kay 1994. Special thanks are also due to Kenneth R. Beesley for technical help on the definitions of the replace operators and for expert editorial advice. I am grateful to Pasi Tapanainen, Jean-Pierre Chanod and Annie Zaenen for helping to correct many terminological and rhetorical weaknesses of the initial draft. References Kaplan, Ronald M., and Kay, Martin (1981). Phonological Rules and Finite-State Transducers. Paper presented at the Annual Meeting of the Linguistic Society of America. New York. Kaplan, Ronald M. and Kay, Martin (1994). Regular Models of Phonological Rule Systems. Computational Linguistics, 20:3, 331-378. Karttunen, Lauri, Koskenniemi, Kimmo, and Kaplan, Ronald M. (1987). A Compiler for Two-level Phonological Rules. In Report No. CSLI-87-108. Center for the Study of Language and Information. Stanford University. Karttunen, Lauri and Beesley, Kenneth R. (1992). Two-level Rule Compiler. Technical Report ISTL-92-2. Xerox Palo Alto Research Center. Koskenniemi, Kimmo (1983). Two-level Morphology: A General Computational Model for Word-Form Recognition and Production. Department of General Linguistics. University of Helsinki. 23 | 1995 | 3 |
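To make the behaviour of the basic replacement relation concrete, the following brute-force Python sketch (ours, unrelated to the Xerox finite-state compiler) enumerates the outputs that the unconditional obligatory replacement of definition [7] assigns to a given input string, taking UPPER as a set of literal strings and LOWER as a single literal string; it reproduces the abaca and abc examples of section 1.1 above.

def replace_outputs(s, upper, lower):
    # All outputs of [ NO_UPPER [UPPER .x. LOWER] ]* NO_UPPER applied to s,
    # where NO_UPPER pieces may not contain any member of `upper`.
    def contains_upper(t):
        return any(u and u in t for u in upper)
    results = set()
    def walk(i, out):
        rest = s[i:]
        if not contains_upper(rest):        # final NO_UPPER piece
            results.add(out + rest)
        for j in range(i, len(s) + 1):      # next NO_UPPER piece s[i:j] ...
            if contains_upper(s[i:j]):
                break
            for u in upper:                  # ... followed by one replaced UPPER match
                if u and s.startswith(u, j):
                    walk(j + len(u), out + s[i:j] + lower)
    walk(0, "")
    return results

print(replace_outputs("abaca", {"ab", "c"}, "x"))   # {'xaxa'}
print(replace_outputs("abc", {"ab", "bc"}, "x"))    # {'ax', 'xc'} (in some order)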
New Techniques for Context Modeling Eric Sven Ristad and Robert G. Thomas Department of Computer Science Princeton University {ristad, rgt }~cs. princeton, edu Abstract We introduce three new techniques for sta- tistical language models: extension mod- eling, nonmonotonic contexts, and the di- vergence heuristic. Together these tech- niques result in language models that have few states, even fewer parameters, and low message entropies. 1 Introduction Current approaches to automatic speech and hand- writing transcription demand a strong language model with a small number of states and an even smaller number of parameters. If the model entropy is high, then transcription results are abysmal. If there are too many states, then transcription be- comes computationally infeasible. And if there are too many parameters; then "overfitting" occurs and predictive performance degrades. In this paper we introduce three new techniques for statistical language models: extension modeling, nonmonotonic contexts, and the divergence heuris- tic. Together these techniques result in language models that have few states, even fewer parameters, and low message entropies. For example, our tech- niques achieve a message entropy of 1.97 bits/char on the Brown corpus using only 89,325 parameters. By modestly increasing the number of model param- eters in a principled manner, our techniques are able to further reduce the message entropy of the Brown Corpus to 1.91 bits/char. 1 In contrast, the charac- ter 4-gram model requires 250 times as many pa- rameters in order to achieve a message entropy of only 2.47 bits/char. Given the logarithmic nature of codelengths, a savings of 0.5 bits/char is quite significant. The fact that our model performs signif- icantly better using vastly fewer parameters argues 1The only change to our model selection procedure is to replace the incremental cost formula ALe(w, ~', a) with a constant cost of 2 bits/extension. This small change reduces the test message entropy from 1.97 to 1.91 bits/char but it also quadruples the number of model parameters and triples the total codelength. that it is a much better probability model of natural language text. Our first two techniques - nonmonolonic contexts and exlension modeling - are generalizations of the traditional context model (Cleary and Witten 1984; Rissanen 1983,1986). Our third technique - the di- vergence heuristic - is an incremental model selec- tion criterion based directly on Rissanen's (1978) minimum description length (MDL) principle. The MDL principle states that the best model is the sim- plest model that provides a compact description of the observed data. In the traditional context model, every prefix and every suffix of a context is also a context. Three consequences follow from this property. The first consequence is that the context dictionary is un- necessarily large because most of these contexts are redundant. The second consequence is to attenu- ate the benefits of context blending, because most contexts are equivalent to their maximal proper suf- fixes. The third consequence is that the length of the longest candidate context can increase by at most one symbol at each time step, which impairs the model's ability to model complex sources. In a non- monotonic model, this constraint is relaxed to allow compact dictionaries, discontinuous backoff, and ar- bitrary context switching. The traditional context model maps every history to a unique context. 
All symbols are predicted us- ing that context, and those predictions are estimated using the same set of histories. In contrast, an exten- sion model maps every history to a sel of contexts, one for each symbol in the alphabet. Each symbol is predicted in its own context, and the model's current predictions need not be estimated using the same set of histories. This is a form of parameter tying that increases the accuracy of the model's predic- tions while reducing the number of free parameters in the model. As a result of these two generalizations, nonmono- tonic extension models can outperform their equiv- alent context models using significantly fewer pa- rameters. For example, an order 3 n-gram (ie., the 4-gram) requires more than 51 times as many con- 220 texts and 787 times as many parameters as the order 3 nonmonotonic extension model, yet already per- forms worse on the Brown corpus by 0.08 bits/char. Our third contribution is the divergence heuris- tic, which adds a more specific context to the model only when it reduces the codelength of the past data more than it increases the codelength of the model. In contrast, the traditional selection heuristic adds a more specific context to the model only if it's entropy is less than the entropy of the more general context (Rissanen 1983,1986). The traditional minimum en- tropy heuristic is a special case of the more effective and more powerful divergence heuristic. The diver- gence heuristic allows our models to generalize from the training corpus to the testing corpus, even for nonstationary sources such as the Brown corpus. The remainder of our article is organized into three sections. In section 2, we formally define the class of extension models and present a heuristic model selection algorithm for that model class based on the divergence criterion. Next, in section 3, we demonstrate the efficacy of our techniques on the Brown Corpus, an eclectic collection of English prose containing approximately one million words of text. Section 4 discusses possible improvements to the model class. 2 Extension Model Class This section consists of four parts. In 2.1, we for- mally define the class of extension models and prove that they satisfy the axioms of probability. In 2.2, we show to estimate the parameters of an exten- sion model using Moffat's (1990) "method C." In 2.3, we provide codelength formulas for our model class, based on efficient enumerative codes. These code- length formulas will be used to match the complexity of the model to the complexity of the data. In 2.4, we present a heuristic model selection algorithm that adds parameters to an extension model only when they reduce the codelength of the data more than they increase the codelength of the model. 2.1 Model Class Definition Formally, an extension model ¢ : (E, D, E, A) con- sists of a finite alphabet E, [E[ = m, a dictionary D of contexts, D C E*, a set of available context extensions E, E C D x E, and a probability func- tion I : E ---* [0, 1]. For every context w in D, E(w) is the set of symbols available in the context w and A(~rlw ) is the conditional probability of the symbol c~ in the context w. Note that )--]o~ A(c~[w) < 1 for all contexts w in the dictionary D. The probability /5(h1¢ ) of a string h given the model ¢, h • E', is calculated as a chain of con- ditional probabilities (1) /5(h{¢) --" ~(hnlhl...hn_l,¢)~(hl...h,~_ll¢) (1) while the conditional probability ih(elh, ¢) of a single symbol ~r after the history h is defined as (2). 
{ ~(~rlh ) if (h,a) ~ E /3(a]h, ¢) - 5(h)~(a]h2h3...h,, ¢) otherwise (2) The expansion factor 6(h) ensures that/5(.]h, ¢) is a probability function if/5(-Ih2.., h,~, ¢) is a probabil- ity function. 1 - )~(E(h)[h) (3) 6(h)- 1- ~(E(h)Ih2...h,~,¢) Note that E(h) represents a set of symbols, and so by a slight abuse of notation )~(E(h)Ih ) denotes ~]~eE(h) A(a[h), ie., the sum of A(alh ) over all ~ in E(h). Examplel. Let E:{0,1},D: {e,"0" },E(e) - {0, 1}, E("0") -= {0}. Suppose A(010 = ½, X(lle) = ½, and A(01"0" ) = 3 Then 6("0") = 1/1 _ 1 ~. ~ - y and i6(11"0",¢ ) : 5("0") •(l[e) - 1 The fundamental difference between a context model and an extension model lies in the inputs to the context selection rule, not its outputs. The traditional context model includes a selection rule s : E* --~ D whose only input is the history. In con- trast, an extension model includes a selection rule s : E* x E --+ D whose inputs include the past history and the symbol to be predicted. This dis- tinction is preserved even if we generalize the selec- tion rule to select a set of candidate contexts. Un- der such a generalization, the context model would map every history to a set of candidate contexts, ie., s : E* ---* 2 D , while an extension model would map every history and symbol to a set of candidate contexts, ie., s : E* x E --* 2 D. Our extension selection rule s : E* x E --+ D is de- fined implicitly by the set E of extensions currently in the model. The recursion in (2) says that each symbol should be predicted in its longest candidate context, while the expansion factor 6(h) says that longer contexts in the model should be trusted more than shorter contexts when combining the predic- tions from different contexts. An extension model ¢ is valid iff it satisfies the following constraints: a. eC DAE(c) :E c. Vw • D [E(w) : E =¢, ~oez(~o) A(~iw) : 1] (4) These constraints suffice to ensure that the model ¢ defines a probability function. Constraint (4a) states that every symbol has the empty string as a context. This guarantees that every symbol will always have at least one context in every history and that the re- cursion in (2) will terminate. Constraint (45) states that the sum of the probabilities of the extensions E(w) available in in a given context w cannot sum 221 to more than unity. The third constraint (4c) states that the sum of the probabilities of the extensions E(w) must sum exactly to unity when every symbol is available in that context (ie., when E(w) : E). Lemma 2.1 VyEE* Vcr62E [ fi(~]lY) : 1 :~/](EIqy) = 1 ] Proof. By the definition of 6(~ry). Theorem 1 If an exlension model ¢ is valid, then vn ]S,es,, = 1. Proof. By induction on n. For the base case, n : 1 and the statement is true by the definition of validity (constraints 4a and 4c). The induction step is true by lemma 2.1 and definition (1). [] 2.2 Parameter Estimation Let us now estimate the conditional probabilities A(.[-) required for an extension model. Traditionally, these conditional probabilities are estimated using string frequencies obtained from a training corpus. Let c(c~[w) be the number of times that the symbol followed the string w in the training corpus, and let c(w) be the sum ~es c(crlw) of all its conditional frequencies. Following Moffat (1990), we first partition the conditional event space E in a given context w into two subevents: the symbols q(w) that have previously occurred in context w and those that q(w) that have not. Formally, q(w) - {(r : c(,r[w) > 0} and ~(w) - E - q(w). 
We estimate )~c(q(w)lw ) as e(w)/(c(w) + #(w)) and )~c(4(w)[w) as #(w)/(c(w)+ #(w)) where #(w) is the to- tal weight assigned to the novel events q(w) in the context w. Currently, we calculate #(w) as min([q(w)l, Iq(w)[) so that highly variable con- texts receive more flattening, but no novel symbol in ~(w) receives more than unity weight. Next, )~c(alq(w ), w) is estimated as c(alw)/c(w ) for the previously seen symbols c~ e q(w) and Ac((r]4(w), w) is estimated uniformly as 1/[4(w)[ for the novel sym- bols ~r • 4(w). Combining these estimates, we ob- tain our overall estimate (5). c( lw) c(w) + #(w) if c~ • q(w) Ae (alw) = #(w) otherwise + O) Unlike Moffat, our estimate (5) does not use escape probabilities or any other form of context blending. All novel events 4(w) in the context w are assigned uniform probability. This is suboptimal but simpler. We note that our frequencies are incorrect when used in an extension model that contains contexts that are proper suffixes of each other. In such a sit- uation, the shorter context is only used when the longer context was not used. Let y and xy be two distinct contexts in a model ¢. Then the context y will never be used when the history is E*xy. There- fore, our estimate of A(.ly ) should be conditioned on the fact that the longer context xy did not occur. The interaction between candidate contexts can be- come quite complex, and we consider this problem in other work (Ristad and Thomas, 1995). Parameter estimation is only a small part of the overall model estimation problem. Not only do we have to estimate the parameters for a model, we have to find the right parameters to use! To do this, we proceed in two steps. First, in section 2.3, we use the minimum description length (MDL) principle to quantify the total merit of a model with respect to a training corpus. Next, in section 2.4, we use our MDL codelengths to derive a practical model selec- tion algorithm with which to find a good model in the vast class of all extension models. 2.3 Codelength Formulas The goal of this section is to establish the proper ten- sion between model complexity and data complexity, in the fundamental units of information. Although the MDL framework obliges us to propose particu- lar encodings for the model and the data, our goal is not to actually encode the data or the model. Given an extension model ¢ and a text corpus T, ITI = t, we define the total codelength L(T,¢I(I)) relative to the model class ~ using a 2-part code. L(T, ¢[(I)) : L(¢I~ ) + L(TI¢ , ~) Since conditioning on the model class (I) is always understood, we will henceforth suppress it in our notation. Firstly, we will encode the text T using the prob- ability model ¢ and an arithmetic code, obtaining the following codelength. L(T[¢) = - logif(Tl¢ ) Next, we encode the model ¢ in three parts: the con- text dictionary as L(D), the extensions as L(EID), and the conditional frequencies c(.[-) as L(e[D, E). The dictionary D of contexts forms a suffix tree containing ni vertices with branching factor i. The m tree contains n = )--~i=l ni internal vertices and no leaf vertices. There are (no + nl + ... + nm - 1)!/no!nl!...nm! such trees (Knuth, 1986:587). Ac- cordingly, this tree may be encoded with an enumer- ative code using L(D) bits. LID): Lz(n)+log( n+m-lm_l ) +log (no + nl + ...+ nm - 1)! no!nl!...nm! rn-1 i +Lz<([[DJ[,n) i=l + log ( n + I LDJl 1 JLDJI-7 ) \ 222 where [DJ is the set of all contexts in D that are proper suffixes of another context in D. 
The first term encodes the number n of internal vertices using the Elias code. The second term encodes the counts {nl, n2,..., am}. Given the frequencies of these in- ternal vertices, we may calculate the number no of leaf vertices as no = 1 + n2 + 2n3 + 3n4 +... + (m - 1)am. The third term encodes the actual tree (with- out labels) using an enumerative code. The fourth term assigns labels (ie., symbols from E) to the edges in the tree. At this point the decoder knows all con- texts which are not proper suffixes of other contexts, ie., D - LD]. The fourth term encodes the magni- tude of [D] as an integer bounded by the number n of internal vertices in the suffix tree. The fifth term identifies the contexts [DJ as interior vertices in the tree that are proper suffices of another context in D. Now we encode the symbols available in each con- text. Let mi be the number of contexts that have exactly i extensions, ie., mi - J{w: JE(w)l = i}l. 7"n Observe that ~i=1 mi = IDI. () E m -F rni log i i--1 The first term represents the encoding of {mi } while the second term represents the encoding IE(w)l for each w in D. The third term represents the encoding of E(w) as a subset of E for each w in D. Finally, we encode the frequencies c(~rlw) used to estimate the model parameters wED + g ,o, ( C(°) + ) IE(w)l where [y] consists of all contexts that have y as their maximal proper suffix, ie., all contexts that y imme- diately dominates, and [y] is the maximal proper suffix of y in D, ie., the unique context that imme- diately dominates y. The first term encodes ITI with an Elias code and the second term recursively parti- tions c(w) into c([w]) for every context w. The third term partitions the context frequency c(w) into the available extensions c(E(w)lw ) and the "unallocated frequency" c(E- E(w)lw) = c(w) - c(E(w)[w) in the context w. 2.4 Model Selection The final component of our contribution is a model selection algorithm for the extension model class ~. Our algorithm repeatedly refines the accuracy of our model in increasingly long contexts. Adding a new parameter to the model will decrease the codelength of the data and increase the codelength of the model. Accordingly, we add a new parameter to the model only if doing so will decrease the total codelength of the data and the model. The incremental cost and benefit of adding a sin- gle parameter to a given context cannot be accu- rately approximated in isolation from any other pa- rameters that might be added to that context. Ac- cordingly, the incremental cost of adding the set E' of extensions to the context w is defined as (6) while the incremental benefit is defined as (7). ALe(w, E') - L(¢ U ({w} × E')) - L(¢) (6) ALT(W, E') - L(TI¢ ) - L(T[¢ U ({w} x E')) (7) Keeping only significant terms that are monoton- ically nondecreasing, we approximate the incremen- tal cost ALe(w, E') as loglDl+log IS'l + log c(Lwj) + log ( c(w)ls, i + C 'I ) The first term represents the incremental increase in the size of the context dictionary D. The second term represents the cost of encoding the candidate extensions E(w) = E ~. The third term represents (an upper bound on) the cost of encoding c(w). The fourth term represents the cost of encoding c(.Iw ) for E(w). Only the second and fourth terms are signficant. Let us now consider the incremental benefit of adding the extensions E' to a given context w. The addition of a single parameter (w, ~r) to the model ¢ will immediately change A(alw), by definition of the model class. 
Any change to A(.Iw ) will also change the expansion factor 5(w) in that context, which may in turn change the conditional probabili- ties ~(E-E(w)lw, ¢) of symbols not available in that context. Thus the incremental benefit of adding the extensions E' to the context w may be calculated as ALT(w,E') -- c(E - E'lw)log 1 - A(E'Iw) 1 - ~(S'l~, ¢) + ~ c('/Iwll°g~(~,lw,¢ ) a' E Fd The first term represents the incremental benefit (in bits) for evaluating E - E' in the context w using the more accurate expansion factor 5(w). The sec- ond term represents the incremental benefit (in bits) of using the direct estimate A(a'lw ) instead of the model probability/5(cr'lw, ¢) in the context w. Note that A(a'lw) may be more or less than/~(cr'lw , ¢). Now the incremental cost and benefit of adding a single extension (w, cr) to a model that already contains the extensions (w, El/ may be defined as follows. ALe(w, E', a) -- ALe(w, E' U {a}) - ALe(w, E') 223 ALT(w, ~', a) - ALT(w, ~' U {a}) - ALT(W, ~') Let us now use these incremental cost/benefit for- mulas to design a simple heuristic estimation algo- rithm for the extension model. The algorithm con- sists of two subroutines. Refine(D,E,n) augments the model with all individually profitable extensions of contexts of length n. It rests on the assump- tion that adding a new context does not change the model's performance in the shorter contexts. Extend(w) determines all profitable extensions of the candidate context w, if any exist. Since it is not feasible to evaluate the incremental profit of every subset of E, Extend(w) uses a greedy heuristic that repeatedly augments the set of profitable extensions of w by the single most profitable extension until it is not longer profitable to do so. Refine( D,E,n) 1. D, := {};E, := {}; 2. Cn := {w: w e Cn-1 ~']~ A c(w) > Cmi.} ; 3. if (( n > nm~=) V (ICnl = 0)) then return; 4. for w E Cn 5. ~' := Extend(w); 6. if ISI > o then D. :-- Dn U {w}; En(w) := S; 7. D:=DUDn;E:=EUEn; 8. Refine( D,E,n -F 1); Cn is the set of candidate contexts of length n, obtained from the training corpus. Dn is the set of profitable contexts of length n, while En is the set of profitable extensions of those contexts. Extend(w) 1. S :: {}; 2. o" := argmaxoe~. {AL(w, {at})} 3. while (AL(w,S,~) > O) 4. S := S U {a}; S. o" := argrnax.e]g_ s {AL(w, ,S', ¢r)} 6. return(S); The loop in lines 3-5 repeatedly finds the single most profitable symbol a with which to augment the set S of profitable extensions. The incremental profit AL(...) is the incremental benefit ALT(...) minus the incremental cost ALe(...). Our breadth-first search considers shorter con- texts before longer ones, and consequently the deci- sion to add a profitable context y may significantly decrease the benefit of a more profitable context xy, particularly when c(xy) ~ c(y). For example, con- sider a source with two hidden states. In the first state, the source generates the alphabet E = {0, 1,2} uniformly. In the second state, the source generates the string "012" with certainty. With appropriate state transition probabilities, the source generates strings where c(0) ~ c(1) ~ e(2), c(211)/c(1 ) >> c(21e)/c(c), and c(2101)/c(01 ) > c(211)/c(1 ). In such a situation, the best context model includes the con- texts "0" and "01" along with the empty context c. However, the divergence heuristic will first deter- mine that the context "1" is profitable relative to the empty context, and add it to the model. 
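Before completing this example, it is worth writing the Refine/Extend search out as running code. The sketch below is illustrative rather than the authors' implementation: the incremental profit ΔL(w, S, σ), that is, the incremental benefit ΔL_T minus the incremental cost ΔL_φ, is abstracted behind a callback, and the rule for generating candidate contexts of length n by prepending one symbol to those of length n − 1 is an assumption.

# Sketch of the divergence-heuristic model selection loop. delta_L(w, S, s) is
# assumed to return the incremental profit of adding the extension (w, s) to a
# model that already contains the extensions (w, S), following equations (6)-(7).

def extend(w, alphabet, delta_L):
    """Greedily accumulate the profitable extensions of candidate context w."""
    S = set()
    while True:
        rest = alphabet - S
        if not rest:
            return S
        s = max(rest, key=lambda x: delta_L(w, S, x))
        if delta_L(w, S, s) <= 0:          # no remaining symbol is profitable
            return S
        S.add(s)

def refine(D, E, n, prev_candidates, count, alphabet, delta_L, n_max, c_min):
    """Add all individually profitable contexts of length n, then recurse on n + 1."""
    # Assumed candidate generation: grow each shorter candidate by one symbol on the left.
    C_n = {s + w for w in prev_candidates for s in alphabet if count(s + w) > c_min}
    if n > n_max or not C_n:
        return
    D_n, E_n = set(), {}
    for w in C_n:
        S = extend(w, alphabet, delta_L)
        if S:
            D_n.add(w)
            E_n[w] = S
    D |= D_n
    E.update(E_n)                          # shorter contexts are not revisited
    refine(D, E, n + 1, C_n, count, alphabet, delta_L, n_max, c_min)

A top-level call would start from the empty context, refine(D, E, 1, {""}, ...), so that C_1 consists of the single symbols occurring more than c_min times in the training corpus, mirroring the breadth-first order of the pseudocode above.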
Now the profitability of the better context "01" is reduced, and the divergence heuristic may therefore not include it in the model. This problem is best solved with a best-first search. Our current implementation uses a breadth-first search to limit the computational complexity of model selection.

Finally, we note that our parameter estimation techniques and model selection criteria are comparable in computational complexity to Rissanen's context models (1983, 1986). For that reason, extension models should be amenable to efficient online implementation.

3 Empirical Results

By means of the following experiments, we hope to demonstrate the utility of our context modeling techniques. All results are based on the Brown corpus, an eclectic collection of English prose drawn from 500 sources across 15 genres (Francis and Kucera, 1982). The irregular and nonstationary nature of this corpus poses an exacting test for statistical language models. We use the first 90% of each file in the corpus to estimate our models, and then use the remaining 10% of each file in the corpus to evaluate the models. Each file contains approximately 2000 words. Due to limited computational resources, we set nmax = 10, cmin = 8, and restrict our alphabet size to 70 (ie., all printing ascii characters, ignoring case distinction).

Our results are summarized in Table 1. Message entropy (in bits/symbol) is for the testing corpus only, as per traditional model validation methodology. The nonmonotonic extension model (NEM) outperforms all other models for all orders using vastly fewer parameters. Its performance is all the more impressive when we consider that no context blending or escaping is performed, even for novel events.

We note that the test message entropy of the n-gram model class is minimized by the 5-gram at 2.38 bits/char. This result for the 5-gram is not honest because knowledge of the test set was used to select the optimal model order. Jelinek and Mercer (1980) have shown how to interpolate n-grams of different order using mixing parameters that are conditioned on the history. Such interpolated Markov sources are considerably more powerful than traditional n-grams but contain even more parameters.

The best reported results on the Brown Corpus are 1.75 bits/char using a large interpolated trigram word model whose parameters are estimated using over 600,000,000 words of proprietary training data (Brown et al., 1992). The use of proprietary training data means that these results are not independently repeatable. In contrast, our results were obtained using only 900,000 words of generally available training data and may be independently verified by anyone with the inclination to do so.

Model     Parameters             Entropy
NEM       89,325                 1.97
NCM       687,276                2.19
MCM1      88,945,904             2.43
MCM2      88,945,904             3.12
n-gram    506,352,021,176,052    3.74

Table 1: Results for the nonmonotonic extension model (NEM), the nonmonotonic context model (NCM), Rissanen's (1983, 1986) monotonic context models (MCM1, MCM2) and the n-gram model. All models are order 7. The rightmost column contains test message entropy in bits/symbol. NEM outperforms all other model classes for all orders using significantly fewer parameters. It is possible to reduce the test message entropy of the NEM and NCM to 1.91 and 1.99, respectively, by quadrupling the number of model parameters.

The amount of training data is known to be a significant factor in model performance.
Given a sufficiently rich dictionary of words and a sufficiently large training corpus, a model of word sequences is likely to outperform an otherwise equivalent model of character sequences. For these three reasons - repeatability, training corpus size, and the advantage of word models over character models - the results reported by Brown et al. (1992) are not directly comparable to those reported here.

Section 3.1 compares the statistical efficiency of the various context model classes. Next, section 3.2 anecdotally examines the complex interactions among the parameters of an extension model.

3.1 Model Class Comparison

Given the tremendous risk of overfitting, the most important property of a model class is arguably its statistical efficiency. Informally, statistical efficiency measures the effectiveness of individual parameters in a given model class. A high efficiency indicates that our model class provides a good description of the data. Conversely, a low efficiency indicates that the model class does not adequately describe the observed data.

In this section, we compare the statistical efficiency of three model classes: context models, extension models, and fixed-length Markov processes (ie., n-grams). Our model class comparison is based on three criteria of statistical efficiency: total codelength, bits/parameter on the test message, and bits/order on the test message. The context and extension models are all of order 9, and were estimated using the true incremental benefit and a range of fixed incremental costs (between 5 and 25 bits/extension for the extension model and between 25 and 150 bits/context for the context model).

According to the first criterion of statistical efficiency, the best model is the one that achieves the smallest total codelength L(T, φ) of the training corpus T and model φ using the fewest parameters. This criterion measures the statistical efficiency of a model class according to the MDL framework, where we would like each parameter to be as cheap as possible and do as much work as possible. Figure 1 graphs the number of model parameters required to achieve a given total codelength for the training corpus and model. The extension model class is the overwhelming winner.

Figure 1 (plot not reproduced; x-axis: number of parameters): The relationship between the number of model parameters and the total codelength L(T, φ) of the training corpus T and the model φ. By this criterion of statistical efficiency, the extension models completely dominate context models and n-grams.

According to the second criterion of statistical efficiency, the best model is the one that achieves the lowest test message entropy using the fewest parameters. This criterion measures the statistical efficiency of a model class according to traditional model validation methodology, tempered by a healthy concern for overfitting. Figure 2 graphs the number of model parameters required to achieve a given test message entropy for each of the three model classes. Again, the extension model class is the clear winner. (This is particularly striking when the number of parameters is plotted on a linear scale.) For example, one of our extension models saves 0.98 bits/char over the trigram while using less than 1/3 as many parameters.
Given the logarithmic nature of codelength and the scarcity of training data, this is a significant improvement.

According to the third criterion of statistical efficiency, the best model is one that achieves the lowest test message entropy for a given model order. This criterion is widely used in the language modeling community, in part because model order is typically -- although not necessarily -- related to both the number of model parameters and the amount of computation required to estimate the model. Figure 3 compares model order to test message entropy for each of the three model classes. As the order of the models increases from 0 (ie., unigram) to 10, we naturally expect the test message entropy to approach a lower bound, which is itself bounded below by the true source entropy. By this criterion, the extension model class is better than the context model class, and both are significantly better than the n-gram.

Figure 2 (plot not reproduced; x-axis: number of parameters, y-axis: message entropy): The relationship between the number of model parameters and test message entropy. The most striking fact about this graph is the tremendous efficiency of the extension model.

Figure 3 (plot not reproduced; x-axis: model order, y-axis: message entropy): The relationship between model order and test message entropy. The extension model class is the clear winner by this criterion as well.

3.2 Anecdotes

It is also worthwhile to interpret the parameters of the extension model estimated from the Brown Corpus, to better understand the interaction between our model class and our heuristic model selection algorithm. According to the divergence heuristic, the decision to add an extension (w, σ) is made relative to that context's maximal proper suffix ⌊w⌋ in D as well as any other extensions in the context w. An extension (w, σ) will be added only if the direct estimate of its conditional probability is significantly different from its conditional probability in its maximal proper suffix after scaling by the expansion factor in the context w, ie., if λ(σ|w) is significantly different from δ(w)·β(σ|⌊w⌋, φ). This is illustrated by the three contexts and six extensions shown immediately below (where ⊔ denotes the space character), where +E(w) includes all symbols in E(w) that are more likely in w than they were in ⌊w⌋ and -E(w) includes all symbols in E(w) that are less likely in w than they were in ⌊w⌋.

w                +E(w)      -E(w)
"blish"          e, i, m
"o⊔establish"    ⊔
"e⊔establish"    m          e

The substring blish is most often followed by the characters 'e', 'i', and 'm', corresponding to the relatively frequent word forms publish{ed, er, ing} and establish{ed, ing, ment}. Accordingly, the context "blish" has three positive extensions {e,i,m}, of which e has by far the greatest probability. The context "blish" is the maximal proper suffix of two other contexts in the model, "o⊔establish" and "e⊔establish". The substring o⊔establish occurs most frequently in the infinitive to establish, which is nearly always followed by a space. Accordingly, the context "o⊔establish" has a single positive extension "⊔".
The substring o establish is also found before the characters 'm', 'e', and 'i' in sequences such as to establishments, {who, ratio, also} established, and { to, into, also} establishing. Accordingly, the context "ouestablish" does not have any negative exten- sions. In contrast, the substring e establish is overwhelm- ingly followed by the character 'm', rarely followed by 'e', and never followed by either 'i' or space. For this reason, the context "euestablish" has a sin- gle positive extension {m} corresponding to the great frequency of the string the establishment. This con- text also has single negative extension {e}, corre- sponding to the fact that the character 'e' is still pos- sible in the context "euestablish" but considerably less likely than in that context's maximal proper suf- fix "blish". Since 'i' is reasonably likely in the context "blish" but completely unlikely in the context "euestablish", we may well wonder why the model 226 does not include the negative extension 'i' in addi- tion to 'e' or even instead of 'e'. This puzzle is ex- plained by the expansion factor as follows. Since the substring e establish is only followed by 'm' and 'e', the expansion factor ~("e,,establish") is essen- tially zero after 'm' and 'e' are added to that con- text, and therefore ~(~- {m, e}l "euestablish") is also essentially zero. Thus, 'i' and space are both assigned nearly zero probability in the con- text "e, ,establish", simply because 'm' and 'e' get nearly all the probability in that context. 4 Conclusion In ongoing work, we are investigating extension mix- ture models as well as improved model selection al- gorithms. An extension mixture model is an exten- sion model whose ~(~lw) parameters are estimated by linearly interpolating the empirical probability estimates for all extensions that dominate w with respect to c~, ie., all extensions whose symbol is and whose context is a suffix of w. Extension mix- ing allows us to remove the uniform flattening of zero frequency symbols in our parameter estimates (5). Preliminary results are promising. The idea of context mixing is due to Jelinek and Mercer (1980). Our results highlight the fundamental tension be- tween model complexity and data complexity. If the model complexity does not match the data complex- ity, then both the total codelength of the past obser- vations and the predictive error increase. In other words, simply increasing the number of parameters in the model does not necessarily increase predictive power of the model. Therefore, it is necessary to have a a fine-grained model along with a heuristic model selection algorithm to guide the expansion of the model in a principled manner. Acknowledgements. Thanks to Andrew Appel, Carl de Marken, and Dafna Scheinvald for their cri- tique. The paper has benefited from discussions with the participants of DCC95. Both authors are par- tially supported by Young Investigator Award IRI- 0258517 to the first author from the National Science Foundation. The second author is additionally sup- ported by a tuition award from the Princeton Uni- versity Research Board. The research was partially supported by NSF SGER IRI-9217208. FRANCIS, W. N., AND KUCERA, H. Frequency analysis of English usage: lexicon and grammar. Houghton Mifflin, Boston, 1982. JELINEK, F., AND MERCER, a. L. Interpolated es- timation of Markov source parameters from sparse data. In Pattern Recognition in Practice (Amster- dam, May 21-23 1980), E. S. Gelsema and L. N. Kanal, Eds., North Holland, pp. 381-397. KNUTH, D. E. 
The Art of Computer Programming, 1 ed., vol. 1. Addison-Wesley, Reading, MA, 1968. MOFFAT, A. Implementing the PPM data compre- sion scheme. IEEE Trans. Communications 38, 11 (1990), 1917-1921. RISSANEN, J. Modeling by shortest data descrip- tion. Automatica 14 (1978), 465-471. RISSANEN, J. A universal data compression system. IEEE Trans. Information Theory IT-29, 5 (1983), 656-664. RISSANEN, J. Complexity of strings in the class of Markov sources. IEEE Trans. Information The- ory IT-32, 4 (1986), 526-532. RISTAD, E. S., AND THOMAS, R. G. Context models in the MDL framework. In Proceedings of 5th Data Compression Conference (Los Alamitos, CA, March 28-30 1995), J. Storer and M. Cohn, Eds., IEEE Computer Society Press, pp. 62-71. References BROWN, P., PIETRA, V. D., PIETRA, S. D., LAI, J., AND MERCER, R. An estimate of an upper bound for the entropy of English. Computational Linguistics 18 (1992), 31-40. CLEARY, J., AND WITTEN, I. Data compression using adaptive coding and partial string matching. IEEE Trans. Comm. COM-32, 4 (1984), 396-402. 227 | 1995 | 30 |
Bayesian Grammar Induction for Language Modeling Stanley F. Chen Aiken Computation Laboratory Division of Applied Sciences Harvard University Cambridge, MA 02138 sfc@das, harvard, edu Abstract We describe a corpus-based induction algo- rithm for probabilistic context-free gram- mars. The algorithm employs a greedy heuristic search within a Bayesian frame- work, and a post-pass using the Inside- Outside algorithm. We compare the per- formance of our algorithm to n-gram mo- dels and the Inside-Outside algorithm in three language modeling tasks. In two of the tasks, the training data is generated by a probabilistic context-free grammar and in both tasks our algorithm outperforms the other techniques. The third task involves naturally-occurring data, and in this task our algorithm does not perform as well as n-gram models but vastly outperforms the Inside-Outside algorithm. 1 Introduction In applications such as speech recognition, hand- writing recognition, and spelling correction, perfor- mance is limited by the quality of the language mo- del utilized (7; 7; 7; 7). However, static language modeling performance has remained basically un- changed since the advent of n-gram language mo- dels forty years ago (7). Yet, n-gram language mo- dels can only capture dependencies within an n- word window, where currently the largest practical n for natural language is three, and many dependen- cies in natural language occur beyond a three-word window. In addition, n-gram models are extremely large, thus making them difficult to implement effi- ciently in memory-constrained applications. An appealing alternative is grammar-based lan- guage models. Language models expressed as a pro- babilistic grammar tend to be more compact than n-gram language models, and have the ability to mo- del long-distance dependencies (7; 7; 7). However, to date there has been little success in constructing grammar-based language models competitive with n-gram models in problems of any magnitude. In this paper, we describe a corpus-based indue- tion algorithm for probabilistic context-free gram- mars that outperforms n-gram models and the Inside-Outside algorithm (7) in medium-sized do- mains. This result marks the first time a grammar- based language model has surpassed n-gram mode- ling in a task of at least moderate size. The al- gorithm employs a greedy heuristic search within a Bayesian framework, and a post-pass using the Inside-Outside algorithm. 2 Grammar Induction as Search Grammar induction can be framed as a search pro- blem, and has been framed as such almost without exception in past research (7). The search space is taken to be some class of grammars; for example, in our work we search within the space of probabilistic context-free grammars. The objective function is ta- ken to be some measure dependent on the training data; one generally wants to find a grammar that in some sense accurately models the training data. Most work in language modeling, including n- gram models and the Inside-Outside algorithm, falls under the maximum-likelihood paradigm, where one takes the objective function to be the likelihood of the training data given the grammar. However, the optimal grammar under this objective function is one which generates only strings in the training data and no other strings. Such grammars are poor lan- guage models, as they overfit the training data and do not model the language at large. 
In n-gram mo- dels and the Inside-Outside algorithm, this issue is evaded by bounding the size and form of the gram- mars considered, so that the "optimal" grammar cannot be expressed. However, in our work we do not wish to limit the size of the grammars conside- red. The basic shortcoming of the maximum-likelihood objective function is that it does not encompass the compelling intuition behind Occam's Razor, that simpler (or smaller) grammars are preferable over complex (or larger) grammars. A factor in the ob- jective function that favors smaller grammars over 228 s --, sx (l-e) s x (,) X ~ A (p(A)) Aa ---* a (1) VAeN-{S,X} VaET N = the set of all nonterminal symbols T = the set of all terminal symbols Probabilities for each rule are in parentheses. Table 1: Initial hypothesis grammar large can prevent the objective function from pre- ferring grammars that overfit the training data. ?) presents a Bayesian grammar induction framework that includes such a factor in a motivated manner. The goM of grammar induction is taken to be fin- ding the grammar with the largest a posteriori pro- bability given the training data, that is, finding the grammar G ~ where c' = arg m xp(GIo) and where we denote the training data as O, for ob- servations. As it is unclear how to estimate p(GIO) directly, we apply Bayes' Rule and get a I = arg p(Ola)p(a) p(o) = arg% xp(O[a)p(a) Hence, we can frame the search for G ~ as a search with the objective function p(OIG)p(G), the likeli- hood of the training data multiplied by the prior probability of the grammar. We satisfy the goal of favoring smaller grammars by choosing a prior that assigns higher probabilities to such grammars. In particular, Solomonoff propo- ses the use of the universal a priori probability (?), which is closely related to the minimum description length principle later proposed by (?). In the case of grammatical language modeling, this corresponds to taking p(G) = 2 -t(a) where l(G) is the length of the description of the grammar in bits. The universal a priori probabi- lity has many elegant properties, the most salient of which is that it dominates all other enumerable probability distributions multiplicativelyJ 3 Search Algorithm As described above, we take grammar induction to be the search for the grammar G ~ that optimizes the objective function p(OlG)p(G ). While this frame- work does not restrict us to a particular grammar formalism, in our work we consider only probabili- stic context-free grammars. 1A very thorough discussion of the universal a priori probability is given by 7). We assume a simple greedy search strategy. We maintain a single hypothesis grammar which is in- itialized to a small, trivial grammar. We then try to find a modification to the hypothesis grammar, such as the addition of a grammar rule, that results in a grammar with a higher score on the objective func- tion. When we find a superior grammar, we make this the new hypothesis grammar. We repeat this process until we can no longer find a modification that improves the current hypothesis grammar. For our initial grammar, we choose a grammar that can generate any string, to assure that the grammar can cover the training data. The initial grammar is listed in Table ??. The sentential symbol S expands to a sequence of X's, where X expands to every other nonterminal symbol in the grammar. Initially, the set of nonterminal symbols consists of a different nonterminal symbol expanding to each terminal symbol. 
Notice that this grammar models a sentence as a sequence of independently generated nonterminal symbols. We maintain this property throughout the search process, that is, for every symbol A ~ that we add to the grammar, we also add a rule X ---+ A I. This assures that the sentential symbol can expand to every symbol; otherwise, adding a symbol will not affect the probabilities that the grammar assigns to strings. We use the term move set to describe the set of modifications we consider to the current hypothesis grammar to hopefully produce a superior grammar. Our move set includes the following moves: Move 1: Create a rule of the form A ---* BC Move 2: Create a rule of the form A --+ BIC For any context-free grammar, it is possible to ex- press a weakly equivalent grammar using only rules of these forms. As mentioned before, with each new symbol A we also create a rule X ---* A. 3.1 Evaluating the Objective Function Consider the task of calculating the objective func- tion p(OIG)p(G ) for some grammar G. Calculating 229 S S X Aslowly i i i X A=az~s slowly I ] A Ma,-y talks i Mary S S X A,towtv [ I $ X Ataak, slowly I i ABo, talks i Bob Figure 1: Initial Viterbi Parse S s//"'x s i i I X B X AMary Atatks Astowty ABob I i t i Mary talks slowly Bob S X i B Atatk, Ajtowty I I talks slowly Figure 2: Predicted Viterbi Parse p(G) = 2 -/(G) is inexpensive2; however, calculating p(OIG) requires a parsing of the entire training data. We cannot afford to parse the training data for each grammar considered; indeed, to ever be practical for data sets of millions of words, it seems likely that we can only afford to parse the data once. To achieve this goal, we employ several approxi- mations. First, notice that we do not ever need to calculate the actual value of the objective function; we need only to be able to distinguish when a move applied to the current hypothesis grammar produces a grammar that has a higher score on the objective function, that is, we need only to be able to calcu- late the difference in the objective function resulting from a move. This can be done efficiently if we can quickly approximate how the probability of the trai- ning data changes when a move is applied. To make this possible, we approximate the proba- bility of the training data p(OIG ) by the probability of the single most probable parse, or Viterbi parse, of the training data. Furthermore, instead of recal- culating the Viterbi parse of the training data from scratch when a move is applied, we use heuristics to predict how a move will change the Viterbi parse. For example, consider the case where the training data consists of the two sentences O = {Bob talks slowly, Mary talks slowly} ~Due to space limitations, we do not specify our me- thod for encoding grammars, i.e., how we calculate l(G) for a given G. However, this will be described in the author's forthcoming Ph.D. dissertation. In Figure ??, we display the Viterbi parse of this data under the initial hypothesis grammar used in our algorithm. Now, let us consider the move of adding the rule B ---* Atalks Aslo~ty to the initial grammar (as well as the concomitant rule X ---* B). A reasonable heuristic for predic- ting how the Viterbi parse will change is to replace adjacent X's that expand to Atazk, and A~zo~,ty re- spectively with a single X that expands to B, as displayed in Figure ??. This is the actual heuristic we use for moves of the form A ---* BC, and we have analogous heuristics for each move in our move set. 
By predicting the differences in the Viterbi parse re- sulting from a move, we can quickly estimate the change in the probability of the training data. Notice that our predicted Viterbi parse can stray a great deal from the actual Viterbi parse, as errors can accumulate as move after move is applied. To minimize these effects, we process the training data incrementally. Using our initial hypothesis gram- mar, we parse the first sentence of the training data and search for the optimal grammar over just that one sentence using the described search framework. We use the resulting grammar to parse the second sentence, and then search for the optimal grammar over the first two sentences using the last grammar as the starting point. We repeat this process, par- sing the next sentence using the best grammar found on the previous sentences and then searching for the 230 best grammar taking into account this new sentence, until the entire training corpus is covered. Delaying the parsing of a sentence until all of the previous sentences are processed should yield more accurate Viterbi parses during the search process than if we simply parse the whole corpus with the initial hypothesis grammar. In addition, we still achieve the goal of parsing each sentence but once. 3.2 Parameter Training In this section, we describe how the parameters of our grammar, the probabilities associated with each grammar rule, are set. Ideally, in evaluating the ob- jective function for a particular grammar we should use its optimal parameter settings given the training data, as this is the full score that the given gram- mar can achieve. However, searching for optimal parameter values is extremely expensive computa- tionally. Instead, we grossly approximate the opti- mal values by deterministically setting parameters based on the Viterbi parse of the training data par- sed so far. We rely on the post-pass, described later, to refine parameter values. Referring to the rules in Table ??, the parameter e is set to an arbitrary small constant. The values of the parameters p(A) are set to the (smoothed) frequency of the X ~ A reduction in the Viterbi parse of the data seen so far. The remaining symbols are set to expand uniformly among their possible expansions. 3.3 Constraining Moves Consider the move of creating a rule of the form A --* BC. This corresponds to k 3 different specific rules that might be created, where k is the current number of symbols in the grammar. As it is too computationally expensive to consider each of these rules at every point in the search, we use heuristics to constrain which moves are appraised. For the left-hand side of a rule, we always create a new symbol. This heuristic selects the optimal choice the vast majority of the time; however, under this constraint the moves described earlier in this section cannot yield arbitrary context-free langua- ges. To partially address this, we add the move Move 3: Create a rule of the form A ---* AB[B With this iteration move, we can construct gram- mars that generate arbitrary regular languages. As yet, we have not implemented moves that enable the construction of arbitrary context-free grammars; this belongs to future work. To constrain the symbols we consider on the right-hand side of a new rule, we use what we call ~riggcrs. 3 A ~rigger is a phenomenon in the Viterbi parse of a sentence that is indicative that a particular move might lead to a better grammar. 
For example, 3This is not to be confused with the use of the term triggers in dynamic language modeling. in Figure .9.9 the fact that the symbols Atalks and Aszo,ozv occur adjacently is indicative that it could be profitable to create a rule B ---* At~t~sAsto,olv. We have developed a set of triggers for each move in our move set, and only consider a specific move if it is triggered in the sentence currently being parsed in the incremental processing. 3.4 Post-Pass A conspicuous shortcoming in our search framework is that the grammars in our search space are fairly unexpressive. Firstly, recall that our grammars mo- del a sentence as a sequence of independently gene- rated symbols; however, in language there is a large dependence between adjacent constituents. Further- more, the only free parameters in our search are the parameters p(A); all other symbols (except S) are fixed to expand uniformly. These choices were ne- cessary to make the search tractable. To address this issue, we use an Inside-Outside al- gorithm post-pass. Our methodology is derived from that described by .9). We create n new nonterminal symbols {X1,...,X,}, and create all rules of the form: X~ ~ Xj Xk i,j, k e {1,...,n} Xi--* A iE {1,...,n}, A E No~d- {S, X} Nold denotes the set of nonterminal symbols acqui- red in the initial grammar induction phase, and X1 is taken to be the new sentential symbol. These new rules replace the first three rules listed in Ta- ble .9.9. The parameters of these rules are initiMized randomly. Using this grammar as the starting point, we run the Inside-Outside algorithm on the training data until convergence. In other words, instead of using the naive S SXIX rule to attach symbols together in parsing data, we now use the Xi rules and depend on the Inside-Outside algorithm to train these randomly initialized rules intelligently. This post-pass allows us to express dependencies between adjacent sym- bols. In addition, it allows us to train parameters that were fixed during the initial grammar induc- tion phase. 4 Previous Work As mentioned, this work employs the Bayesian gram- mar induction framework described by Solomonoff (.9; ?). However, Solomonoff does not specify a con- crete search algorithm and only makes suggestions as to its nature. Similar research includes work by Cook et al. (1976) and Stolcke and Omohundro (1994). This work also employs a heuristic search within a Baye- sian framework. However, a different prior proba- bility on grammars is used, and the algorithms are only efficient enough to be applied to small data sets. 231 The grammar induction algorithms most suc- cessful in language modeling include the Inside- Outside algorithm (.7; ?; ?), a special case of the Expectation-Maximization algorithm (?), and work by ?). In the latter work, McCandless uses a heu- ristic search procedure similar to ours, but a very different search criteria. To our knowledge, neither algorithm has surpassed the performance of n-gram models in a language modeling task of substantial scale. 5 Results To evaluate our algorithm, we compare the perfor- mance of our algorithm to that of n-gram models and the Inside-Outside algorithm. For n-gram models, we tried n - 1,...,10 for each domain. For smoothing a particular n-gram model, we took a linear combination of all lower or- der n-gram models. In particular, we follow stan- dard practice (?; ?; ?) 
and take the smoothed i- gram probability to be a linear combination of the /-gram frequency in the training data and the smoo- thed (i - 1)-gram probability, that is, p(w01W = wi_l...w-i) = c(W~o0) + Ai,o(w) c(W) (1 - Ai,c(w))p(wolwi_2 . . . w-z) where c(W) denotes the count of the word sequence W in the training data. The smoothing parameters ,~i,c are trained through the Forward-Backward al- gorithm (?) on held-out data. Parameters Ai.e are tied together for similar c to prevent data sparsity. For the Inside-Outside algorithm, we follow the methodology described by Lari and Young. For a given n, we create a probabilistic context-free gram- mar consisting of all Chomsky normal form rules over the n nonterminal symbols {X1, •. • Xn } and the given terminal symbols, that is, all rules Xi ---* Xj Xk i,j, k E {1,...,n} Xi ---* a i E {1,. . .,n},a E T where T denotes the set of terminal symbols in the domain. All parameters are initialized randomly. From this starting point, the Inside-Outside algo- rithm is run until convergence. For smoothing, we combine the expansion distri- bution of each symbol with a uniform distribution, that is, we take the smoothed parameter ps(A ---* a) to be 1 p,(A ~ a) = (1 - A)p,,(A ---* a) + An3 -F n[T[ where p~ (A --~ a) denotes the unsmoothed parame- ter. The value n 3 + n[TI is the number of different ways a symbol expands under the Lari and Young methodology. The parameter A is trained through the Inside-Outside algorithm on held-out data. This smoothing is also performed on the Inside-Outside post-pass of our algorithm. For each domain, we tried n -- 3,..., 10. Because of the computational demands of our al- gorithm, it is currently impractical to apply it to large vocabulary or large training set problems. Ho- wever, we present the results of our algorithm in three medium-sized domains. In each case, we use 4500 sentences for training, with 500 of these sent- ences held out for smoothing. We test on 500 sent- ences, and measure performance by the entropy of the test data. In the first two domains, we created the training and test data artificially so as to have an ideal gram- mar in hand to benchmark results. In particular, we used a probabilistic grammar to generate the data. In the first domain, we created this grammar by hand; the grammar was a small English-like probabi- listic context-free grammar consisting of roughly 10 nonterminal symbols, 20 terminal symbols, and 30 rules. In the second domain, we derived the gram- mar from manually parsed text. From a million words of parsed Wall Street Journal data from the Penn treebank, we extracted the 20 most frequently occurring symbols, and the 10 most frequently oc- curring rules expanding each of these symbols. For each symbol that occurs on the right-hand side of a rule but which was not one of the most frequent 20 symbols, we create a rule that expands that symbol to a unique terminal symbol. After removing unre- achable rules, this yields a grammar of roughly 30 nonterminals, 120 terminals, and 160 rules. Para- meters are set to reflect the frequency of the corre- sponding rule in the parsed corpus. For the third domain, we took English text and reduced the size of the vocabulary by mapping each word to its part-of-speech tag. We used tagged Wall Street Journal text from the Penn treebank, which has a tag set size of about fifty. In Tables ??_?.7, we summarize our results. The ideal grammar denotes the grammar used to gene- rate the training and test data. 
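Before turning to those results, the interpolated estimate used for the n-gram baseline above can be made concrete with a short sketch. This is an illustration only: the uniform 0-gram base case, the availability of counts for every order, and the simple lookup of the tied weights λ_{i,c} are assumptions of the sketch (in the experiments those weights are trained with the Forward-Backward algorithm on held-out data).

def p_interp(w, history, counts, lam, vocab_size, order):
    # Recursively interpolated i-gram probability p(w | history).
    # counts maps a tuple of words to its training frequency, and counts[()] is
    # assumed to hold the total number of training tokens. lam(i, c) returns the
    # tied mixing weight for order i given the history count c; lam(i, 0) would
    # normally be 0 so that unseen histories back off completely.
    if order == 0:
        return 1.0 / vocab_size                      # assumed uniform base distribution
    hist = tuple(history[-(order - 1):]) if order > 1 else ()
    c_hist = counts.get(hist, 0)
    ml = counts.get(hist + (w,), 0) / c_hist if c_hist > 0 else 0.0
    lower = p_interp(w, history, counts, lam, vocab_size, order - 1)
    weight = lam(order, c_hist)
    return weight * ml + (1.0 - weight) * lower

The smoothing applied to the induced PCFG and to the Inside-Outside baseline is analogous, mixing each symbol's expansion distribution with a uniform distribution over its n³ + n|T| possible expansions.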
For each algorithm, we list the best performance achieved over all n tried, and the best n column states which value realized this performance. We achieve a moderate but significant improve- ment in performance over n-gram models and the Inside-Outside algorithm in the first two domains, while in the part-of-speech domain we are outper- formed by n-gram models but we vastly outperform the Inside-Outside algorithm. In Table ??, we display a sample of the number of parameters and execution time (on a Decstation 5000/33) associated with each algorithm. We choose n to yield approximately equivalent performance for each algorithm. The first pass row refers to the main grammar induction phase of our algorithm, and the post-pass row refers to the Inside-Outside post-pass. 232 best entropy n (bits/word) ideal grammar 2.30 our algorithm 7 2.37 n-gram model 4 2.46 Inside-Outside 9 2.60 entr. relative to n-gram -6.5% -3.7% +5.7% Table 2: English-like artificial grammar best entropy n (bits/word) ideal grammar our algorithm 9 n-gram model 4 Inside-Outside 9 4.13 4.44 entr. relative to n-gram --10.4% -3.7% 4.61 4.64 +0.7% Table 3: Wall Street Journal-like artificial grammar Notice that our algorithm produces a significantly more compact model than the n-gram model, while running significantly faster than the Inside-Outside algorithm even though we use an Inside-Outside post-pass. Part of this discrepancy is due to the fact that we require a smaller number of new nonterminal symbols to achieve equivalent performance, but we have also found that our post-pass converges more quickly even given the same number of nonterminal symbols. 6 Discussion Our algorithm consistently outperformed the Inside- Outside algorithm in these experiments. While we partially attribute this difference to using a Bayesian instead of maximum-likelihood objective function, we believe that part of this difference results from a more effective search strategy. In particular, though both algorithms employ a greedy hill-climbing strat- egy, our algorithm gains an advantage by being able to add new rules to the grammar. In the Inside-Outside algorithm, the gradient des- cent search discovers the "nearest" local minimum in the search landscape to the initial grammar. If there are k rules in the grammar and thus k parameters, then the search takes place in a fixed k-dimensional space IR ~. In our algorithm, it is possible to ex- pand the hypothesis grammar, thus increasing the dimensionality of the parameter space that is being searched. An apparent local minimum in the space ]Rk may no longer be a local minimum in the space ]~k+l; the extra dimension may provide a pathway for further improvement of the hypothesis grammar. Hence, our algorithm should be less prone to sub- optimal local minima than the Inside-Outside algo- rithm. Outperforming n-gram models in the first two do- mains demonstrates that our algorithm is able to take advantage of the grammatical structure present in data. However, the superiority of n-gram models in the part-of-speech domain indicates that to be competitive in modeling naturally-occurring data, it is necessary to model collocational information ac- curately. We need to modify our algorithm to more aggressively model n-gram information. 7 Conclusion This research represents a step forward in the quest for developing grammar-based language models for natural language. We induce models that, while being substantially more compact, outperform n- gram language models in medium-sized domains. 
The algorithm runs essentially in time and space li- near in the size of the training data, so larger do- mains are within our reach. However, we feel the largest contribution of this work does not lie in the actual algorithm specified, but rather in its indication of the potential of the in- duction framework described by Solomonoffin 1964. We have implemented only a subset of the moves that we have developed, and inspection of our re- sults gives reason to believe that these additional moves may significantly improve the performance of our algorithm. Solomonoff's induction framework is not restric- ted to probabilistic context-free grammars. After completing the implementation of our move set, we plan to explore the modeling of context-sensitive phenomena. This work demonstrates that Solomo- noff's elegant framework deserves much further con- sideration. Acknowledgements We are indebted to Stuart Shieber for his suggestions and guidance, as well as his invaluable comments on earlier drafts of this paper. This material is based 233 best entropy n (bits/word) n-gram model 6 our algorithm 7 Inside-Outside 7 entr. relative to n-gram 3.01 3.15 +4.7% 3.93 +30.6% Table 4: English sentence part-of-speech sequences WSJ n artif. n-gram 3 IO 9 first pass post-pass 5 entropy no. (bits/word) params 4.61 15000 4.64 2000 800 4.60 4000 time (sec) 50 30000 1000 5000 Table 5: Parameters and Training Time on work supported by the National Science Founda- tion under Grant Number IRI-9350192 to Stuart M. Shieber. References D. Angluin and C.H. Smith. 1983. Inductive in- ference: theory and methods. ACM Computing Surveys, 15:237-269. L.R. Bahl, J.K. Baker, P.S. Cohen, F. Jelinek, B.L. Lewis, and R.L. Mercer. 1978. Recognition of a continuously read natural corpus. In Proceedings of the IEEE International Conference on Acou- stics, Speech and Signal Processing, pages 422- 424, Tulsa, Oklahoma, April. Lalit R. Bahl, Frederick Jelinek, and Robert L. Mer- cer. 1983. A maximum likelihood approach to continuous speech recognition. IEEE Transac- tions on Pattern Analysis and Machine Intelli- gence, PAMI-5(2):179-190, March. J.K. Baker. 1975. The DRAGON system - an over- view. IEEE Transactions on Acoustics, Speech and Signal Processing, 23:24-29, February. J.K. Baker. 1979. Trainable grammars for speech recognition. In Proceedings of the Spring Confe- rence of the Acoustical Society of America, pages 547-550, Boston, MA, June. L.E. Baum and J.A. Eagon. 1967. An inequality with application to statistical estimation for pro- babilistic functions of Markov processes and to a model for ecology. Bulletin of the American Ma- thematicians Society, 73:360-363. Peter F. Brown, Vincent J. DellaPietra, Peter V. deSouza, Jennifer C. Lai, and Robert L. Mercer. 1992. Class-based n-gram models of natural lan- guage. Computational Linguistics, 18(4):467-479, December. A.P. Dempster, N.M. Laird, and D.B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(B):1-38. Frederick Jelinek and Robert L. Mercer. 1980. Inter- polated estimation of Markov source parameters from sparse data. In Proceedings of the Workshop on Pattern Recognition in Practice, Amsterdam, The Netherlands: North-Holland, May. M.D. Kernighan, K.W. Church, and W.A. Gale. 1990. A spelling correction program based on a noisy channel model. In Proceedings of the Thir- teenth International Conference on Computatio- nal Linguistics, pages 205-210. K. Lari and S.J. Young. 1990. 
The estimation of stochastic context-free grammars using the inside- outside algorithm. Computer Speech and Lan- guage, 4:35-56. K. Lari and S.J. Young. 1991. Applications of sto- chastic context-free grammars using the inside- outside algorithm. Computer Speech and Lan- guage, 5:237-257. Ming Li and Paul VitAnyi. 1993. An Introduction to Kolmogorov Complexity and its Applications. Springer-Verlag. Michael K. McCandless and James R. Glass. 1993. Empirical acquisition of word and phrase classes in the ATIS domain. In Third European Confe- rence on Speech Communication and Technology, Berlin, Germany, September. Fernando Pereira and Yves Schabes. 1992. Inside- outside reestimation from partially bracket cor- pora. In Proceedings of the 30th Annual Meeting of the ACL, pages 128-135, Newark, Delaware. P. Resnik. 1992. Probabilistic tree-adjoining gram- mar as a framework for statistical natural lan- guage processing. In Proceedings of the 14th In- 234 ternational Conference on Computational £ingui- stics. J. Rissanen. 1978. Modeling by the shortest data description. Automatica, 14:465-471. Y. Schabes. 1992. Stochastic lexicalized tree- adjoining grammars. In Proceedings of the l~th International Conference on Computational Lin- guistics. C.E. Shannon. 1951. Prediction and entropy of printed English. Bell Systems Technical Journal, 30:50-64, January. R,.J. Solomonoff. 1960. A preliminary report on a general theory of inductive inference. Techni- cal Report ZTB-138, Zator Company, Cambridge, MA, November. R.J. Solomonoff. 1964. A formal theory of inductive inference. Information and Control, 7:1-22,224- 254, March, June. Rohini Srihari and Charlotte BMtus. 1992. Combi- ning statisticM and syntactic methods in recogni- zing handwritten sentences. In AAAI Symposium: Probabilistie Approaches to Natural Language, pa- ges 121-127. 235 | 1995 | 31 |
A Pattern Matching Method for Finding Noun and Proper Noun Translations from Noisy Parallel Corpora Pascale Fung Computer Science Department Columbia University New York, NY 10027 pascale©cs, columbia, edu Abstract We present a pattern matching method for compiling a bilingual lexicon of nouns and proper nouns from unaligned, noisy paral- lel texts of Asian/Indo-European language pairs. Tagging information of one lan- guage is used. Word frequency and posi- tion information for high and low frequency words are represented in two different vec- tor forms for pattern matching. New an- chor point finding and noise elimination techniques are introduced. We obtained a 73.1% precision. We also show how the results can be used in the compilation of domain-specific noun phrases. 1 Bilingual lexicon compilation without sentence alignment Automatically compiling a bilingual lexicon of nouns and proper nouns can contribute significantly to breaking the bottleneck in machine translation and machine-aided translation systems. Domain-specific terms are hard to translate because they often do not appear in dictionaries. Since most of these terms are nouns, proper nouns or noun phrases, compiling a bilingual lexicon of these word groups is an impor- tant first step. We have been studying robust lexicon compilation methods which do not rely on sentence alignment. Existing lexicon compilation methods (Kupiec 1993; Smadja & McKeown 1994; Kumano & Hirakawa 1994; Dagan et al. 1993; Wu & Xia 1994) all attempt to extract pairs of words or compounds that are translations of each other from previously sentence- aligned, parallel texts. However, sentence align- ment (Brown et al. 1991; Kay & RSscheisen 1993; Gale & Church 1993; Church 1993; Chen 1993; Wu 1994) is not always practical when corpora have unclear sentence boundaries or with noisy text seg- ments present in only one language. Our proposed algorithm for bilingual lexicon ac- quisition bootstraps off of corpus alignment proce- dures we developed earlier (Fung & Church 1994; Fung & McKeown 1994). Those procedures at- tempted to align texts by finding matching word pairs and have demonstrated their effectiveness for Chinese/English and Japanese/English. The main focus then was accurate alignment, but the proce- dure produced a small number of word translations as a by-product. In contrast, our new algorithm per- forms a minimal alignment, to facilitate compiling a much larger bilingual lexicon. The paradigm for Fung ~: Church (1994); Fung & McKeown (1994) is based on two main steps - find a small bilingual primary lexicon, use the text segments which contain some of the word pairs in the lexicon as anchor points for alignment, align the text, and compute a better secondary lexicon from these partially aligned texts. This paradigm can be seen as analogous to the Estimation-Maximization step in Brown el al. (1991); Dagan el al. (1993); Wu & Xia (1994). For a noisy corpus without sentence boundaries, the primary lexicon accuracy depends on the robust- ness of the algorithm for finding word translations given no a priori information. The reliability of the anchor points will determine the accuracy of the sec- ondary lexicon. We also want an algorithm that bypasses a long, tedious sentence or text alignment step. 2 Algorithm overview We treat the bilingual lexicon compilation problem as a pattern matching problem - each word shares some common features with its counterpart in the translated text. 
We try to find the best repre- sentations of these features and the best ways to match them. We ran the algorithm on a small Chi- nese/English parallel corpus of approximately 5760 unique English words. The outline of the algorithm is as follows: 1. Tag the English half of the parallel text. In the first stage of the algorithm, only En- glish words which are tagged as nouns or proper nouns are used to match words in the Chinese text. 236 2. Compute the positional difference vector of each word. Each of these nouns or proper nouns is converted from their positions in the text into a vector. 3. Match pairs of positional difference vec- tors~ giving scores. All vectors from English and Chinese are matched against each other by Dynamic Time Warping (DTW). 4. Select a primary lexicon using the scores. A threshold is applied to the DTW score of each pair, selecting the most correlated pairs as the first bilingual lexicon. 5. Find anchor points using the primary lex- icon. The algorithm reconstructs the DTW paths of these positional vector pairs, giving us a set of word position points which are filtered to yield anchor points. These anchor points are used for compiling a secondary lexicon. 6. Compute a position binary vector for each word using the anchor points. The re- maining nouns and proper nouns in English and all words in Chinese are represented in a non- linear segment binary vector form from their po- sitions in the text. 7. Match binary vectors to yield a secondary lexicon. These vectors are matched against each other by mutual information. A confidence score is used to threshold these pairs. We ob- tain the secondary bilingual lexicon from this stage. In Section 3, we describe the first four stages in our algorithm, cumulating in a primary lexicon. Sec- tion 4 describes the next anchor point finding stage. Section 5 contains the procedure for compiling the secondary lexicon. 3 Finding high frequency bilingual word pairs When the sentence alignments for the corpus are un- known, standard techniques for extracting bilingual lexicons cannot apply. To make matters worse, the corpus might contain chunks of texts which appear in one language but not in its translation 1, suggest- ing a discontinuous mapping between some parallel texts. We have previously shown that using a vector rep- resentation of the frequency and positional informa- tion of a high frequency word was an effective way to match it to its translation (Fung & McKeown 1994). Dynamic Time Warping, a pattern recognition tech- nique, was proposed as a good way to match these 1This was found to be the case in the Japanese trans- lation of the AWK manual (Church et al. 1993). The Japanese AWK was also found to contain different pro- gramming examples from the English version. vectors. In our new algorithm, we use a similar po- sitional difference vector representation and DTW matching techniques. However, we improve on the matching efficiency by installing tagging and statis- tical filters. In addition, we not only obtain a score from the DTW matching between pairs of words, but we also reconstruct the DTW paths to get the points of the best paths as anchor points for use in later stages. 3.1 Tagging to identify nouns Since the positional difference vector representation relies on the fact that words which are similar in meaning appear fairly consistently in a parallel text, this representation is best for nouns or proper nouns because these are the kind of words which have con- sistent translations over the entire text. 
As ultimately we will be interested in finding domain-specific terms, we can concentrate our ef- fort on those words which are nouns or proper nouns first. For this purpose, we tagged the English part of the corpus by a modified POS tagger, and apply our algorithm to find the translations for words which are tagged as nouns, plural nouns or proper nouns only. This produced a more useful list of lexicon and again improved the speed of our program. 3.2 Positional difference vectors According to our previous findings (Fung& McK- eown 1994), a word and its translated counterpart usually have some correspondence in their frequency and positions although this correspondence might not be linear. Given the position vector of a word p[i] where the values of this vector are the positions at which this word occurs in the corpus, one can compute a positional difference vector V[i- 1] where Vii- 1] = p[i]- p[i- 1]. dim(V) is the dimension of the vector which corresponds to the occurrence count of the word. For example, if positional difference vectors for the word Governor and its translation in Chinese .~ are plotted against their positions in the text, they give characteristic signals such as shown in Figure 1. The two vectors have different dimensions because they occur with different frequencies. Note that the two signals are shifted and warped versions of each other with some minor noise. 3.3 Matching positional difference vectors The positional vectors have different lengths which complicates the matching process. Dynamic Time Warping was found to be a good way to match word vectors of shifted or warped forms (Fung & McK- eown 1994). However, our previous algorithm only used the DTW score for finding the most correlated word pairs. Our new algorithm takes it one step fur- ther by backtracking to reconstruct the DTW paths and then automatically choosing the best points on these DTW paths as anchor points. 237 16G00 140Q0 12000 10000 800O 6OOO 4O0O 200O 0 50 1OO ~ 150 200 250 word pos~ M text "govemor.ch.vec.diff" -- T4000 10000 300 80QO 20O0 50 100 150 200 word positiorl in text • govem~.en.vec.diff" -- 250 Figure 1: Positional difference signals showing similarity between Governor in English and Chinese For a given pair of vectors V1, V2, we attempt to discover which point in V1 corresponds to which point in V2 . If the two were not scaled, then po- sition i in V1 would correspond to position j in V2 where j/i is a constant. If we plot V1 against V2, we can get a diagonal line with slope j/i. If they occurred the same number of times, then every po- sition i in V1 would correspond to one and only one position j in V2. For non-identical vectors, DTW traces the correspondences between all points in V1 and V2 (with no penalty for deletions or insertions). Our DTW algorithm with path reconstruction is as follows: • Initialization where ~oz(1,1) = ((1,1) ¢pl(i, 1) = ¢(i, 1) + ~o(i - 1, 1]) toz(1,j) = ff(1,j)+~o(1,j-a) 9~(a, b) = minimum cost of moving from a to b ((c,d) = IVl[c]- V2[aq[ for i = 1,2,...,N j = 1,2,...,M g = dim(V1) M = dim(V2) • Recursion ~on+l (i, m) min [~(l, m) + ~o.(i,/)] 1</<3 for n and m = argmin[~(/, m) + ~n(i, 1)] 1<1<3 = 1,2,...,N-2 = 1,2,...,M • Termination ~ON(i, j) = min ~oN-1 (i,/)] 1</<3[I(1 , rt2) + (N(j) = argmin[~(l,m) + ~oN-x(i,j)] 1_</_<3 • Path reconstruction In our algorithm, we reconstruct the DTW path and obtain the points on the path for later use. The DTW path for Governor/~d~,~ is as shown in Figure 2. optimal path - (i, il,i2,... 
,im-2,j) where in = ~n+l(in+l), n -- N- 1,N- 2,... ,1 with iN = j We thresholded the bilingual word pairs obtained from above stages in the algorithm and stored the more reliable pairs as our primary bilingual lexicon. 3.4 Statistical filters If we have to exhaustively match all nouns and proper nouns against all Chinese words, the match- ing will be very expensive since it involves comput- ing all possible paths between two vectors, and then backtracking to find the optimal path, and doing this for all English/Chinese word pairs in the texts. The complexity of DTW is @(NM) and the complexity of the matching is O(IJNM) where I is the number of nouns and proper nouns in the English text, J is the number of unique words in the Chinese text, N is the occurrence count of one English word and M the occurrence count of one Chinese word. We previously used some frequency difference con- straints and starting point constraints (Fung & McKeown 1994). Those constraints limited the 238 W 500000 1001~ path f | i i i i 100otm ~ 300~o 40o00o 50000o Figure 2: Dynamic Time Warping path for Governor in English and Chinese number of the pairs of vectors to be compared by DTW. For example, low frequency words are not considered since their positional difference vectors would not contain much information. We also ap- ply these constraints in our experiments. However, there is still many pairs of words left to be compared. To improve the computation speed, we constrain the vector pairs further by looking at the Euclidean distance g of their means and standard deviations: E = ~/iml - m2) 2 + (~1 - ~2)~ If their Euclidean distance is higher than a cer- tain threshold, we filter the pair out and do not use DTW matching on them. This process eliminated most word pairs. Note that this Euclidean distance function helps to filter out word pairs which are very different from each other, but it is not discriminative enough to pick out the best translation of a word. So for word pairs whose Euclidean distance is below the threshold, we still need to use DTW matching to find the best translation. However, this Euclidean distance filtering greatly improved the speed of this stage of bilingual lexicon compilation. 4 Finding anchor points and eliminating noise Since the primary lexicon after thresholding is rela- tively small, we would like to compute a secondary lexicon including some words which were not found by DTW. At stage 5 of our algorithm, we try to find anchor points on the DTW paths which divide the texts into multiple aligned segments for compil- ing the secondary lexicon. We believe these anchor points are more reliable than those obtained by trac- ing all the words in the texts. For every word pair from this lexicon, we had ob- tained a DTW score and a DTW path. If we plot the points on the DTW paths of all word pairs from the lexicon, we get a graph as in the left hand side of Fig- ure 3. Each point (i, j) on this graph is on the DTW path(vl, v2) where vl is from English words in the lexicon and v2 is from the Chinese words in the lexi- con. The union effect of all these DTW paths shows a salient line approximating the diagonal. This line can be thought of the text alignment path. Its de- parture from the diagonal illustrates that the texts of this corpus are not identical nor linearly aligned. Since the lexicon we computed was not perfect, we get some noise in this graph. 
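The matching machinery just described can be summarized in code. The sketch below is an approximation under our own assumptions: it uses the textbook DTW recurrence with local cost |V1[i] - V2[j]| and no insertion or deletion penalty, since the exact recursion above is garbled in this extraction, and the pre-filter threshold is left as a parameter.

```python
# Approximate sketch of Sections 3.3-3.4: a Euclidean pre-filter over
# the (mean, standard deviation) of two positional difference vectors,
# and DTW with backtracking so that the warping path (the source of
# candidate anchor points) is recovered.  This follows the general
# shape of the method, not the authors' exact formulation.
import math

def passes_euclidean_filter(v1, v2, threshold):
    def stats(v):
        m = sum(v) / len(v)
        return m, math.sqrt(sum((x - m) ** 2 for x in v) / len(v))
    (m1, s1), (m2, s2) = stats(v1), stats(v2)
    return math.hypot(m1 - m2, s1 - s2) <= threshold

def dtw(v1, v2):
    """Return (accumulated cost, warping path as 1-indexed (i, j) pairs).
    Assumes both vectors are non-empty."""
    n, m = len(v1), len(v2)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    back = {}
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            local = abs(v1[i - 1] - v2[j - 1])
            best = min((cost[i - 1][j - 1], (i - 1, j - 1)),
                       (cost[i - 1][j], (i - 1, j)),
                       (cost[i][j - 1], (i, j - 1)))
            cost[i][j] = local + best[0]
            back[(i, j)] = best[1]
    path, cell = [], (n, m)
    while cell != (0, 0):
        path.append(cell)
        cell = back[cell]
    path.reverse()
    return cost[n][m], path
```

Word pairs whose accumulated distance falls below a threshold enter the primary lexicon, and the recovered paths supply the candidate anchor points used below.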
Previous align- ment methods we used such as Church (1993); Fung & Church (1994); Fung & McKeown (1994) would bin the anchor points into continuous blocks for a rough alignment. This would have a smoothing ef- fect. However, we later found that these blocks of anchor points are not precise enough for our Chi- nese/English corpus. We found that it is more ad- vantageous to increase the overall reliability of an- chor points by keeping the highly reliable points and discarding the rest. From all the points on the union of the DTW paths, we filter out the points by the following con- ditions: If the point (i, j) satisfies (slope constraint) j/i > 600 * N[0] (window size constraint) i >= 25 -t- iprevious (continuity constraint) j >= Jpreviou, (offset constraini) j -- jprevious > 500 then the point (i, j) is noise and is discarded. After filtering, we get points such as shown in the right hand side of Figure 3. There are 388 highly re- liable anchor points. They divide the texts into 388 segments. The total length of the texts is around 100000, so each segment has an average window size of 257 words which is considerably longer than a sen- tence length; thus this is a much rougher alignment than sentence alignment, but nonetheless we still get a bilingual lexicon out of it. 239 IO00(X) 90OO0 8O000 70000 6O00O 5O000 40000 3O00O 2C000 10OOO 0 , , , , v ~ece "a I.dlw.pos" • ~o e • $ ,t ,,~J"O '~*¢ o * %•• ° *,~* r'* * 4' *~o ,~4!Pt s °--•'°" ~ " ~.4R " ¢ . oe . .5,,,=:~. ~-¢ • , ° ".,~" t .°e . 20000 40000 600(]0 80000 100000 120000 100000 v I 90ooo i- 80000 k 7o~o o 6OOO0 F 500OO F ~¢e ee~o 3OOOO F 1o000 F • .'f, 0-- ~ = i i 0 10000 20000 30000 40000 50000 d' ; v "finered.dtw,pos" e¢ • ,7. I t l I 66000 70000 80000 90000 100000 Figure 3: DTW path reconstruction output and the anchor points obtained after filtering The constants in the above conditions are cho- sen roughly in proportion to the corpus size so that the filtered picture looks close to a clean, diagonal line. This ensures that our development stage is still unsupervised. We would like to emphasize that if they were chosen by looking at the lexicon output as would be in a supervised training scenario, then one should evaluate the output on an independent test corpus. Note that if one chunk of noisy data appeared in text1 but not in text2, this part would be segmented between two anchor points (i, j) and (u, v). We know point i is matched to point j, and point u to point v, the texts between these two points are matched but we do not make any assumption about how this segment of texts are matched. In the extreme case where i -- u, we know that the text between j and v is noise. We have at this point a segment-aligned parallel corpus with noise elimination. 5 Finding low frequency bilingual word pairs Many nouns and proper nouns were not translated in the previous stages of our algorithm. They were not in the first lexicon because their frequencies were too low to be well represented by positional difference vectors. 5.1 Non-linear segment binary vectors In stage 6, we represent the positional and frequency information of low frequency words by a binary vec- tor for fast matching. The 388 anchor points (95,10), (139,131),..., (98809, 93251) divide the two texts into 388 non- linear segments. Textl is segmented by the points (95,139,..., 98586, 98809) and text2 is segmented by the points (10,131,..., 90957, 93251). For the nouns we are interested in finding the translations for, we again look at the position vectors. 
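Stepping back briefly, the anchor-point noise filter described above can also be sketched, though only loosely: the constraint inequalities appear garbled by the text extraction (the slope constraint in particular is unreadable), so the code below reflects our reading of the intent, with corpus-dependent thresholds left as parameters.

```python
# Loose sketch of the Section 4 noise filter, under our interpretation:
# walk the union of DTW path points in order and keep a point only if
# it continues roughly monotonically from the last accepted anchor,
# with a minimum step in i and a bounded jump in j.  The exact tests
# and constants in the paper may differ.
def filter_anchor_points(points, min_step=25, max_jump=500):
    anchors = []
    prev_i, prev_j = 0, 0
    for i, j in sorted(points):
        if i >= prev_i + min_step and prev_j <= j <= prev_j + max_jump:
            anchors.append((i, j))
            prev_i, prev_j = i, j
    return anchors
```

With the anchors fixed and both texts cut into the resulting segments, the remaining nouns are again represented through their position vectors, this time relative to the segments.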
For example, the word prosperity oc- curred seven times in the English text. Its posi- tion vector is (2178, 5322,... ,86521,95341) . We convert this position vector into a binary vector V1 of 388 dimensions where VI[i] = 1 if pros- perity occured within the ith segment, VI[i] -- 0 otherwise. For prosperity, VI[i] -- 1 where i = 20, 27, 41, 47,193,321,360. The Chinese trans- lation for prosperity is ~!. Its position vec- tor is (1955,5050,... ,88048). Its binary vector is V2[i] = 1 where i = 14, 29, 41, 47,193,275,321,360. We can see that these two vectors share five segments in common. We compute the segment vector for all English nouns and proper nouns not found in the first lex- icon and whose frequency is above two. Words oc- curring only once are extremely hard to translate although our algorithm was able to find some pairs which occurred only once. 5.2 "Binary vector correlation measure To match these binary vectors V1 with their coun- terparts in Chinese V2, we use a mutual information score m. Pr(V1, V2) m = log2 Pr(Vl) Pr(V2) freq(Vl[i] = 1) Pr(V1) -- L freq(V2[i] = 1) Pr(V2) = L freq(Vl[i] -- V2[i] - 1) Pr(VI,V2) = L where L = dim(V1) = dim(V2) 240 If prosperity and ~ occurred in the same eight segments, their mutual information score would be 5.6. If they never occur in the same segments, their m would be negative infinity. Here, for prosperity/~ ~, m = 5.077 which shows that these two words are indeed highly correlated. The t-score was used as a confidence measure. We keep pairs of words if their t > 1.65 where t ~ Pr(Yl, Y2) - Pr(V1) Pr(Y2) For prosperity/~.~]~, t = 2.33 which shows that their correlation is reliable. 6 Results The English half of the corpus has 5760 unique words containing 2779 nouns and proper nouns. Most of these words occurred only once. We carried out two sets of evaluations, first counting only the best matched pairs, then counting top three Chinese translations for an English word. The top N candi- date evaluation is useful because in a machine-aided translation system, we could propose a list of up to, say, ten candidate translations to help the transla- tor. We obtained the evaluations of three human judges (El-E3). Evaluator E1 is a native Cantonese speaker, E2 a Mandarin speaker, and E3 a speaker of both languages. The results are shown in Figure 6. The average accuracy for all evaluators for both sets is 73.1%. This is a considerable improvement from our previous algorithm (Fung & McKeown 1994) which found only 32 pairs of single word trans- lation. Our program also runs much faster than other lexicon-based alignment methods. We found that many of the mistaken transla- tions resulted from insufficient data suggesting that we should use a larger size corpus in our future work. Tagging errors also caused some translation mistakes. English words with multiple senses also tend to be wrongly translated at least in part (e.g., means). There is no difference between capital let- ters and small letters in Chinese, and no difference between singular and plural forms of the same term. This also led to some error in the vector represen- tation. The evaluators' knowledge of the language and familiarity with the domain also influenced the results. 
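For reference, the correlation measures of Section 5.2 reduce to a few lines of code. The sketch below assumes the segment boundaries and word positions are already available; the denominator of the t-score is lost in this extraction, so the form sqrt(Pr(V1,V2)/L) used here follows the authors' earlier K-vec work and is our reconstruction.

```python
# Sketch of the secondary-lexicon scoring: binary segment vectors,
# mutual information, and a t-score confidence measure.  The t-score
# denominator is our assumption (see the note above).
import math

def segment_binary_vector(word_positions, segment_ends):
    """1 in dimension i iff the word occurs in the i-th segment;
    segment_ends is the sorted list of segment end positions."""
    vec = [0] * len(segment_ends)
    seg = 0
    for pos in sorted(word_positions):
        while seg < len(segment_ends) and pos > segment_ends[seg]:
            seg += 1
        if seg < len(segment_ends):
            vec[seg] = 1
    return vec

def mi_and_t(v1, v2):
    L = len(v1)
    p1 = sum(v1) / L
    p2 = sum(v2) / L
    p12 = sum(a & b for a, b in zip(v1, v2)) / L
    mi = float("-inf") if p12 == 0 else math.log2(p12 / (p1 * p2))
    t = 0.0 if p12 == 0 else (p12 - p1 * p2) / math.sqrt(p12 / L)
    return mi, t
```

Pairs scoring above the t > 1.65 confidence threshold quoted above are kept in the secondary lexicon.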
Apart from single Word to single word transla- tion such as Governor/~ and prosperity/~i~fl¢~, we also found many single word translations which show potential towards being translated as com- pound domain-specific terms such as follows: • finding Chinese words: Chinese texts do not have word boundaries such as space in English, therefore our text was tokenized into words by a statistical Chinese tokenizer (Fung & Wu 1994). Tokenizer error caused some Chinese characters to be not grouped together as one word. Our program located some of these words. For ex- ample, Green was aligned to ,~j~,/~ and -~ which suggests that ,~j~ could be a single Chinese word. It indeed is the name for Green Paper - a government document. • compound noun translations: carbon could be translated as ]i~, and monoxide as ~ . If carbon monoxide were translated separately, we would get ~ --~K4h . However, our algorithm found both carbon and monoxide to be most likely translated to the single Chinese word --~ 4h~ which is the correct translation for carbon monoxide. The words Legislative and Council were both matched to ~-¢r~ and similarly we can de- duce that Legislative Council is a compound noun/collocation. The interesting fact here is, Council is also matched to ~J. So we can deduce that ~-'r_~j should be a single Chinese word cor- responding to Legislative Council. • slang: Some word pairs seem unlikely to be translations of each other, such as collusion and its first three candidates ~(it pull), ~t~(cat), F~ (tail). Actually pulling the cat's tail is Can- tonese slang for collusion. The word gweilo is not a conventional English word and cannot be found in any dictionary but it appeared eleven times in the text. It was matched to the Cantonese characters ~, ~, ~, and ~ which separately mean vulgar/folk, name/litle, ghost and male. ~ means the colloquial term gweilo. Gweilo in Cantonese is actually an idiom referring to a male west- erner that originally had pejorative implica- tions. This word reflects a certain cultural con- text and cannot be simply replaced by a word to word translation. • collocations: Some word pairs such as projects and ~(houses) are not direct translations. However, they are found to be constituent words of collocations - the Housing Projects (by the Hong Kong Government).Both Cross and Harbour are translated to 'd~Yff.(sea bottom), and then to Pi~:i(tunnel), not a very literal transla- tion. Yet, the correct translation for ~J-~ll~ is indeed the Cross Harbor Tunnel and not the Sea Bottom Tunnel. The words Hong and Kong are both translated into ~i4~, indicating Hong Kong is a compound name. Basic and Law are both matched to ~:~2~, so we know the correct translation for ~2g~ is Basic Law which is a compound noun. • proper names In Hong Kong, there is a specific system for the transliteration of Chi- nese family names into English. Our algo- 241 lexicons primary(l) secondary(l) total(l) primary(3) secondary(3) total(3) total word pairs 128 533 661 128 533 661 correct pairs accuracy E1 E2 E3 E1 E2 E3 101 107 90 78.9% 83.6% 70.3% 352 388 382 66.0% 72.8% 71.7% 453 495 472 68.5% 74.9% 71.4% 112 101 99 87.5% 78.9% 77.3% 401 368 398 75.2% 69.0% 74.7% 513 469 497 77.6% 71.0% 75.2% Figure 4: Bilingual lexicon compilation results rithm found a handful of these such as Fung/~g, Wong/~, Poon/~, Hui/ iam/CY¢, Tam/--~, etc. 7 Conclusion Our algorithm bypasses the sentence alignment step to find a bilingual lexicon of nouns and proper nouns. 
Its output shows promise for compilation of domain- specific, technical and regional compounds terms. It has shown effectiveness in computing such a lexicon from texts with no sentence boundary information and with noise; fine-grain sentence alignment is not necessary for lexicon compilation as long as we have highly reliable anchor points. Compared to other word alignment algorithms, it does not need a pri- ori information. Since EM-based word alignment algorithms using random initialization can fall into local maxima, our output can also be used to pro- vide a better initializing basis for EM methods. It has also shown promise for finding noun phrases in English and Chinese, as well as finding new Chinese words which were not tokenized by a Chinese word tokenizer. We are currently working on identifying full noun phrases and compound words from noisy parallel corpora with statistical and linguistic infor- mation. References BROWN, P., J. LAI, L: R. MERCER. 1991. Aligning sentences in parallel corpora. In Proceedings of the 29th Annual Conference of the Association for Computational Linguistics. CHEN, STANLEY. 1993. Aligning sentences in bilin- gual corpora using lexical information. In Pro- ceedings of the 31st Annual Conference of the Association for Computational Linguistics, 9- 16, Columbus, Ohio. CHURCH, K., I. DAGAN, W. GALE, P. FUNG, J. HELFMAN, ~ B. SATISH. 1993. Aligning par- allel texts: Do methods developed for English- French generalize to Asian languages? In Pro- ceedings of Pacific Asia Conference on Formal and Computational Linguistics. CHURCH, KENNETH. 1993. Char_align: A program for aligning parallel texts at the character level. In Proceedings of the 31st Annual Conference of the Association for Computational Linguistics, 1-8, Columbus, Ohio. DAGAN, IDO, KENNETH W. CHURCH, ~:; WILLIAM A. GALE. 1993. Robust bilingual word alignment for machine aided translation. In Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives, 1-8, Columbus, Ohio. FUNG, PASCALE & KENNETH CHURCH. 1994. Kvec: A new approach for aligning parallel texts. In Proceedings of COLING 94, 1096-1102, Kyoto, Japan. FUNG, PASCALE & KATHLEEN McKEOWN. 1994. Aligning noisy parallel corpora across language groups: Word pair feature matching by dy- namic time warping. In Proceedings of the First Conference of the Association for Machine Translation in the Americas, 81-88, Columbia, Maryland. FUNC, PASCALE & DEKAI WU. 1994. Statistical augmentation of a Chinese machine-readable dictionary. In Proceedings of the 2nd Annual Workshop on Very Large Corpora, 69-85, Ky- oto, Japan. GALE, WILLIAM A. & KENNETH W. CHURCH. 1993. A program for aligning sentences in bilingual corpora. Computational Linguistics, 19(1):75-102. KAY, MARTIN ~; MARTIN ROSCHEISEN. 1993. Text- Translation alignment. Computational Linguis- tics, 19(1):121-142. KUMANO, AKIRA ~ HIDEKI HIRAKAWA. 1994. Building an mt dictionary from parallel texts based on linguistic and statistical information. In Proceedings of the 15th International Con- ference on Computational Linguistics COLING 94, 76-81, Kyoto, Japan. KUPIEC, JULIAN. 1993. An algorithm for finding noun phrase correspondences in bilingual cor- pora. In Proceedings of the 31st Annual Confer- ence of the Association for Computational Lin- guistics, 17-22, Columbus, Ohio. SMADJA, FRANK & KATHLEEN McKEOWN. 1994. Translating collocations for use in bilingual lex- icons. In Proceedings of the ARPA Human 242 Language Technology Workshop 94, Plainsboro, New Jersey. 
WU, DEKAI. 1994. Aligning a parallel English-Chinese corpus statistically with lexical criteria. In Proceedings of the 32nd Annual Conference of the Association for Computational Linguistics, 80-87, Las Cruces, New Mexico. WU, DEKAI & XUANYIN XIA. 1994. Learning an English-Chinese lexicon from a parallel corpus. In Proceedings of the First Conference of the Association for Machine Translation in the Americas, 206-213, Columbia, Maryland. 243 | 1995 | 32 |
An Algorithm for Simultaneously Bracketing Parallel Texts by Aligning Words Dekai Wu HKUST Department of Computer Science University of Science & Technology Clear Water Bay, Hong Kong dekai@cs, ust. hk Abstract We describe a grammarless method for simul- taneously bracketing both halves of a paral- lel text and giving word alignments, assum- ing only a translation lexicon for the language pair. We introduce inversion-invariant trans- duction grammars which serve as generative models for parallel bilingual sentences with weak order constraints. Focusing on Wans- duction grammars for bracketing, we formu- late a normal form, and a stochastic version amenable to a maximum-likelihood bracketing algorithm. Several extensions and experiments are discussed. 1 Introduction Parallel corpora have been shown to provide an extremely rich source of constraints for statistical analysis (e.g., Brown et al. 1990; Gale & Church 1991; Gale et al. 1992; Church 1993; Brown et al. 1993; Dagan et al. 1993; Dagan & Church 1994; Fung & Church 1994; Wu & Xia 1994; Fung & McKeown 1994). Our thesis in this paper is that the lexical information actually gives suffi- cient information to extract not merely word alignments, but also bracketing constraints for both parallel texts. Aside from purely linguistic interest, bracket structure has been empirically shown to be highly effective at con- straining subsequent training of, for example, stochas- tic context-free grammars (Pereira & ~ 1992; Black et al. 1993). Previous algorithms for automatic bracketing operate on monolingual texts and hence re- quire more grammatical constraints; for example, tac- tics employing mutual information have been applied to tagged text (Magerumn & Marcus 1990). Algorithms for word alignment attempt to find the matching words between parallel sentences. 1 Although word alignments are of little use by themselves, they provide potential anchor points for other applications, or for subsequent learning stages to acquire more inter- esting structures. Our technique views word alignment 1 Wordmatching is a more accurate term than word alignment since the matchings may cross, but we follow the literature. and bracket annotation for both parallel texts as an inte- grated problem. Although the examples and experiments herein are on Chinese and English, we believe the model is equally applicable to other language pairs, especially those within the same family (say Indo-European). Our bracketing method is based on a new formalism called an inversion.invariant transduction grammar. By their nature inversion-invariant transduction grammars overgenerate, because they permit too much constituent- ordering freedom. Nonetheless, they turn out to be very useful for recognition when the true grammar is not fully known. Their purpose is not to flag ungrammatical in- pots; instead they assume that the inputs are grammatical, the aim being to extract structure from the input data, in kindred spirit with robust parsing. 2 Inversion-Invariant Transduction Grammars A Wansduction grammar is a bilingual model that gen- erates two output streams, one for each language. The usual view of transducers as having one input stream and one output stream is more appropriate for restricted or deterministic finite-state machines. Although finite-state transducers have been well studied, they are insufficiently powerful for bilingual models. The models we consider here are non-deterministic models where the two lan- guages' role is symmetric. 
We begin by generalizing transduction to context-free form. In a context-free transduction grammar, terminal symbols come in pairs that~ are emitted to separate output streams. It follows that each rewrite rule emits not one but two streams, and that every non-terminal stands for a class of derivable substring pairs. For example, in the rewrite rule A ~ B x/y C z/e the terminal symbols z and z are symbols of the language Lx and are emitted on stream 1, while the terminal symbol y is a symbol of the language L2 and is emitted on stream 2. This rule implies that z/y must be a valid entry in the translation lexicon. A matched terminal symbol pair such as z/y is called a couple. As a spe,Aal case, the null symbol e in either language means that no output 244 S PP NP NN VP W Pro Det Class Prep N V NP VP Prep NP Pro I Det Class NN ModN [ NNPP VV [ VV NN I VP PP V ] Adv V I/~ I you/f$ ~-* for/~ ~. book/n Figure 1: Example IITG. token is generated. We call a symbol pair such as x/e an Ll-singleton, and ely an L2-singleton. We can employ context-free transduction grammars in simple attempts at generative models for bilingual sen- tence pairs. For example, pretend for the moment that the simple ttansduetion grammar shown in Figure 1 is a context-free transduction grammar, ignoring the ~ sym- bols that are in place of the usual ~ symbols. This gram- mar generates the following example pair of English and Chinese sentences in translation: (1) a. [I [[took [a book]so ]vp [for yon]~ ]vp ]s b. [~i [[~T [--*W]so ]w [~]~ ]vt, ]s Each instance of a non-terminal here actually derives two subsltings, one in each of the sentences; these two substrings are translation counterparts. This suggests writing the parse trees together: (2) ~ [[took/~Y [a/~ d~: book/1[]so ]vp [for/~[~ you/~]pp ]vv ]s The problem with context-free transduction granunars is that, just as with finite-state transducers, both sentences in a translation pair must share exactly the same gram- matic~d structure (except for optional words that can be handled with lexical singletons). For example, the fol- lowing sentence pair with a perfectly valid, alternative Chinese translation cannot be generated: (3) a. [I [[took [a book]so ]vp [for you]v~ ]vP ]s b. [~ [[~¢~]~ [~T [--~]so ]vt, ]vP ]s We introduce the device of an inversion-invafiant trans- duction grammar (IITG) to get around the inflexibility of context-free txansduction grammars. Productions are in- terpreted as rewrite rules just as with context-free trans- duction grammars, with one additional proviso: when generating output for stream 2, the constituents on a rule's right-hand side may be emitted either left-to-right (as usual) or right-to-left (in inverted order). We use instead of --~ to indicate this. Note that inversion is permitted at any level of rule expansion. With this simple proviso, the transduction grammar of Figure 1 straightforwardly generates sentence-pair (3). However, the IITG's weakened ordering constraints now also permit the following sentence pairs, where some constituents have been reversed: (4) & *[I [[for youlpp [[a bookl~p tooklvp ]vp ]s b. [~ [[~¢~]1~ [~tT [--:*:It]so ]w ]vp ]s (5) a. *[[[yon for]re [[a book]so took]w ]vp I]s b. *[~ [[~]rp [[tl[:~--]so ~T]vP ]VP ]S As a bilingual generative linguistic theory, therefore, IITGs are not well-motivated (at least for most natural language pairs), since the majority of constructs do not have freely revexsable constituents. We refer to the direction of a production's L2 con- stituent ordering as an orientation. 
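In implementation terms, a constituent of a transduction grammar derives a pair of token sequences, and the two orientations differ only in how the stream-2 halves of the children are assembled. The toy fragment below illustrates this; the function names and placeholder tokens are ours.

```python
# A constituent is a pair (stream-1 tokens, stream-2 tokens).  A couple
# x/y contributes x to stream 1 and y to stream 2; a singleton uses the
# empty string on one side.  Straight expansion keeps the same child
# order on both streams; inverted expansion reverses it on stream 2.
def couple(x, y):
    return ([x] if x else [], [y] if y else [])

def straight(*children):
    s1 = [w for c1, _ in children for w in c1]
    s2 = [w for _, c2 in children for w in c2]
    return s1, s2

def inverted(*children):
    s1 = [w for c1, _ in children for w in c1]
    s2 = [w for _, c2 in reversed(children) for w in c2]
    return s1, s2

if __name__ == "__main__":
    A = couple("book", "c_book")   # placeholder stream-2 tokens
    B = couple("for", "c_for")
    print(straight(A, B))   # (['book', 'for'], ['c_book', 'c_for'])
    print(inverted(A, B))   # (['book', 'for'], ['c_for', 'c_book'])
```

This is the extra freedom that lets a sentence pair like (3) above be generated even though a plain context-free transduction grammar cannot produce it.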
It is sometimes useful to explicitly designate one of the two possible orienta- tions when writing productions. We do this by dis- tinguishing two varieties of concatenation operators on string-pairs, depending on tim odeatation. Tim operator [] performs the "usual" paitwise concatenation so that [ A B] yields the string-pair ( Cx , C2 ) where Cx = A1Bx and (52 = A2B2. But the operator 0 concatema~ con- stituents on output stream 1 while reversing them on stream 2, so that Ci = AxBx but C2 = B2A2. For example, the NP .-. Det Class NN rule in the transduc- tion grammar above actually expands to two standard rewrite rules: -. [Bet NN] (DetClass NN) Before turning to bracketing, we take note of three lemmas for IITGs (proofs omitted): Lemma l For any inversion-invariant transduction grammar G, there exists an equivalent inversion- invariant transduction grammar G' where T(G) = T( G'), such that: 1. lfe E LI(G) and e E L2(G), then G' contains a single production of the form S' --~ e / c, where S' is the start symbol of G' and does not appear on the right-hand side of any production of G' ; 2. otherwise G' contains no productions of the form A ~ e/e. Lemma2 For any inversion-invariant transduction grammar G, there exists an equivalent inversion- invariant transduction gratrm~r G' where T(G) = T(G'), T(G) = T(G'), such that the right-hand side of any production of G' contains either a single terminal- pair or a list of nonterminals. Lemma3 For any inversion-invariant transduction grammar G, there exists an equivalent inversion trans- duction grammar G' where T( G) = T( G'), such that G' does not contain any productions of the form A --, B. 3 Bracketing Transduction Grammars For the remainder of this paper, we focus our attention on pure bracketing. We confine ourselves to bracketing 245 transduction grammars (BTGs), which are IITGs where constituent categories ate not differentiated. Aside from the start symbol S, BTGs contain only one non-terminal symbol, A, which rewrites either recursively as a string of A's or as a single terminal-pair. In the former case, the productions has the form A ~-, A ! where we use A ! to ab- breviate A... A, where thefanout f denotes the number of A's. Each A corresponds to a level of bracketing and can be thought of as demarcating some unspecified kind of syntactic category. (This same "repetitive expansion" restriction used with standard context-free grammars and transduetion grammars yields bracketing grammars with- out orientation invariauce.) A full bracketing transduction grammar of degree f contains A productions of every fanout between 2 and f, thus allowing constituents of any length up to f. In principle, a full BTG of high degree is preferable, hav- ing the greatest flexibility to acx~mmdate arbitrarily long matching sequences. However, the following theorem simplifies our algorithms by allowing us to get away with degree-2 BTGs. I ~t~ we will see how postprocessing restores the fanout flexibility (Section 5.2). Theorem 1 For any full bracketing transduction gram- mar T, there exists an equivalent bracketing transduction grammar T' in normal form where every production takes one of the followingforms: S ~ e/e S ~ A A ~ AA A ~ z/y A ~ ~:/e A ~ ely Proof By Lemmas 1, 2, and 3, we may assume T contains only productions of the form S ~-* e/e, A z/y, A ~ z/e, A ~-* e/y, and A ,--* AA... A. For proof by induction, we need only show that any full BTG T of degree f > 2 is equivalent to a full BTG T' of degree f- 1. 
It suffices to show that the production A ~-, A ! call be removed without any loss to the generated language, i.e., tha! the remaining productions in T' can still derive any string-pair derivable by T (removing a production cannot increase the set of derivable string-pairs). Let (E, C) be any siring-pair derivable from A ~ A 1, where E is output on stream 1 and C on stream 2. Define E i as the substring of E derived from the ith A of the production, and similarly define C i. There are two cases depending on the concatenation orientation, but (E, C) is derivable by T' in either case. In the first case, if the derivation used was A ..-, [A!], thenE = E 1 ...E l andC = C1...C 1. Let(E',C') = (E 1 ... E !-x, C1... C1-1). Then (E', C') is derivable from A --~ [A!-I], and thus (E, C) = (E~E 1, C~C !) is derivable from A ~ [A A]: In the second case, the derivation used was A ---. {A !), and we still have E = E 1 ... E ! but now C -- CY... C 1. Now let (E', C") = A ~ accountable/~tJ[ A ,---+ anthority/~t~ A ~ finauciaYl[#l~ A .-* secretary/~ A ~ to/~ A ~-, wfll]~ A ~ Jo A ,-, beJe A ~ thele Figure 2: Some relevant lexical productions. .. E 1-1 , C 1-1 ... C1). ~ (E', C") is derivable (~A --* (A!-I), and thus (E, e) - (E'E !, C!C ") is derivable from A ---, (A A). [7 4 Stochastic Bracketing Transduction Grammars In a stochastic BTG (SBTG), each rewrite rule has a prob- ability. Let a! denote the probability of the A-production with fanout degree f. For the remaining (lexical) pro- dnctions, we use b(z, y) to denote P[A ~ z/vlA]. The probabiliti~ obey the constraint that Ea! + Eb(z'Y)= 1 l ~¢,Y For our experiments we employed a normal form trans- duction grammar, so a! = 0 for all f # 2. The A- productions used were: A ~-* AA A b(&~) z/v A b~O x/e A ~%~) e/V for all z, y lexical translations for all z English vocabulary for all y Chinese vocabulary The b(z, y) distribution actually encodes the English- Chinese translation lexicon. As discussed below, the lexicon we employed was automatically learned from a parallel corpus, giving us the b(z, y) probabilities di- rectly. The latter two singleton forms permit any word in either sentence to be unmatched. A small e-constant is chosen for the probabilities b(z, e) and b(e, y), so that the optimal bracketing resorts to these productions only when it is otherwise impossible to match words. With BTGs, to parse means to build matched bracket- ings for senmnce-pairs rather than sentences. Tiffs means that the adjacency constraints given by the nested levels must be obeyed in the bracketings of both languages. The result of the parse gives bracketings for both input sen- tences, as well as a bracket alignment indicating the cor- responding brackets between the sentences. The bracket alignment includes a word alignment as a byproduct. Consider the following sentence pair from our corpus: 246 Jo will/~[#~ The/c Authority/~t~ belt accountabl~ theJ~ Financh~tt~ Figure 3: Bracketing tree. Secretary/--~ (6) a. The Authority will be accountable to the Finan- cial Secretary. b. Ift~l~t'~l~t~t~o Assume we have the productions in Figure 2, which is a fragment excerpted from our actual BTG. Ignoring cap- italization, an example of a valid parse that is consistent with our linguistic ideas is: (7) [[[ The/e Authority/~t~ ] [ will/~ ([ be& accountable/~t~ ] [ to/~ [ the/¢ [[ Financial/~l~ Secretary/~ ]]]])]] J. ] Figure 3 shows a graphic representation of the same brac&eting, where the 0 level of lrac, keting is marked by the horizontal line. 
The English is read in the usual depth-first left-to-right order, but for the Chinese, a hori- zontal line means the right subtree is traversed before the left. The () notation concisely displays the common struc- ture of the two sentences. However, the bracketing is clearer if we view the sentences monolingually, which allows us to invert the Chinese constituents within the 0 so that only [] brackets need to appear.. (8) a. [[[ The Authority ] [ will [[ be accountable ] [ to [ the [[ Financial Secretary ]]]]]]1. ] k [[[[ "~,'~ ] [ ~t' [[ I~ [[ ~ ~] ]]]] [ ~.l ]]]] o ] In the monolingual view, extra brackets appear in one lan- guage whenever there is a singleton in the other language. If the goal is just to obtain ~ for monolingual sen- tences, the extra brackets can be discarded aft~ parsing: (9) [[[ ~,~ ] [ ~R [ ~ [ Igil~ ~ ]] [ ~ttt ]]] o ] The basis of the bracketing strategy can be seen as choosing the bracketing that maximizes the (probabilis- tically weighted) number of words matched, subject to the BTG representational constraint, which has the ef- fect of limiting the possible crossing patterns in the word alignment. A simpler, related idea of penalizing dis- tortion from some ideal matching pattern can be found in the statistical translation (Brown et al. 1990; Brown et al. 1993) and word alignment (Dagan et al. 1993; Dagan & Church 1994) models. Unlike these mod- els, however, the BTG aims m model constituent struc- ture when determining distortion penalties. In particu- lar, crossings that are consistent with the constituent tree structure are not penalized. The implicit assumption is that core arguments of frames remain similar across lan- guages, and tha! core arguments of the same frame will surface adjacently. The accuracy of the method on a particular language pair will therefore depend upon the extent to which this language universals hypothesis holds. However, the approach is robust because if the assump- tion is violated, damage will be limited to dropping the fewest possible crossed word matchings. We now describe how a dynzmic-programming parser can compute an optimal bxackcting given a sentence-pair and a stochastic BTG. In bilingual parsing, just as with or- dinary monolingual parsing, probabilizing the grammar 247 permits ambiguities to be resolved by choosing the max- imum likelihood parse. Our algorithm is similar in spirit to the recognition algorithm for HMMs (Viterbi 1967). Denote the input English sentence by el, • •., er and the corresponding input Chinese sentence by el,..., cv. As an abbreviation we write co.., for the sequence of words eo+l,e,+2,... ,e~, and similarly for c~..~. Let 6.tu~ = maxP[e,..t/e~..~] be the maximum probability of any derivation from A that__ successfully parses both substrings es..t and ¢u..v. The best parse of the sentence pair is that with probability 60,T,0y. The algorithm computes 6o,T,0,V following the recur- fences below. 2 The time complexity of this algorithm is O(TaV a) where T and V are the lengths of the two sen~. 1. Initialization 6t--l,t,v--l,v "- 2. Recursion 6ttu v "-- Ottu u --" where l<t<T b(e,/~ ), 1 < v < V maxr/~[] 60 1 t stuv~ stuvJ .,6[ ] 611 s~ stuv ~ stuv 6[]uv = max a2 6,suu 6stuv s<S<~ u<V<v a[l stuv "- axg s max 6sSut.r 6$tUv s<S<t u<U<v v [] -- arg U max 6,suu6stuv sgut~ s<S<t u<U<v 6J~uv -- max a 2 6sSU~ 6StuU s<$<t u<U<v *r!~uv = arg s max 6,SV~ 6Stuff s<S<t u<U<v V~uv = arg U max 6,su~ 6S,uV s<S<t u<V<v 3. 
Reconstrm:tion Using 4-tuples to name each node of the parse tree, initially set qx = (0, T, 0, V) to be the root. The remaining descendants in the optimal parse tree are then given recursively for any q = (s, t, u, v) by: LEFT' " "s ~r[] u v [] ~ / ~q) = ( ' [~ '"~' '[] ''"~) f if0,t~ = [] mGHT(q) = t, LEFr' " "s o "0 v 0 v" RIGHT(q) = (a!~uv,t,u,v~u~) ) ifO, tuv = 0 Several additional extensions on this algorithm were found to be useful, and are briefly described below. De- tails are given in Wu (1995). 2We are gene~!izing argmax as to allow arg to specify the index of interest. 4.1 Simultaneous segmentation We often find the same concept realized using different numbers of words in the two languages, creating potential difficulties for word alignment; what is a single word in English may be realized as a compound in Chinese. Since Chinese text is not orthographically separated into words, the standard methodology is to first preproce~ input texts through a segmentation module (Chiang et al. 1992; Linet al. 1992; Chang & Chert 1993; Linet al. 1993; Wu & Tseng 1993; Sproat et al. 1994). However, this se- rionsly degrades our algorithm's performance, since the the segmenter may encounter ambiguities that are un- resolvable monolingually and thereby introduce errors. Even if the Chinese segmentation is acceptable moaolin- gually, it may not agree with the division of words present in the English sentence. Moreover, conventional com- pounds are frequently and unlmxlictably missing from translation lexicons, and this can furllu~ degrade perfor- Inane. To avoid such problems we have extended the algo- rithm to optimize the segmentation of the Chinese sen- tence in parallel with the ~ting lm~:ess. Note that this treatment of segmentation does not attempt to ad- dress the open linguistic question of what constitutes a Chinese "word". Our definition of a correct "segmenta- tion" is purely task-driven: longer segments are desirable if and only ff no compositional translation is possible. 4.2 Pre/post-positional biases Many of the bracketing errors are caused by singletons. With singletons, there is no cross-lingual discrimination to increase the certainty between alternative brackeaings. A heuristic to deal with this is to specify for each of the two languages whether prepositions or postpositions more common, where "preposition" here is meant not in the usual part-of-speech sense, but rather in a broad sense of the tendency of function words to attach left or right. This simple swategcm is effective because the majority of unmatched singletons are function words that counterparts in the other language. This observation holds assuming that the translation lexicon's coverage is reasonably good. For both English and Chinese, we specify a prepositional bias, which means that singletons are attached to the right whenever possible. 4.3 Punctuation constraints Certain punctuation characters give strong constituency indications with high reliability. "Perfect separators", which include colons and Chinese full stops, and "pet- feet delimiters", which include parentheses and quota- tion marks, can be used as bracketing constraints. We have extended the algorithm to precluded hypotheses that are inconsistent with such constraints, by initializ- ing those entries in the DP table corresponding to illegal sub-hypotheses with zero probabilities, These entries are blocked from recomputation during the DP phase. 
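Because the recurrences above are partly mangled by the text extraction, a simplified rendering may help. The sketch below captures only the core maximum-likelihood DP (a four-dimensional table over English and Chinese spans, filled bottom-up, with each span split in both the straight and the inverted orientation); it omits the path reconstruction of step 3 and the segmentation, positional-bias, and punctuation extensions of Sections 4.1 to 4.3, and the probability functions as well as the treatment of singletons in the initialization are our own choices.

```python
# Simplified sketch (spirit, not letter) of the bracketing DP above.
# delta[s][t][u][v] = best probability of deriving English words e[s:t]
# and Chinese words c[u:v] from A.  b(x, y), b_e(x), b_c(y) are couple
# and singleton probabilities and a2 is the A -> A A probability; all
# are assumed given.  Backpointers (step 3) are omitted.
def best_bracketing_probability(e, c, b, b_e, b_c, a2):
    T, V = len(e), len(c)
    delta = [[[[0.0] * (V + 1) for _ in range(V + 1)]
              for _ in range(T + 1)] for _ in range(T + 1)]

    # Initialization: lexical couples and singletons.
    for t in range(1, T + 1):
        for v in range(1, V + 1):
            delta[t - 1][t][v - 1][v] = b(e[t - 1], c[v - 1])
    for t in range(1, T + 1):
        for v in range(V + 1):
            delta[t - 1][t][v][v] = b_e(e[t - 1])    # English word unmatched
    for s in range(T + 1):
        for v in range(1, V + 1):
            delta[s][s][v - 1][v] = b_c(c[v - 1])    # Chinese word unmatched

    # Recursion over spans of increasing combined length.
    for total in range(2, T + V + 1):
        for s in range(T + 1):
            for t in range(s, min(T, s + total) + 1):
                for u in range(V + 1):
                    v = u + (total - (t - s))
                    if v > V:
                        continue
                    best = delta[s][t][u][v]
                    for S in range(s, t + 1):
                        for U in range(u, v + 1):
                            # straight concatenation [A A]
                            if (S, U) not in ((s, u), (t, v)):
                                best = max(best, a2 * delta[s][S][u][U]
                                                    * delta[S][t][U][v])
                            # inverted concatenation <A A>
                            if (S, U) not in ((s, v), (t, u)):
                                best = max(best, a2 * delta[s][S][U][v]
                                                    * delta[S][t][u][U])
                    delta[s][t][u][v] = best
    return delta[0][T][0][V]
```

The nested span loops give the O(T^3 V^3) behaviour noted above, and punctuation constraints of the kind just described would simply pin the corresponding delta entries to zero before this recursion runs.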
As their probabilities always remain zero, the illegal brack- etings can never participate in any optimal bracketing. 248 5 Postprocessing 5.1 A Singleton-Rebalancing Algorithm We now introduce an algorithm for further improving the bracketing accuracy in cases of singletons. Consider the following bracketing produced by the algorithm of the previous section: (10) [tThe/~ [[Authority/~f~ [wilg~ad ([be/~ accountable/~t~] [to the/~ [~/~ [Financial/~i~ Seaetary/-nl ]]])]ll] Jo ] The prepositional bias has already correctly restricted the singleton "Tbe/d' to attach to the right, but of course "The" does not belong outside the rest of the sentence, but rather with "Authority". The problem is that single- tons have no discriminative power between alternative bracket matchings--they only contribute to the ambigu- ity. However, we can minimize the impact by moving singletons as deep as possible, closer to the individual word they precede or succeed, by widening the scope of the brackets immediately following the singleton. In general this improves precision since wide-scope brack- ets are less constraining. The algorithm employs a rebalancing strategy rem- niscent of balanced-tree structures using left and right rotations. A left rotation changes a (A(BC)) structure to a ((AB)C) structure, and vice versa for a right rotation. The task is complicated by the presence of both [] and 0 brackets with both LI- and L2-singletons, since each combination presents different interactions. To be legal, a rotation must preserve symbol order on both output streams. However, the following lemma shows that any subtree can always be rebalanced at its root if either of its children is a singleton of either language. Lenuna 4 Let x be a L1 singleton, y be a L2 singleton, and A, B, C be arbitrary constituent subtrees. Then the following properties hold for the [] and 0 operators: (Associativity) [A[BC]] = [[AB]C] (A(BC)) = ((AB)C) (L, -singleton bidirectionality) lax] ~-- (A~) [,A] : (xA) (L2-singleton flipping commutativity) [Av] = (vA) [uA] = (Av) (L 1-singleton rotation properties) [z(AB)] ~- (x(AB)) ~-- ((zA)B) ~- ([xA]B) (x[aB]) ~--- [x[AB]] ~--- [[zA]B] .~ [(xA)B] [(AB)x] = ((AB)~) = (A(B~)) = (A[B~]) (lAB]x) ~- [[AB]x] = [A[Bx]] ~--- [A(Bx)] (L~-singleton rotation properties) [v(AB)] = ((AB)v) = (A(Bv)) = (AtvB]) (y[AB]) ~-- [[AB]y] ~ [A[By]] ~-- [A(yB)] [(AB)v] ,~ (y(AB)) ~ ((vA)B) ~- (My]B) ([AB]v) ~ [v[AB]] = ttvA]B] = [(Av)B] The method of Figure 4 modifies the input tree to attach singletons as closely as possible to couples, but remain- ing consistent with the input tree in the following sense: singletons cannot "escape" their inmmdiately surround- ing brackets. The key is that for any given subtree, if the outermost bracket involves a singleton that should be rotated into a subtree, then exactly one of the single- ton rotation properties will apply. The method proceeds depth-first, sinking each singleton as deeply as possible. For example, after rebalm~cing, sentence (10) is bracketed as follows: (11) [[[[The/e Authority/~] [witV~1t' ([be/e accountable/~tft] [to the/~ [dFBJ [Fhumciai/ll~'i~ Secretary/--~ 111)111 Jo ] 5.2 Flattening the Bracketing Because the BTG is in normal form, each bracket can only hold two constituents. This improves parsing ef- ficiency, but requires overcommiUnent since the algo- rithm is always forced to choose between (A(BC)) and ((AB)C) statures even when no choice is clearly bet- ter. 
In the worst case, both senteau:~ might have perfectly aligned words, lending no discriminative leverage what- soever to the bfac~ter. This leaves a very large number of choices: if both sentences are of length i = m, then thel~ ~ (21) 1 possible lracJw~ngs with fanout 2, none of which is better justitied than any other. Thus to improve accuracy, we should reduce the specificity of the bracketing's commitment in such cases. We implement this with another postprocessing stage. The algorithm proceeds bottom-up, elimiDming as malay brackets as possible, by making use of the associafiv- ity equivalences [ABel = [A[BC]] = [lAB]C] and SINK-SINGLETON(node) 1 ffnode is not aleaf 2 if a rotation property applies at node 3 apply the rotation to node 4 ch//d ~-- the child into which the singleton 5 was rotated 6 SINK-SINGLETON(chi/d) RE~AL~CE-aXEE(node) 1 if node is not a leaf 2 REBALANCE-TREE(left-child[node]) 3 REeALANCE-TREE(right-child[node]) 4 S ~K-SXNGI.,E'ro~(node) Figure 4: The singleton rebalancing schema. 249 [These/~ arrangements/~ will/e ef~ enhance/~q~ our/~ ([d~J ability/~;0] [tok dEt ~ maintain/~t~ monetary/~t stability/~ in the years to come/e]) do ] [The/e Authority/~]~ will/~ ([be/e accountable/gt~] [to the/e elm Financial/l~i~ Secretary/~]) Jo ] [They/~t!l~J ( are/e right/iE~ d-l-Jff tok do/~ e/~ so/e ) io ] [([ Evenk more~ important/l~ ] [Je however/~_ ]) [Je e/~, is/~ to make the very best of our/e e/~ffl~ own/~ $~ e/~J talent/X~ ] J. ] hope/e e/o!~l employers/{l[~l~ will/~ make full/e dg~rj'~ use/~ [offe those/]Jl~a~__] (([dJfJ-V who/&] [have aequired/e e/$~ new/~i skills/tS~l~ ]) [through/L~i~t thisJ~l programme/~l'|~]) J. ] have/~ o at/e length/~l ( on/e how/~g~ we/~ e/~ll~) [canFaJJ)~ boostk d~ilt our/~:~ e/~ prosperity/$~ ]Jo] Figure 5: Bracketing/alignment output examples. (~ = unrecognized input token.) (ABC) = (A(BC)) = ((AB)C). Tim singletonbidi- rectionality and flipping eommutativity equivalences (see Lemma 4) are also applied, whenever they render the as- sociativity equivalences applicable. The final result after flattening sentence (11) is as fol- lows: (12) [ The/e Authority/~]~ will/g~' ([ be/e accountable/J~tJ![ ] [ to tl~/e elm Financial/l~ Secretary/--~ 1) j o ] 6 Experiments Evaluation methodology for bracketing is controversial because of varying perspectives on what the "gold stan- dard" should be. We identify two prototypical positions, and give results for both. One position uses a linguistic evaluation criterion, where accuracy is measured against some theoretic notion of constituent structure. The other position uses a functional evaluation criterion, where the "correctness" of a bracketing depends on its utility with respect to the application task at hand. For example, here we consider a bracket-pair functionally useful if it cor- rectly identifies phrasal translations---especially where the phrases in the two languages are not compositionally derivable solely from obvious word translations. Notice that in contrast, the linguistic evaluation criterion is in- sensitive to whether the bracketings of the two sentences match each other in any semantic way, as long as the monolingual bracketings in each sentence are correct. In either case, the bracket precision gives the proportion of found br~&ets that agree with the chosen correctness criterion. All experiments reported in this paper were performed on sentence-pairs from the HKUST English-Chinese Par- allel Bilingual Corpus, which consists of governmental transcripts (Wu 1994). 
The translation lexicon was au- tomatically learned from the same corpus via statisti- cal sentence alignment (Wu 1994) and statistical Chi- nese word and collocation extraction (Fung & Wu 1994; Wu & Fung 1994), followed by an EM word-translation learning procedure (Wu & Xia 1994). The translation lexicon contains an English vocabulary of approximately 6,500 words and a Chinese vocabulary of approximately 5,500 words. The mapping is many-to-many, with an average of 2.25 Chinese translations per English word. The translation accuracy is imperfect (about 86% percent weighted precision), which turns out to cause many of the bracketing errors. Approximately 2,000 sentence-pairs with both English and Chinese lengths of 30 words or less were extracted from our corpus and bracketed using the algorithm de- scribed. Several additional criteria were used to filter out unsuitable sentence-pairs. If the lengths of the pair of sentences differed by more thml a 2:1 ratio, the pair was rejected; such a difference usually arises as the re- sult of an earlier error in automatic sentence alignment. Sentences containing more than one word absent from the translation lexicon were also rejected; the bracketing method is not intended to be robust against lexicon inade- quacies. We also rejected sentence pairs with fewer than two matching words, since this gives the bracketing al- gorithm no diso'iminative leverage; such pairs ~c~ounted for less than 2% of the input data. A random sample of the b~keted sentence pairs was then drawn, and the bracket precision was computed under each criterion for correctness. Additional examples are shown in Figure 5. Under the linguistic criterion, the monolingual bracket precision was 80.4% for the English sentences, and 78.4% for the Chinese sentences. Of course, monolinguai grammar-based bracketing methods can achieve higher precision, but such tools assume grammar resources that may not be available, such as good Chinese granuna~. Moreover, if a good monolingual bracketer is available, its output can easily be incorporated in much the same way as punctn~ion constraints, thereby combining the best of both worlds. Under the functional criterion, the parallel bracket precision was 72.5%, lower than the monolingual precision since brackets can be correct in one language but not the other. Grammar-based bracket- ing methods cannot directly produce results of a compa- rable nature. 250 7 Conclusion We have proposed a new tool for the corpus linguist's arsenal: a method for simultaneously bracketing both halves of a parallel bilingual corpus, using only a word translation lexicon. The method can also be seen as a word alignment algorithm that employs a realistic dis- tortion model and aligns consituents as well as words. The basis of the approach is a new inversion-invariant transduction grammar formalism. Various extension strategies for simultaneous segmen- tation, positional biases, punctuation constraints, single- ton rebalancing, and bracket flattening have been intro- duced. Parallel bracketing exploits a relatively untapped source of constraints, in that parallel bilingual sentences are used to mutually analyze each other. The model nonetheless retains a high degree of compatibility with more conventional monolingual formalisms and methods. The bracketing and alignment of parallel corpora can be fully automatized with zero initial knowledge re- sources, with the aid of automatic procedures for learning word translation lexicons. 
This is particularly valuable for work on languages for which online knowledge re- sources are relatively scarce compared with English. Acknowledgement I would like to thank Xuanyin Xia, Eva Wai-Man Foug, Pascale Fung, and Derick Wood. References BLACK, EZRA, ROGER GARSIDE, & GEoF~EY I ~ (eds.). 1993. Statistically-driven computer grammars of En- glish: The IB~aster approach. Amsterdam: Edi- tions Rodopi. BROWN, Pt~reR F., JOHN COCKE, STEPHEN A. D~1APt~rgA, VINCENT J. ~t~rttA, FR~ERICK J~LnqWK, JOHN D. ~ , ROBERT L. MERCER, & PAUL S. RoossiN. 1990. A statistical approach to machine translation. Com- putational Linguistics, 16(2):29-85. BROWN, PETER E, STEPHEN A. DIKLAPmTxA, VINCENT J. DEL- LAPteTgA, & ROBERT L. M~CER. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311. CHANG, CHAO-HUANG & CHE~G-DER CHEN. 1993. HMM- based part-of-speech tagging for Chinese corpora. In Pro- ceedings of the Workshop on Very Large Corpora, 40-47, Columbus, Ohio. CHIANG, TUNG-HUI, JING-SHIN CHANG, MING-YU LIN, & KEH- YIH Su. 1992. Statistical models for word segmentation and unknown resolution. In Proceedings of ROCLING-92, 121-146. CHURCH, ~ W. 1993. Char-align: A program for align- ing parallel texts at the character level. In Proceedings of the 31st Annual Conference of the Association for Com- putational Linguistics, 1-8, Columbus, OH. DAGAN, IDO & KENNETH W. CHURCH. 1994. Termight: Iden- tifying and translating technical terminology. In Proceed- ings of the Fourth Conference on Applied Natural Lan- guage Processing, 34-40, Stuttgart. DAGAN, IDO, KENNETH W. CHURCH, & W[][JJ~ A. GAL~. 1993. Robust bilingual word alignment for machine aided translation. In Proceedings of the Wor~hop on Very Large Corpora, 1-8, Columbus, OH. FUNO, PASCALE & KENNETH W. CHURCH. 1994. K-vec: A new approach for aligning parallel texts. In Proceedings of the Fifteenth International Conference on Computational Linguistics, 1096-1102, Kyoto. FUNG, PASCALE & KATI~J~ McKEoWN. 1994. Aligning noisy parallel corpora across language groups: Word pair feature matching by dynamic time warping. In AMTA- 94, Association for Machine Translation in the Americas, 81-88, Columbia, Maryland. FUNO, PASCALE & DEKAI Wu. 1994. Statistical augmentation of a Chinese machine-readable dictionary. In Proceedings of the Second Annual Workshop on Very Large Corpora, 69-85, Kyoto. GALE, WnH~M A. & ~ W. CHURCH. 1991. Aprogram for aligning sentences in bilingual corpora. In Proceed- ings of the 29th Annual Conference of the Association for Computational Linguistics, 177-184, Berkeley. GALE, WnHAM A., KENNETH W. CHURCH, & DAVID YAROWSKY. 1992. Using bilingual materials to develop word sense disambiguatlon methods. In Fourth Inter- national Conference on Theoretical and Methodological Issues in Machine Translation, 101-112, Montreal. I~, M~o-Yu, Tt~o-Hta ~o, & K~-Ym Su. 1993. A preliminary study on unknown word problem in Chi- nese word segmentation. In Proceedings ofROCLING-93, 119-141. LIN, YI-CHUNG, TUNG-HUI CHIANG, & KEH-Ym SU. 1992. Discrimination oriented pmbabilistic tagging. In Proceed- ings of ROCLING-92, 85-96. MAGERMAN, DAVID M. & ~ L p. MARCUS. 1990. Parsing a natural language using mutual information statistics. In Proceedings of AAAI-90, Eighth National Conference on Artificial Intelligence, 984--989. PEREIRA, FEXNANDO & YVES SCHABES. 1992. Inside-outside re, estimation from partially bracketed corpora. 
In Proceedings of the 30th Annual Conference of the Association for Computational Linguistics, 128-135, Newark, DE. SPROAT, RICHARD, CHILIN SHIH, WILLIAM GALE, & N. CHANG. 1994. A stochastic word segmentation algorithm for a Mandarin text-to-speech system. In Proceedings of the 32nd Annual Conference of the Association for Computational Linguistics, Las Cruces, New Mexico. To appear. VITERBI, ANDREW J. 1967. Error bounds for convolutional codes and an asymptotically optimal decoding algorithm. IEEE Transactions on Information Theory, 13:260-269. WU, DEKAI. 1994. Aligning a parallel English-Chinese corpus statistically with lexical criteria. In Proceedings of the 32nd Annual Conference of the Association for Computational Linguistics, 80-87, Las Cruces, New Mexico. WU, DEKAI. 1995. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. In preparation. WU, DEKAI & PASCALE FUNG. 1994. Improving Chinese tokenization with linguistic filters on statistical lexical acquisition. In Proceedings of the Fourth Conference on Applied Natural Language Processing, 180-181, Stuttgart. WU, DEKAI & XUANYIN XIA. 1994. Learning an English-Chinese lexicon from a parallel corpus. In AMTA-94, Association for Machine Translation in the Americas, 206-213, Columbia, Maryland. WU, ZIMIN & GWYNETH TSENG. 1993. Chinese text segmentation for text retrieval: Achievements and problems. Journal of The American Society for Information Science, 44(9):532-542. 251 | 1995 | 33 |
Two-Level, Many-Paths Generation

Kevin Knight
USC/Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292
[email protected]

Vasileios Hatzivassiloglou
Department of Computer Science
Columbia University
New York, NY 10027
[email protected]

Abstract

Large-scale natural language generation requires the integration of vast amounts of knowledge: lexical, grammatical, and conceptual. A robust generator must be able to operate well even when pieces of knowledge are missing. It must also be robust against incomplete or inaccurate inputs. To attack these problems, we have built a hybrid generator, in which gaps in symbolic knowledge are filled by statistical methods. We describe algorithms and show experimental results. We also discuss how the hybrid generation model can be used to simplify current generators and enhance their portability, even when perfect knowledge is in principle obtainable.

1 Introduction

A large-scale natural language generation (NLG) system for unrestricted text should be able to operate in an environment of 50,000 conceptual terms and 100,000 words or phrases. Turning conceptual expressions into English requires the integration of large knowledge bases (KBs), including grammar, ontology, lexicon, collocations, and mappings between them. The quality of an NLG system depends on the quality of its inputs and knowledge bases. Given that perfect KBs do not yet exist, an important question arises: can we build high-quality NLG systems that are robust against incomplete KBs and inputs? Although robustness has been heavily studied in natural language understanding (Weischedel and Black, 1980; Hayes, 1981; Lavie, 1994), it has received much less attention in NLG (Robin, 1995).

We describe a hybrid model for natural language generation which offers improved performance in the presence of knowledge gaps in the generator (the grammar and the lexicon), and of errors in the semantic input. The model comes out of our practical experience in building a large Japanese-English newspaper machine translation system, JAPANGLOSS (Knight et al., 1994; Knight et al., 1995). This system translates Japanese into representations whose terms are drawn from the SENSUS ontology (Knight and Luk, 1994), a 70,000-node knowledge base skeleton derived from resources like WordNet (Miller, 1990), Longman's Dictionary (Procter, 1978), and the PENMAN Upper Model (Bateman, 1990). These representations are turned into English during generation. Because we are processing unrestricted newspaper text, all modules in JAPANGLOSS must be robust.

In addition, we show how the model is useful in simplifying the design of a generator and its knowledge bases even when perfect knowledge is available. This is accomplished by relegating some aspects of lexical choice (such as preposition selection and non-compositional interlexical constraints) to a statistical component. The generator can then use simpler rules and combine them more freely; the price of this simplicity is that some of the output may be invalid. At this point, the statistical component intervenes and filters from the output all but the fluent expressions. The advantage of this two-level approach is that the knowledge bases in the generator become simpler, easier to develop, more portable across domains, and more accurate and robust in the presence of knowledge gaps.
2 Knowledge Gaps

In our machine translation experiences, we traced generation disfluencies to two sources:[1] (1) incomplete or inaccurate conceptual (interlingua) structures, caused by knowledge gaps in the source language analyzer, and (2) knowledge gaps in the generator itself. These two categories of gaps include:

• Interlingual analysis often does not include accurate representations of number, definiteness, or time. (These are often unmarked in Japanese and require exceedingly difficult inferences to recover).
• The generation lexicon does not mark rare words and generally does not distinguish between near synonyms (e.g., finger vs. ?digit).
• The generation lexicon does not contain much collocational knowledge (e.g., on the field vs. *on the end zone).
• Lexico-syntactic constraints (e.g., tell her hi vs. *say her hi), syntax-semantics mappings (e.g., the vase broke vs. *the food ate), and selectional restrictions are not always available or accurate.

[1] See also (Kukich, 1988) for a discussion of fluency problems in NLG systems.

The generation system we use, PENMAN (Penman, 1989), is robust because it supplies appropriate defaults when knowledge is missing. But the default choices frequently are not the optimal ones; the hybrid model we describe provides more satisfactory solutions.

3 Issues in Lexical Choice

The process of selecting words that will lexicalize each semantic concept is intrinsically linked with syntactic, semantic, and discourse structure issues.[2] Multiple constraints apply to each lexical decision, often in a highly interdependent manner. However, while some lexical decisions can affect future (or past) lexical decisions, others are purely local, in the sense that they do not affect the lexicalization of other semantic roles.

[2] We consider lexical choice as a general problem for both open and closed class words, not limiting it to the former only as is sometimes done in the generation literature.

Consider the case of time adjuncts that express a single point in time, and assume that the generator has already decided to use a prepositional phrase for one of them. There are several forms of such adjuncts, e.g., She left at five / on Monday / in February. In terms of their interactions with the rest of the sentence, these manifestations of the adjunct are identical. The use of different prepositions is an interlexical constraint between the semantic and syntactic heads of the PP that does not propagate outside the PP. Consequently, the selection of the preposition can be postponed until the very end. Existing generation models, however, select the preposition according to defaults, randomly among possible alternatives, or by explicitly encoding the lexical constraints. The PENMAN generation system (Penman, 1989) defaults the preposition choice for point-time adjuncts to at, the most commonly used preposition in such cases. The FUF/SURGE (Elhadad, 1993) generation system is an example where prepositional lexical restrictions in time adjuncts are encoded by hand, producing fluent expressions but at the cost of a larger grammar.

Collocational restrictions are another example of lexical constraints. Phrases such as three straight victories, which are frequently used in sports reports to express historical information, can be decomposed semantically into the head noun plus its modifiers.
However, when ellipsis of the head noun is considered, a detailed corpus analysis of actual basketball game reports (Robin, 1995) shows that the forms won/lost three straight X, won/lost three consecutive X, and won/lost three straight are regularly used, but the form *won/lost three consecutive is not. To achieve fluent output within the knowledge-based generation paradigm, lexical constraints of this type must be explicitly identified and represented.

Both the above examples indicate the presence of (perhaps domain-dependent) lexical constraints that are not explainable on semantic grounds. In the case of prepositions in time adjuncts, the constraints are institutionalized in the language, but still nothing about the concept MONTH relates to the use of the preposition in with month names instead of, say, on (Herskovits, 1986). Furthermore, lexical constraints are not limited to the syntagmatic, interlexical constraints discussed above. For a generator to be able to produce sufficiently varied text, multiple renditions of the same concept must be accessible. Then, the generator is faced with paradigmatic choices among alternatives that without sufficient information may look equivalent. These choices include choices among synonyms (and near-synonyms), and choices among alternative syntactic realizations of a semantic role. However, it is possible that not all the alternatives actually share the same level of fluency or currency in the domain, even if they are rough paraphrases.

In short, knowledge-based generators are faced with multiple, complex, and interacting lexical constraints,[3] and the integration of these constraints is a difficult problem, to the extent that the need for a different specialized architecture for lexical choice in each domain has been suggested (Danlos, 1986). However, compositional approaches to lexical choice have been successful whenever detailed representations of lexical constraints can be collected and entered into the lexicon (e.g., (Elhadad, 1993; Kukich et al., 1994)). Unfortunately, most of these constraints must be identified manually, and even when automatic methods for the acquisition of some types of this lexical knowledge exist (Smadja and McKeown, 1991), the extracted constraints must still be transformed to the generator's representation language by hand. This narrows the scope of the lexicon to a specific domain; the approach fails to scale up to unrestricted language. When the goal is domain-independent generation, we need to investigate methods for producing reasonable output in the absence of a large part of the information traditionally available to the lexical chooser.

[3] Including constraints not discussed above, originating for example from discourse structure, the user models for the speaker and hearer, and pragmatic needs.

4 Current Solutions

Two strategies have been used in lexical choice when knowledge gaps exist: selection of a default,[4] and random choice among alternatives. Default choices have the advantage that they can be carefully chosen to mask knowledge gaps to some extent. For example, PENMAN defaults article selection to the and tense to present, so it will produce The dog chases the cat in the absence of definiteness information. Choosing the is a good tactic, because the works with mass, count, singular, plural, and occasionally even proper nouns, while a does not.
On the down side, the's only outnumber a's and an's by about two-to-one (Knight and Chander, 1994), so guessing the will frequently be wrong. Another ploy is to give preference to nominalizations over clauses. This generates sentences like They plan the statement of the filing for bankruptcy, avoiding disasters like They plan that it is said to file for bankruptcy. Of course, we also miss out on sparkling renditions like They plan to say that they will file for bankruptcy. The alternative of randomized decisions offers increased paraphrasing power but also the risk of producing some non-fluent expressions; we could generate sentences like The dog chased a cat and A dog will chase the cat, but also An earth circles a sun.

To sum up, defaults can help against knowledge gaps, but they take time to construct, limit paraphrasing power, and only return a mediocre level of quality. We seek methods that can do better.

[4] See also (Harbusch et al., 1994) for a thorough discussion of defaulting in NLG systems.

5 Statistical Methods

Another approach to the problem of incomplete knowledge is the following. Suppose that according to our knowledge bases, input I may be rendered as sentence A or sentence B. If we had a device that could invoke new, easily obtainable knowledge to score the input/output pair (I, A) against (I, B), we could then choose A over B, or vice-versa. An alternative to this is to forget I and simply score A and B on the basis of fluency. This essentially assumes that our generator produces valid mappings from I, but may be unsure as to which is the correct rendition. At this point, we can make another approximation -- modeling fluency as likelihood. In other words, how often have we seen A and B in the past? If A has occurred fifty times and B none at all, then we choose A. But if A and B are long sentences, then probably we have seen neither. In that case, further approximations are required. For example, does A contain frequent three-word sequences? Does B?

Following this reasoning, we are led into statistical language modeling. We built a language model for the English language by estimating bigram and trigram probabilities from a large collection of 46 million words of Wall Street Journal material.[5] We smoothed these estimates according to class membership for proper names and numbers, and according to an extended version of the enhanced Good-Turing method (Church and Gale, 1991) for the remaining words. The latter smoothing operation not only optimally regresses the probabilities of seen n-grams but also assigns a non-zero probability to all unseen n-grams which depends on how likely their component m-grams (m < n, i.e., words and bigrams) are. The resulting conditional probabilities are converted to log-likelihoods for reasons of numerical accuracy and used to estimate the overall probability P(S) of any English sentence S according to a Markov assumption, i.e.,

  log P(S) = Σ_i log P(w_i | w_{i-1})            (bigrams)
  log P(S) = Σ_i log P(w_i | w_{i-1}, w_{i-2})   (trigrams)

Because both equations would assign lower and lower probabilities to longer sentences and we need to compare sentences of different lengths, a heuristic strictly increasing function of sentence length, f(l) = 0.5·l, is added to the log-likelihood estimates.

[5] Available from the ACL Data Collection Initiative, as CD ROM 1.

6 First Experiment

Our first goal was to integrate the symbolic knowledge in the PENMAN system with the statistical knowledge in our language model. We took a semantic representation generated automatically from a short Japanese sentence.
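Before turning to the experiment itself, the length-corrected scoring of Section 5 can be sketched as follows. This is a minimal Python illustration with names of our own choosing, not the system's actual code; the smoothed log-probability tables are assumed to have been estimated from the training corpus as described above, and the simple unigram back-off and floor value for unseen words stand in for the full smoothing scheme.

    def score_sentence(words, log_p_bigram, log_p_unigram, unseen=-10.0):
        # Length-corrected log-likelihood of a word sequence under a bigram model.
        total = log_p_unigram.get(words[0], unseen)
        for prev, curr in zip(words, words[1:]):
            # Smoothed conditional log P(curr | prev); fall back to a unigram
            # estimate when the bigram was never observed (a simplification).
            total += log_p_bigram.get((prev, curr), log_p_unigram.get(curr, unseen))
        # Heuristic length correction f(l) = 0.5 * l, so that candidate
        # renditions of different lengths can be compared.
        return total + 0.5 * len(words)

    # Ranking candidate renditions then amounts to sorting by this score:
    # ranked = sorted(candidates, key=lambda s: score_sentence(s.split(), bg, ug), reverse=True)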
We then used PENMAN to generate 3,456 English sentences corresponding to the 3,456 (= 2^7 × 3^3) possible combinations of the values of seven binary and three ternary features that were unspecified in the semantic input. These features were relevant to the semantic representation but their values were not extractable from the Japanese sentence, and thus each of their combinations corresponded to a particular interpretation among the many possible in the presence of incompleteness in the semantic input. Specifying a feature forced PENMAN to make a particular linguistic decision. For example, adding (:identifiability-q t) forces the choice of determiner, while the :lex feature offers explicit control over the selection of open-class words. A literal translation of the input sentence was something like As for new company, there is plan to establish in February. Here are three randomly selected translations; note that the object of the "establishing" action is unspecified in the Japanese input, but PENMAN supplies a placeholder it when necessary, to ensure grammaticality:

A new company will have in mind that it is establishing it on February.
The new company plans the launching on February.
New companies will have as a goal the launching at February.

We then ranked the 3,456 sentences using the bigram version of our statistical language model, with the hope that good renditions would come out on top. Here is an abridged list of outputs, log-likelihood scores heuristically corrected for length, and rankings:

1 The new company plans to launch it in February. [ -13.568260 ]
2 The new company plans the foundation in February. [ -13.755152 ]
3 The new company plans the establishment in February. [ -13.821412 ]
4 The new company plans to establish it in February. [ -14.121367 ]
...
60 The new companies plan the establishment on February. [ -16.350112 ]
61 The new companies plan the launching in February. [ -16.530286 ]
...
400 The new companies have as a goal the foundation at February. [ -23.836556 ]
401 The new companies will have in mind to establish it at February. [ -23.842337 ]
...

While this experiment shows that statistical models can help make choices in generation, it fails as a computational strategy. Running PENMAN 3,456 times is expensive, but nothing compared to the cost of exhaustively exploring all combinations in larger input representations corresponding to sentences typically found in newspaper text. Twenty or thirty choice points typically multiply into millions or billions of potential sentences, and it is infeasible to generate them all independently. This leads us to consider other algorithms.

7 Many-Paths Generation

Instead of explicitly constructing all possible renditions of a semantic input and running PENMAN on them, we use a more efficient data structure and control algorithm to express possible ambiguities. The data structure is a word lattice -- an acyclic state transition network with one start state, one final state, and transitions labeled by words. Word lattices are commonly used to model uncertainty in speech recognition (Waibel and Lee, 1990) and are well adapted for use with n-gram models.
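To make the data structure concrete, the sketch below represents a word lattice as a list of word-labelled arcs between states and extracts the highest-scoring path under the bigram model by dynamic programming. It is our own illustration, not the system's implementation (which uses an N-best variant of this search, discussed below); arcs are assumed to be listed in topological order of their source states, and the sentence-initial context "<s>" and the unseen-bigram floor are our simplifications.

    def best_path(arcs, start, final, log_p_bigram, unseen=-10.0):
        # arcs: (from_state, to_state, word) triples of an acyclic word lattice,
        # listed so that every arc into a state precedes the arcs leaving it.
        # best[(state, last_word)] = (score, words) of the best partial path.
        best = {(start, "<s>"): (0.0, [])}
        for frm, to, word in arcs:
            for (state, prev), (score, words) in list(best.items()):
                if state != frm:
                    continue
                cand = (score + log_p_bigram.get((prev, word), unseen), words + [word])
                if cand[0] > best.get((to, word), (float("-inf"), []))[0]:
                    best[(to, word)] = cand
        finals = [v for (state, _), v in best.items() if state == final]
        return max(finals)[1] if finals else None

    # e.g. best_path([(0, 1, "the"), (0, 1, "a"), (1, 2, "deficit"), (2, 3, "fell")],
    #                0, 3, bigram_table)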
As we discussed in Section 3, a number of generation difficulties can be traced to the existence of constraints between words and phrases. Our generator operates on lexical islands, which do not interact with other words or concepts.[6] How to identify such islands is an important problem in NLG: grammatical rules (e.g., agreement) may help group words together, and collocational knowledge can also mark the boundaries of some lexical islands (e.g., nominal compounds). When no explicit information is present, we can resort to treating single words as lexical islands, essentially adopting a view of maximum compositionality. Then, we rely on the statistical model to correct this approximation, by identifying any violations of the compositionality principle on the fly during actual text generation.

[6] At least as far as the generator knows.

The type of the lexical islands and the manner by which they have been identified do not affect the way our generator processes them. Each island corresponds to an independent component of the final sentence. Each individual word in an island specifies a choice point in the search and causes the creation of a state in the lattice; all continuations of alternative lexicalizations for this island become paths that leave this state. Choices between alternative lexical islands for the same concept also become states in the lattice, with arcs leading to the sub-lattices corresponding to each island.

Once the semantic input to the generator has been transformed to a word lattice, a search component identifies the N highest scoring paths from the start to the final state, according to our statistical language model. We use a version of the N-best algorithm (Chow and Schwartz, 1989), a Viterbi-style beam search algorithm that allows extraction of more than just the best scoring path. (Hatzivassiloglou and Knight, 1995) has more details on our search algorithm and the method we applied to estimate the parameters of the statistical model.

Our approach differs from traditional top-down generation in the same way that top-down and bottom-up parsing differ. In top-down parsing, backtracking is employed to exhaustively examine the space of possible alternatives. Similarly, traditional control mechanisms in generation operate top-down, either deterministically (Meteer et al., 1987; Tomita and Nyberg, 1988; Penman, 1989) or by backtracking to previous choice points (Elhadad, 1993). This mode of operation can unnecessarily duplicate a lot of work at run time, unless sophisticated control directives are included in the search engine (Elhadad and Robin, 1992). In contrast, in bottom-up parsing and in our generation model, a special data structure (a chart or a lattice respectively) is used to efficiently encode multiple analyses, and to allow structure sharing between many alternatives, eliminating repeated search.

What should the word lattices produced by a generator look like? If the generator has complete knowledge, the word lattice will degenerate to a string, e.g.:

[lattice diagram: a single path the -- large -- Federal -- deficit -- fell]

Suppose we are uncertain about definiteness and number. We can generate a lattice with eight paths instead of one:

[lattice diagram: eight paths over the same words, leaving the article and noun-number choices open; * stands for the empty string]

But we run the risk that the n-gram model will pick a non-grammatical path like a large Federal deficits fell.
So we can produce the following lattice instead:

[lattice diagram: a six-path lattice in which the article and noun-number choices are linked so that agreement is respected]

In this case, we use knowledge about agreement to constrain the choices offered to the statistical model, from eight paths down to six. Notice that the six-path lattice has more states and is more complex than the eight-path one. Also, the n-gram length is critical. When long-distance features control grammaticality, we cannot rely on the statistical model. Fortunately, long-distance features like agreement are among the first that go into any symbolic generator. This is our first example of how symbolic and statistical knowledge sources contain complementary information, which is why there is a significant advantage to combining them.

Now we need an algorithm for converting generator inputs into word lattices. Our approach is to assign word lattices to each fragment of the input, in a bottom-up compositional fashion. For example, consider the following semantic input, which is written in the PENMAN-style Sentence Plan Language (SPL) (Penman, 1989), with concepts drawn from the SENSUS ontology (Knight and Luk, 1994), and may be rendered in English as It is easy for Americans to obtain guns:

(A / |have the quality of being|
   :DOMAIN (P / |procure|
              :AGENT (A2 / |American|)
              :PATIENT (G / |gun, arm|))
   :RANGE (E / |easy, effortless|))

We process semantic subexpressions in a bottom-up order, e.g., A2, G, P, E, and finally A. The grammar assigns what we call an e-structure to each subexpression. An e-structure consists of a list of distinct syntactic categories, paired with English word lattices: (<syn, lat>, <syn, lat>, ...). As we climb up the input expression, the grammar glues together various word lattices. The grammar is organized around semantic feature patterns rather than English syntax -- rather than having one S -> NP-VP rule with many semantic triggers, we have one AGENT-PATIENT rule with many English renderings. Here is a sample rule:

((x1 :agent) (x2 :patient) (x3 :rest) ->
 (s (seq (x1 np) (x3 v-tensed) (x2 np)))
 (s (seq (x1 np) (x3 v-tensed) (wrd "that") (x2 s)))
 (s (seq (x1 np) (x3 v-tensed) (x2 (*OR* inf inf-raise))))
 (s (seq (x2 np) (x3 v-passive) (wrd "by") (x1 np)))
 (inf (seq (wrd "for") (x1 np) (wrd "to") (x3 v) (x2 np)))
 (inf-raise (seq (x1 np)
                 (or (seq (wrd "of") (x3 np) (x2 np))
                     (seq (wrd "to") (x3 v) (x2 np)))))
 (np (seq (x3 np) (wrd "of") (x2 np) (wrd "by") (x1 np))))

Given an input semantic pattern, we locate the first grammar rule that matches it, i.e., a rule whose left-hand-side features except :rest are contained in the input pattern. The feature :rest is our mechanism for allowing partial matchings between rules and semantic inputs. Any input features that are not matched by the selected rule are collected in :rest, and recursively matched against other grammar rules.

For the remaining features, we compute new e-structures using the rule's right-hand side. In this example, the rule gives four ways to make a syntactic S, two ways to make an infinitive, and one way to make an NP. Corresponding word lattices are built out of elements that include the following (a small sketch of how they compose is given after the list):

• (seq x y ...) -- create a lattice by sequentially gluing together the lattices x, y, and ...
• (or x y ...) -- create a lattice by branching on x, y, and ...
• (wrd w) -- create the smallest lattice: a single arc labeled with the word w.
• (xn <syn>) -- if the e-structure for the semantic material under the xn feature contains <syn, lat>, return the word lattice lat; otherwise fail.
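For illustration, these lattice-building operations can be realised directly, with a lattice encoded as a small recursive structure. The encoding and names below are ours, not the notation's actual implementation, and the (xn <syn>) lookup into e-structures is omitted.

    # A lattice is ("wrd", word), ("seq", [lattices...]) or ("or", [lattices...]).
    def wrd(w):      return ("wrd", w)
    def seq(*lats):  return ("seq", list(lats))
    def alt(*lats):  return ("or", list(lats))     # `or` is a reserved word in Python

    def paths(lat):
        # Enumerate the word strings encoded by a lattice (small examples only).
        kind, body = lat
        if kind == "wrd":
            return [[body]]
        if kind == "or":
            return [p for sub in body for p in paths(sub)]
        result = [[]]                               # "seq": concatenate the choices
        for sub in body:
            result = [p + q for p in result for q in paths(sub)]
        return result

    # seq(alt(wrd("the"), wrd("a")), wrd("deficit"), wrd("fell")) encodes the two
    # paths "the deficit fell" and "a deficit fell".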
Any failure inside an alternative right-hand side of a rule causes that alternative to fail and be ignored. When all alternatives have been processed, results are collected into a new e-structure. If two or more word lattices can be created from one rule, they are merged with a final or.

Because our grammar is organized around semantic patterns, it nicely concentrates all of the material required to build word lattices. Unfortunately, it forces us to restate the same syntactic constraint in many places. A second problem is that sequential composition does not allow us to insert new words inside old lattices, as needed to generate sentences like John looked it up. We have extended our notation to allow such constructions, but the full solution is to move to a unification-based framework, in which e-structures are replaced by arbitrary feature structures with syn, sem, and lat fields. Of course, this requires extremely efficient handling of the disjunctions inherent in large word lattices.

8 Results

We implemented a medium-sized grammar of English based on the ideas of the previous section, for use in experiments and in the JAPANGLOSS machine translation system. The system converts a semantic input into a word lattice, sending the result to one of three sentence extraction programs:

• RANDOM -- follows a random path through the lattice.
• DEFAULT -- follows the topmost path in the lattice. All alternatives are ordered by the grammar writer, so that the topmost lattice path corresponds to various defaults. In our grammar, defaults include singular noun phrases, the definite article, nominal direct objects, in versus on, active voice, that versus who, the alphabetically first synonym for open-class words, etc.
• STATISTICAL -- a sentence extractor based on word bigram probabilities, as described in Sections 5 and 7.

For evaluation, we compare English outputs from these three sources. We also look at lattice properties and execution speed. Space limitations prevent us from tracing the generation of many long sentences -- we show instead a few short ones. Note that the sample sentences shown for the RANDOM extraction model are not of the quality that would normally be expected from a knowledge-based generator, because of the high degree of ambiguity (unspecified features) in our semantic input. This incompleteness can in turn be attributed in part to the lack of such information in the Japanese source text and in part to our own desire to find out how much of the ambiguity can be automatically resolved with our statistical model.

INPUT
(A / |accuse| :AGENT SHE
   :PATIENT (T / |thieve| :AGENT HE :PATIENT (M / |motorcar|)))

LATTICE CREATED
44 nodes, 217 arcs, 381,440 paths; 59 distinct unigrams, 430 distinct bigrams.

RANDOM EXTRACTION
Her incriminates for him to thieve an automobiles.
She am accusing for him to steal autos.
She impeach that him thieve that there was the auto.

DEFAULT EXTRACTION
She accuses that he steals the auto.

STATISTICAL BIGRAM EXTRACTION
1 She charged that he stole the car.
2 She charged that he stole the cars.
3 She charged that he stole cars.
4 She charged that he stole car.
5 She charges that he stole the car.

TOTAL EXECUTION TIME: 22.77 CPU seconds.

INPUT
(A / |have the quality of being|
   :DOMAIN (P / |procure|
              :AGENT (A2 / |American|)
              :PATIENT (G / |gun, arm|))
   :RANGE (E / |easy, effortless|))

LATTICE CREATED
64 nodes, 229 arcs, 1,345,536 paths; 47 distinct unigrams, 336 distinct bigrams.
RANDOM EXTRACTION
Procurals of guns by Americans were easiness.
A procurements of guns by a Americans will be an effortlessness.
It is easy that Americans procure that there is gun.

DEFAULT EXTRACTION
The procural of the gun by the American is easy.

STATISTICAL BIGRAM EXTRACTION
1 It is easy for Americans to obtain a gun.
2 It will be easy for Americans to obtain a gun.
3 It is easy for Americans to obtain gun.
4 It is easy for American to obtain a gun.
5 It was easy for Americans to obtain a gun.

TOTAL EXECUTION TIME: 23.30 CPU seconds.

INPUT
(H / |have the quality of being|
   :DOMAIN (H2 / |have the quality of being|
               :DOMAIN (E / |eat, take in|
                           :AGENT YOU
                           :PATIENT (P / |poulet|))
               :RANGE (O / |obligatory|))
   :RANGE (P2 / |possible, potential|))

LATTICE CREATED
260 nodes, 703 arcs, 10,734,304 paths; 48 distinct unigrams, 345 distinct bigrams.

RANDOM EXTRACTION
You may be obliged to eat that there was the poulet.
An consumptions of poulet by you may be the requirements.
It might be the requirement that the chicken are eaten by you.

DEFAULT EXTRACTION
That the consumption of the chicken by you is obligatory is possible.

STATISTICAL BIGRAM EXTRACTION
1 You may have to eat chicken.
2 You might have to eat chicken.
3 You may be required to eat chicken.
4 You might be required to eat chicken.
5 You may be obliged to eat chicken.

TOTAL EXECUTION TIME: 58.78 CPU seconds.

A final (abbreviated) example comes from interlingua expressions produced by the semantic analyzer of JAPANGLOSS, involving long sentences characteristic of newspaper text. Note that although the lattice is not much larger than in the previous examples, it now encodes many more paths.

LATTICE CREATED
267 nodes, 726 arcs, 4,831,867,621,815,091,200 paths; 67 distinct unigrams, 332 distinct bigrams.

RANDOM EXTRACTION
Subsidiary on an Japan's of Perkin Elmer Co.'s hold a stocks's majority, and as for a beginnings, production of an stepper and an dry etching devices which were applied for an construction of microcircuit microchip was planned.

STATISTICAL BIGRAM EXTRACTION
Perkin Elmer Co.'s Japanese subsidiary holds majority of stocks, and as for the beginning, production of steppers and dry etching devices that will be used to construct microcircuit chips are planned.

TOTAL EXECUTION TIME: 106.28 CPU seconds.

9 Strengths and Weaknesses

Many-paths generation leads to a new style of incremental grammar building. When dealing with some new construction, we first rather mindlessly overgenerate, providing the grammar with many ways to express the same thing. Then we watch the statistical component make its selections. If the selections are correct, there is no need to refine the grammar.

For example, in our first grammar, we did not make any lexical or grammatical case distinctions. So our lattices included paths like Him saw I as well as He saw me. But the statistical model studiously avoided the bad paths, and in fact, we have yet to see an incorrect case usage from our generator. Likewise, our grammar proposes both his box and the box of he/him, but the former is statistically much more likely. Finally, we have no special rule to prohibit articles and possessives from appearing in the same noun phrase, but the bigram the his is so awful that the null article is always selected in the presence of a possessive pronoun. So we can get away with treating possessive pronouns like regular adjectives, greatly simplifying our grammar.

We have also been able to simplify the generation of morphological variants.
While true irregular forms (e.g., child/children) must be kept in a small exception table, the problem of "multiple regular" patterns usually increases the size of this table dramatically. For example, there are two ways to pluralize a noun ending in -o, but often only one is correct for a given noun (potatoes, but photos). There are many such inflectional and derivational patterns. Our approach is to apply all patterns and insert all results into the word lattice. Fortunately, the statistical model steers clear of sentences containing non-words like potatos and photoes. We thus get by with a very small exception table, and furthermore, our spelling habits automatically adapt to the training corpus.

Most importantly, the two-level generation model allows us to indirectly apply lexical constraints for the selection of open-class words, even though these constraints are not explicitly represented in the generator's lexicon. For example, the selection of a word from a pair of frequently co-occurring adjacent words will automatically create a strong bias for the selection of the other member of the pair, if the latter is compatible with the semantic concept being lexicalized. This type of collocational knowledge, along with additional collocational information based on long- and variable-distance dependencies, has been successfully used in the past to increase the fluency of generated text (Smadja and McKeown, 1991). But, although such collocational information can be extracted automatically, it has to be manually reformulated into the generator's representational framework before it can be used as an additional constraint during pure knowledge-based generation. In contrast, the two-level model provides for the automatic collection and implicit representation of collocational constraints between adjacent words.

In addition, in the absence of external lexical constraints the language model prefers words more typical of and common in the domain, rather than generic or overly specialized or formal alternatives. The result is text that is more fluent and closely simulates the style of the training corpus in this respect. Note for example the choice of obtain in the second example of the previous section in favor of the more formal procure.

Many times, however, the statistical model does not finish the job. A bigram model will happily select a sentence like I only hires men who is good pilots. If we see plenty of output like this, then grammatical work on agreement is needed. Or consider They planned increase in production, where the model drops an article because planned increase is such a frequent bigram. This is a subtle interaction -- is planned a main verb or an adjective? Also, the model prefers short sentences to long ones with the same semantic content, which favors conciseness, but sometimes selects bad n-grams to avoid a longer (but clearer) rendition. This is an interesting problem not encountered in otherwise similar speech recognition models. We are currently investigating solutions to all of these problems in a highly experimental setting.

10 Conclusions

Statistical methods give us a way to address a wide variety of knowledge gaps in generation. They even make it possible to load non-traditional duties onto a generator, such as word sense disambiguation for machine translation. For example, bei in Japanese may mean either American or rice, and sha may mean shrine or company.
If for some reason the analysis of beisha fails to resolve these ambiguities, the generator can pass them along in the lattice it builds, e.g.:

[lattice diagram: the -- (American | rice) -- (shrine | company)]

In this case, the statistical model has a strong preference for the American company, which is nearly always the correct translation.[7]

[7] See also (Dagan and Itai, 1994) for a study of the use of lexical co-occurrences to choose among open-class word translations.

Furthermore, our two-level generation model can implicitly handle both paradigmatic and syntagmatic lexical constraints, leading to the simplification of the generator's grammar and lexicon, and enhancing portability. By retraining the statistical component on a different domain, we can automatically pick up the peculiarities of the sublanguage such as preferences for particular words and collocations. At the same time, we take advantage of the strength of the knowledge-based approach which guarantees grammatical inputs to the statistical component, and reduces the amount of language structure that is to be retrieved from statistics. This approach addresses the problematic aspects of both pure knowledge-based generation (where incomplete knowledge is inevitable) and pure statistical bag generation (Brown et al., 1993) (where the statistical system has no linguistic guidance).

Of course, the results are not perfect. We can improve on them by enhancing the statistical model, or by incorporating more knowledge and constraints in the lattices, possibly using automatic knowledge acquisition methods. One direction we intend to pursue is the rescoring of the top N generated sentences by more expensive (and extensive) methods, incorporating for example stylistic features or explicit knowledge of flexible collocations.

Acknowledgments

We would like to thank Yolanda Gil, Eduard Hovy, Kathleen McKeown, Jacques Robin, Bill Swartout, and the ACL reviewers for helpful comments on earlier versions of this paper. This work was supported in part by the Advanced Research Projects Agency (Order 8073, Contract MDA904-91-C-5224) and by the Department of Defense.

References

John Bateman. 1990. Upper modeling: A level of semantics for natural language processing. In Proc. Fifth International Workshop on Natural Language Generation, pages 54-61, Dawson, PA.

Peter F. Brown, S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2), June.

Yen-Lu Chow and Richard Schwartz. 1989. The N-Best algorithm: An efficient search procedure for finding top N sentence hypotheses. In Proc. DARPA Speech and Natural Language Workshop, pages 199-202.

Kenneth W. Church and William A. Gale. 1991. A comparison of the enhanced Good-Turing and deleted estimation methods for estimating probabilities of English bigrams. Computer Speech and Language, 5:19-54.

Ido Dagan and Alon Itai. 1994. Word sense disambiguation using a second language monolingual corpus. Computational Linguistics, 20(4):563-596.

Laurence Danlos. 1986. The Linguistic Basis of Text Generation. Studies in Natural Language Processing. Cambridge University Press.

Michael Elhadad and Jacques Robin. 1992. Controlling content realization with functional unification grammars. In Robert Dale, Eduard Hovy, Dietmar Rösner, and Oliviero Stock, editors, Aspects of Automated Natural Language Generation, pages 89-104. Springer Verlag.

Michael Elhadad. 1993.
Using Argumentation to Control Lexical Choice: A Unification-Based Implementation. Ph.D. thesis, Computer Science Department, Columbia University, New York.

Karin Harbusch, Gen-ichiro Kikui, and Anne Kilger. 1994. Default handling in incremental generation. In Proc. COLING-94, pages 356-362, Kyoto, Japan.

Vasileios Hatzivassiloglou and Kevin Knight. 1995. Unification-based glossing. In Proc. IJCAI.

Philip J. Hayes. 1981. A construction-specific approach to focused interaction in flexible parsing. In Proc. ACL, pages 149-152.

Annette Herskovits. 1986. Language and spatial cognition: an interdisciplinary study of the prepositions of English. Studies in Natural Language Processing. Cambridge University Press.

Kevin Knight and Ishwar Chander. 1994. Automated postediting of documents. In Proc. AAAI.

Kevin Knight and Steve K. Luk. 1994. Building a large-scale knowledge base for machine translation. In Proc. AAAI.

Kevin Knight, Ishwar Chander, Matthew Haines, Vasileios Hatzivassiloglou, Eduard Hovy, Masayo Iida, Steve K. Luk, Akitoshi Okumura, Richard Whitney, and Kenji Yamada. 1994. Integrating knowledge bases and statistics in MT. In Proc. Conference of the Association for Machine Translation in the Americas (AMTA).

Kevin Knight, Ishwar Chander, Matthew Haines, Vasileios Hatzivassiloglou, Eduard Hovy, Masayo Iida, Steve K. Luk, Richard Whitney, and Kenji Yamada. 1995. Filling knowledge gaps in a broad-coverage MT system. In Proc. IJCAI.

Karen Kukich, K. McKeown, J. Shaw, J. Robin, N. Morgan, and J. Phillips. 1994. User-needs analysis and design methodology for an automated document generator. In A. Zampolli, N. Calzolari, and M. Palmer, editors, Current Issues in Computational Linguistics: In Honour of Don Walker. Kluwer Academic Press, Boston.

Karen Kukich. 1988. Fluency in natural language reports. In David D. McDonald and Leonard Bolc, editors, Natural Language Generation Systems. Springer-Verlag, Berlin.

Alon Lavie. 1994. An integrated heuristic scheme for partial parse evaluation. In Proc. ACL (student session).

Marie W. Meteer, D. D. McDonald, S. D. Anderson, D. Forster, L. S. Gay, A. K. Huettner, and P. Sibun. 1987. Mumble-86: Design and implementation. Technical Report COINS 87-87, University of Massachusetts at Amherst, Amherst, MA.

George A. Miller. 1990. WordNet: An on-line lexical database. International Journal of Lexicography, 3(4). (Special Issue).

Penman. 1989. The Penman documentation. Technical report, USC/Information Sciences Institute.

Paul Procter, editor. 1978. Longman Dictionary of Contemporary English. Longman, Essex, UK.

Jacques Robin. 1995. Revision-Based Generation of Natural Language Summaries Providing Historical Background: Corpus-based Analysis, Design, Implementation, and Evaluation. Ph.D. thesis, Computer Science Department, Columbia University, New York, NY. Also, Technical Report CU-CS-034-94.

Frank Smadja and Kathleen R. McKeown. 1991. Using collocations for language generation. Computational Intelligence, 7(4):229-239, December.

M. Tomita and E. Nyberg. 1988. The GenKit and Transformation Kit User's Guide. Technical Report CMU-CMT-88-MEMO, Center for Machine Translation, Carnegie Mellon University.

A. Waibel and K. F. Lee, editors. 1990. Readings in Speech Recognition. Morgan Kaufmann, San Mateo, CA.

R. Weischedel and J. Black. 1980. Responding to potentially unparseable sentences. Am. J. Computational Linguistics, 6.
An Efficient Generation Algorithm for Lexicalist MT

Victor Poznański, John L. Beaven & Pete Whitelock*
SHARP Laboratories of Europe Ltd.
Oxford Science Park, Oxford OX4 4GA
United Kingdom
{vp,jlb,pete}@sharp.co.uk

* We wish to thank our colleagues Kerima Benkerimi, David Elworthy, Peter Gibbins, Ian Johnson, Andrew Kay and Antonio Sanfilippo at SLE, and our anonymous reviewers for useful feedback and discussions on the research reported here and on earlier drafts of this paper.

Abstract

The lexicalist approach to Machine Translation offers significant advantages in the development of linguistic descriptions. However, the Shake-and-Bake generation algorithm of (Whitelock, 1992) is NP-complete. We present a polynomial time algorithm for lexicalist MT generation provided that sufficient information can be transferred to ensure more determinism.

1 Introduction

Lexicalist approaches to MT, particularly those incorporating the technique of Shake-and-Bake generation (Beaven, 1992a; Beaven, 1992b; Whitelock, 1994), combine the linguistic advantages of transfer (Arnold et al., 1988; Allegranza et al., 1991) and interlingual (Nirenburg et al., 1992; Dorr, 1993) approaches. Unfortunately, the generation algorithms described to date have been intractable. In this paper, we describe an alternative generation component which has polynomial time complexity.

Shake-and-Bake translation assumes a source grammar, a target grammar and a bilingual dictionary which relates translationally equivalent sets of lexical signs, carrying across the semantic dependencies established by the source language analysis stage into the target language generation stage.

The translation process consists of three phases:

1. A parsing phase, which outputs a multiset, or bag, of source language signs instantiated with sufficiently rich linguistic information established by the parse to ensure adequate translations.
2. A lexical-semantic transfer phase which employs the bilingual dictionary to map the bag of instantiated source signs onto a bag of target language signs.
3. A generation phase which imposes an order on the bag of target signs which is guaranteed grammatical according to the monolingual target grammar. This ordering must respect the linguistic constraints which have been transferred into the target signs.

The Shake-and-Bake generation algorithm of (Whitelock, 1992) combines target language signs using the technique known as generate-and-test. In effect, an arbitrary permutation of signs is input to a shift-reduce parser which tests them for grammatical well-formedness. If they are well-formed, the system halts indicating success. If not, another permutation is tried and the process repeated. The complexity of this algorithm is O(n!) because all permutations (n! for an input of size n) may have to be explored to find the correct answer, and indeed must be explored in order to verify that there is no answer.

Proponents of the Shake-and-Bake approach have employed various techniques to improve generation efficiency. For example, (Beaven, 1992a) employs a chart to avoid recalculating the same combinations of signs more than once during testing, and (Popowich, 1994) proposes a more general technique for storing which rule applications have been attempted; (Brew, 1992) avoids certain pathological cases by employing global constraints on the solution space; researchers such as (Brown et al., 1990) and (Chen and Lee, 1994) provide a system for bag generation that is heuristically guided by probabilities. However, none of these approaches is guaranteed to avoid protracted search times if an exact answer is required, because bag generation is NP-complete (Brew, 1992).

Our novel generation algorithm has polynomial complexity (O(n^4)). The reduction in theoretical complexity is achieved by placing constraints on the power of the target grammar when operating on instantiated signs, and by using a more restrictive data structure than a bag, which we call a target language normalised commutative bracketing
For example, (Beaven, 1992a) employs a chart to avoid recalculating the same combina- tions of signs more than once during testing, and (Popowich, 1994) proposes a more general technique for storing which rule applications have been at- tempted; (Brew, 1992) avoids certain pathological cases by employing global constraints on the solu- tion space; researchers such as (Brown et al., 1990) and (Chen and Lee, 1994) provide a system for bag generation that is heuristically guided by probabil- ities. However, none of these approaches is guar- anteed to avoid protracted search times if an exact answer is required, because bag generation is NP- complete (Brew, 1992). Our novel generation algorithm has polynomial complexity (O(n4)). The reduction in theoretical complexity is achieved by placing constraints on the power of the target grammar when operating on instantiated signs, and by using a more restric- tive data structure than a bag, which we call a target language normalised commutative bracketing 261 (TNCB). A TNCB records dominance information from derivations and is amenable to incremental up- dates. This allows us to employ a greedy algorithm to refine the structure progressively until either a target constituent is found and generation has suc- ceeded or no more changes can be made and gener- ation has failed. In the following sections, we will sketch the basic algorithm, consider how to provide it with an initial guess, and provide an informal proof of its efficiency. 2 A Greedy Incremental Generation Algorithm We begin by describing the fundamentals of a greedy incremental generation algorithm. The cruciM data structure that it employs is the TNCB. We give some definitions, state some key assumptions about suit- able TNCBs for generation, and then describe the algorithm itself. 2.1 TNCBs We assume a sign-based grammar with binary rules, each of which may be used to combine two signs by unifying them with the daughter categories and returning the mother. Combination is the commuta- tive equivalent of rule application; the linear order- ing of the daughters that leads to successful rule ap- plication determines the orthography of the mother. Whitelock's Shake-and-Bake generation algorithm attempts to arrange the bag of target signs until a grammatical ordering (an ordering which allows all of the signs to combine to yield a single sign) is found. However, the target derivation information itself is not used to assist the algorithm. Even in (Beaven, 1992a), the derivation information is used simply to cache previous results to avoid exact re- computation at a later stage, not to improve on pre- vious guesses. The reason why we believe such im- provement is possible is that, given adequate infor- mation from the previous stages, two target signs cannot combine by accident; they must do so be- cause the underlying semantics within the signs li- censes it. If the linguistic data that two signs contain allows them to combine, it is because they are providing a semantics which might later become more spec- ified. For example, consider the bag of signs that have been derived through the Shake-and-Bake pro- cess which represent the phrase: (1) The big brown dog Now, since the determiner and adjectives all mod- ify the same noun, most grammars will allow us to construct the phrases: (2) The dog (3) The big dog (4) The brown dog as well as the 'correct' one. 
Generation will fail if all signs in the bag are not eventually incorporated in the final result, but in the naive algorithm, the intervening computation may be intractable. In the algorithm presented here, we start from the observation that the phrases (2) to (4) are not incorrect semantically; they are simply under-specifications of (1). We take advantage of this by recording the constituents that have combined within the TNCB, which is designed to allow further constituents to be incorporated with minimal recomputation.

A TNCB is composed of a sign, and a history of how it was derived from its children. The structure is essentially a binary derivation tree whose children are unordered. Concretely, it is either NIL, or a triple:

  TNCB  = NIL | Value × TNCB × TNCB
  Value = Sign | INCONSISTENT | UNDETERMINED

The second and third items of the TNCB triple are the child TNCBs. The value of a TNCB is the sign that is formed from the combination of its children, or INCONSISTENT, representing the fact that they cannot grammatically combine, or UNDETERMINED, i.e. it has not yet been established whether the signs combine.

Undetermined TNCBs are commutative, e.g. they do not distinguish between the structures shown in Figure 1.

[Figure 1: Equivalent TNCBs]

In section 3 we will see that this property is important when starting up the generation process.

Let us introduce some terminology. A TNCB is

• well-formed iff its value is a sign,
• ill-formed iff its value is INCONSISTENT,
• undetermined (and its value is UNDETERMINED) iff it has not been demonstrated whether it is well-formed or ill-formed.
• maximal iff it is well-formed and its parent (if it has one) is ill-formed. In other words, a maximal TNCB is a largest well-formed component of a TNCB.

Since TNCBs are tree-like structures, if a TNCB is undetermined or ill-formed then so are all of its ancestors (the TNCBs that contain it).

We define five operations on a TNCB. The first three are used to define the fourth transformation (move) which improves ill-formed TNCBs. The fifth is used to establish the well-formedness of undetermined nodes. In the diagrams, we use a cross to represent ill-formed nodes and a black circle to represent undetermined ones.

Deletion: A maximal TNCB can be deleted from its current position. The structure above it must be adjusted in order to maintain binary branching. In figure 2, we see that when node 4 is deleted, so is its parent node 3. The new node 6, representing the combination of 2 and 5, is marked undetermined.

[Figure 2: 4 is deleted, raising 5]

Conjunction: A maximal TNCB can be conjoined with another maximal TNCB if they may be combined by rule. In figure 3, it can be seen how the maximal TNCB composed of nodes 1, 2, and 3 is conjoined with the maximal TNCB composed of nodes 4, 5 and 6 giving the TNCB made up of nodes 1 to 7. The new node, 7, is well-formed.

[Figure 3: 1 is conjoined with 4 giving 7]

Adjunction: A maximal TNCB can be inserted inside a maximal TNCB, i.e. conjoined with a non-maximal TNCB, where the combination is licensed by rule. In figure 4, the TNCB composed of nodes 1, 2, and 3 is inserted inside the TNCB composed of nodes 4, 5 and 6. All nodes (only 8 in figure 4) which dominate the node corresponding to the new combination (node 7) must be marked undetermined -- such nodes are said to be disrupted.
[Figure 4: 1 is adjoined next to 6 inside 4]

Movement: This is a combination of a deletion with a subsequent conjunction or adjunction. In figure 5, we illustrate a move via conjunction. In the left-hand figure, we assume we wish to move the maximal TNCB 4 next to the maximal TNCB 7. This first involves deleting TNCB 4 (noting it), and raising node 3 to replace node 2. We then introduce node 8 above node 7, and make both nodes 7 and 4 its children. Note that during deletion, we remove a surplus node (node 2 in this case) and during conjunction or adjunction we introduce a new one (node 8 in this case), thus maintaining the same number of nodes in the tree.

[Figure 5: A conjoining move from 4 to 7]

Evaluation: After a movement, the TNCB is undetermined as demonstrated in figure 5. The signs of the affected parts must be recalculated by combining the recursively evaluated child TNCBs.

2.2 Suitable Grammars

The Shake-and-Bake system of (Whitelock, 1992) employs a bag generation algorithm because it is assumed that the input to the generator is no more than a collection of instantiated signs. Full-scale bag generation is not necessary because sufficient information can be transferred from the source language to severely constrain the subsequent search during generation. The two properties required of TNCBs (and hence the target grammars with instantiated lexical signs) are:

1. Precedence Monotonicity. The order of the orthographies of two combining signs in the orthography of the result must be determinate -- it must not depend on any subsequent combination that the result may undergo. This constraint says that if one constituent fails to combine with another, no permutation of the elements making up either would render the combination possible. This allows bottom-up evaluation to occur in linear time. In practice, this restriction requires that sufficiently rich information be transferred from the previous translation stages to ensure that sign combination is deterministic.

2. Dominance Monotonicity. If a maximal TNCB is adjoined at the highest possible place inside another TNCB, the result will be well-formed after it is re-evaluated. Adjunction is only attempted if conjunction fails (in fact conjunction is merely a special case of adjunction in which no nodes are disrupted); an adjunction which disrupts i nodes is attempted before one which disrupts i + 1 nodes. Dominance monotonicity merely requires all nodes that are disrupted under this top-down control regime to be well-formed when re-evaluated. We will see that this will ensure the termination of the generation algorithm within n - 1 steps, where n is the number of lexical signs input to the process.

We are currently investigating the mathematical characterisation of grammars and instantiated signs that obey these constraints. So far, we have not found these restrictions particularly problematic.

2.3 The Generation Algorithm

The generator cycles through two phases: a test phase and a rewrite phase. Imagine a bag of signs, corresponding to "the big brown dog barked", has been passed to the generation phase. The first step in the generation process is to convert it into some arbitrary TNCB structure, say the one in figure 6. In order to verify whether this structure is valid, we evaluate the TNCB. This is the test phase. If the TNCB evaluates successfully, the orthography of its value is the desired result. If not, we enter the rewrite phase.
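A minimal rendering of the TNCB triple of section 2.1 and of the bottom-up evaluation performed in the test phase might look as follows. This sketch is ours, not the SLEMaT implementation: it assumes a grammar-supplied function combine(sign1, sign2) that returns the mother sign of a successful rule application (trying both daughter orders, as precedence monotonicity allows) or None.

    INCONSISTENT, UNDETERMINED = "INCONSISTENT", "UNDETERMINED"

    class TNCB:
        def __init__(self, value, left=None, right=None):
            # Leaves carry a lexical sign; interior nodes start out UNDETERMINED.
            self.value, self.left, self.right = value, left, right

        def well_formed(self):
            return self.value not in (INCONSISTENT, UNDETERMINED)

        def evaluate(self, combine):
            # Test phase: recompute the value of every undetermined node bottom-up.
            if self.left is None:                    # leaf node
                return self.value
            if self.value == UNDETERMINED:
                l = self.left.evaluate(combine)
                r = self.right.evaluate(combine)
                if l == INCONSISTENT or r == INCONSISTENT:
                    self.value = INCONSISTENT
                else:
                    self.value = combine(l, r) or INCONSISTENT
            return self.value

Since each of the n - 1 interior nodes is visited once, and combine is tried in at most two daughter orders, the test phase runs in linear time, as argued in section 4.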
If we were continuing in the spirit of the original Shake-and-Bake generation process, we would now form some arbitrary mutation of the TNCB and retest, repeating this test-rewrite cycle until we either found a well-formed TNCB or failed. However, this would also be intractable due to the undirectedness of the search through the vast number of possibilities. Given the added derivation information contained within TNCBs and the properties mentioned above, we can direct this search by incrementally improving on previously evaluated results.

[Figure 6: An arbitrary right-branching TNCB structure]

We enter the rewrite phase, then, with an ill-formed TNCB. Each move operation must improve it. Let us see why this is so. The move operation maintains the same number of nodes in the tree. The deletion of a maximal TNCB removes two ill-formed nodes (figure 2). At the deletion site, a new undetermined node is created, which may or may not be ill-formed. At the destination site of the movement (whether conjunction or adjunction), a new well-formed node is created. The ancestors of the new well-formed node will be at least as well-formed as they were prior to the movement. We can verify this by case:

1. When two maximal TNCBs are conjoined, nodes dominating the new node, which were previously ill-formed, become undetermined. When re-evaluated, they may remain ill-formed or some may now become well-formed.

2. When we adjoin a maximal TNCB within another TNCB, nodes dominating the new well-formed node are disrupted. By dominance monotonicity, all nodes which were disrupted by the adjunction must become well-formed after re-evaluation. And nodes dominating the maximal disrupted node, which were previously ill-formed, may become well-formed after re-evaluation.

We thus see that rewriting and re-evaluating must improve the TNCB.

Let us further consider the contrived worst-case starting point provided in figure 6. After the test phase, we discover that every single interior node is ill-formed. We then scan the TNCB, say top-down from left to right, looking for a maximal TNCB to move. In this case, the first move will be PAST to bark, by conjunction (figure 7). Once again, the test phase fails to provide a well-formed TNCB, so we repeat the rewrite phase, this time finding dog to conjoin with the (figure 8 shows the state just after the second pass through the test phase).

[Figure 7: The initial guess]

[Figure 8: The TNCB after "PAST" is moved to "bark"]

After further testing, we again re-enter the rewrite phase and this time note that brown can be inserted in the maximal TNCB the dog barked adjoined with dog (figure 9). Note how, after combining dog and the, the parent sign reflects the correct orthography even though they did not have the correct linear precedence.

[Figure 9: The TNCB after "dog" is moved to "the"]

After finding that big may not be conjoined with the brown dog, we try to adjoin it within the latter. Since it will combine with brown dog, no adjunction to a lower TNCB is attempted. The final result is the TNCB in figure 11, whose orthography is "the big brown dog barked".

We thus see that during generation, we formed a basic constituent, the dog, and incrementally refined it by adjoining the modifiers in place. At the heart of this approach is that, once well-formed, constituents can only grow; they can never be dismantled.
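The complete test-rewrite cycle just illustrated can be sketched as a greedy loop over this structure. The helper routines assumed here (maximal_tncbs, returning the maximal well-formed subtrees, and try_move, performing a deletion followed by a conjunction or adjunction and re-evaluating the affected nodes) correspond to the operations of section 2.1; the code illustrates the control regime described above rather than the actual SLEMaT generator.

    def generate(root, combine, n_signs):
        # Greedy test-rewrite cycle: at most n - 1 rewrites are ever needed.
        for _ in range(n_signs):
            if root.evaluate(combine) not in (INCONSISTENT, UNDETERMINED):
                return root.value                  # success: root sign carries the orthography
            moved = False
            for source in maximal_tncbs(root):     # scan, say, top-down and left to right
                for target in maximal_tncbs(root):
                    if source is not target and try_move(root, source, target, combine):
                        moved = True               # conjunction is tried before adjunction
                        break
                if moved:
                    break
            if not moved:
                return None                        # no move improves the TNCB: fail
        return None

Because each successful move improves the TNCB, the loop terminates within n - 1 iterations, and maximal well-formed fragments survive even when generation fails.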
Even if generation ultimately fails, maximal well- formed fragments will have been built; the latter may be presented to the user, allowing graceful degradation of output quality. the b~ PAST bXark d'og b~o.n ~he ~'bfg, Figure 10: The TNCB after "brown" is moved to "dog" the big brown dog barked PA k he Figure 11: The final TNCB after "big" is moved to "brown dog" 3 Initialising the Generator Considering the algorithm described above, we note that the number of rewrites necessary to repair the initial guess is no more than the number of ill-formed TNCBs. This can never exceed the number of inte- rior nodes of the TNCB formed from n lexical signs (i.e. n-2). Consequently, the better formed the ini- tial TNCB used by the generator, the fewer the num- ber of rewrites required to complete generation. In the last section, we deliberately illustrated an initial guess which was as bad as possible. In this section, we consider a heuristic for producing a motivated guess for the initial TNCB. Consider the TNCBs in figure 1. If we interpret the S, O and V as Subject, Object and Verb we can observe an equivalence between the structures with the bracketings: (S (V O)), (S (O V)), ((V O) S), and ((O V) S). The implication of this equivalence is that if, say, we are translating into a (S (V O)) language from a head-finM language and have iso- morphic dominance structures between the source and target parses, then simply mirroring the source parse structure in the initial target TNCB will pro- vide a correct initiM guess. For example, the English sentence (5): (5) the book is red 265 has a corresponding Japanese equivalent (6): (6) ((hon wa) (akai desu)) ((book TOP) (red is)) If we mirror the Japanese bracketing structure in English to form the initial TNCB, we obtain: ((book the) (red is)). This will produce the correct answer in the test phase of generation without the need to rewrite at all. Even if there is not an exact isomorphism between the source and target commutative bracketings, the first guess is still reasonable as long as the majority of child commutative bracketings in the target lan- guage are isomorphic with their equivalents in the source language. Consider the French sentence: (7) ((le ((grandchien) brun)) aboya) (8) ((the ((big dog) brown)) barked) The TNCB implied by the bracketing in (8) is equivalent to that in figure 10 and requires just one rewrite in order to make it well-formed. We thus see how the TNCBs can mirror the dominance in- formation in the source language parse in order to furnish the generator with a good initial guess. On the other hand, no matter how the SL and TL struc- tures differ, the algorithm will still operate correctly with polynomial complexity. Structural transfer can be incorporated to improve the efficiency of genera- tion, but it is never necessary for correctness or even tractability. 4 The Complexity of the Generator The theoretical complexity of the generator is O (n4), where n is the size of the input. We give an informal argument for this. The complexity of the test phase is the number of evaluations that have to be made. Each node must be tested no more than twice in the worst case (due to precedence monotonicity), as one might have to try to combine its children in either direction according to the grammar rules. There are always exactly n - 1 non-leaf nodes, so the complex- ity of the test phase is O(n). The complexity of the rewrite phase is that of locating the two TNCBs to be combined. 
In the worst case, we can imagine picking an arbitrary child TNCB (O(n)) and then trying to find another one with which it combines (O(n)). The complexity of this phase is therefore the product of the picking and combining complex- ities, i.e. O(n2). The combined complexity of the test-rewrite cycle is thus O(n3). Now, in section 3, we argued that no more than n - 1 rewrites would ever be necessary, thus the overall complexity of gen- eration (even when no solution is found) is O(n4). Average case complexity is dependent on the qual- ity of the first guess, how rapidly the TNCB struc- ture is actually improved, and to what extent the TNCB must be re-evaluated after rewriting. In the SLEMaT system (Poznarlski et al., 1993), we have tried to form a good initial guess by mirroring the source structure in the target TNCB, and allowing some local structural modifications in the bilingual equivalences. Structural transfer operations only affect the ef- ficiency and not the functionality of generation. Transfer specifications may be incrementally refined and empirically tested for efficiency. Since complete specification of transfer operations is not required for correct generation of grammatical target text, the version of Shake-and-Bake translation presented here maintains its advantage over traditional trans- fer models, in this respect. The monotonicity constraints, on the other hand, might constitute a dilution of the Shake-and-Bake ideal of independent grammars. For instance, prece- dence monotonicity requires that the status of a clause (strictly, its lexical head) as main or sub- ordinate has to be transferred into German. It is not that the transfer of information per se compro- mises the ideal -- such information must often ap- pear in transfer entries to avoid grammatical but incorrect translation (e.g. a great man translated as un homme grand). The problem is justifying the main/subordinate distinction in every language that we might wish to translate into German. This distinction can be justified monolingually for the other languages that we treat (English, French, and Japanese). Whether the constraints will ultimately require monolingual grammars to be enriched with entirely unmotivated features will only become clear as translation coverage is extended and new lan- guage pairs are added. 5 Conclusion We have presented a polynomial complexity gener- ation algorithm which can form part of any Shake- and-Bake style MT system with suitable grammars and information transfer. The transfer module is free to attempt structural transfer in order to pro- duce the best possible first guess. We tested a TNCB-based generator in the SLEMaT MT sys- tem with the pathological cases described in (Brew, 1992) against Whitelock's original generation algo- rithm, and have obtained speed improvements of several orders of magnitude. Somewhat more sur- prisingly, even for short sentences which were not problematic for Whitelock's system, the generation component has performed consistently better. References V. Allegranza, P. Bennett, J. Durand, F. van Eynde, L. Humphreys, P. Schmidt, and E. Steiner. 1991. Linguistics for Machine Translation: The Eurotra Linguistic Specifications. In C. Copeland, J. Du- rand, S. Krauwer, and B. Maegaard, editors, The Eurotra Formal Specifications. Studies in Machine 266 Translation and Natural Language Processing 2, pages 15-124. Office for Official Publications of the European Communities. D. Arnold, S. Krauwer, L. des Tombe, and L. Sadler. 1988. 
'Relaxed' Compositionality in Machine Translation. In Second International Conference on Theoretical and Methodological Issues in Ma- chine Translation of Natural Languages, Carnegie Mellon Univ, Pittsburgh. John L. Beaven. 1992a. Lexicalist Unification-based Machine Translation. Ph.D. thesis, University of Edinburgh, Edinburgh. John L. Beaven. 1992b. Shake-and-Bake Machine Translation. In Proceedings of COLING 92, pages 602-609, Nantes, France. Chris Brew. 1992. Letting the Cat out of the Bag: Generation for Shake-and-Bake MT. In Proceed- ings of COLING 92, pages 29-34, Nantes, France. Peter F. Brown, John Cocke, A Della Pietra, Vin- cent J. Della Pietra, Fredrick Jelinek, John D. Lafferty, Robert L. Mercer, and Paul S. Roossin. 1990. A Statistical Approach to Machine Trans- lation. Computational Linguistics, 16(2):79-85, June. Hsin-Hsi Chen and Yue-Shi Lee. 1994. A Correc- tive Training Algorithm for Adaptive Learning in Bag Generation. In International Conference on New Methods in Language Processing (NeMLaP), pages 248-254, Manchester, UK. UMIST. Bonnie Jean Dorr. 1993. Machine Translation: A View from the Lexicon. Artificial Intelligence Se- ries. The MIT Press, Cambridge, Mass. Sergei Nirenburg, Jaime Carbonell, Masaru Tomita, and Kenneth Goodman. 1992. Machine Trans- lation: A Knowledge-Based Approach. Morgan Kaaufmann, San Mateo, CA. Fred Popowich. 1994. Improving the Efficiency of a Generation Algorithm for Shake and Bake Machine Translation using Head-Driven Phrase Structure Grammar. TechnicM Report CMPT- TR 94-07, School of Computing Science, Simon Fraser University, Burnaby, British Columbia, CANADA V5A 1S6. V. Poznariski, John L. Beaven, and P. Whitelock. 1993. The Design of SLEMaT Mk II. Technical Report IT-1993-19, Sharp Laboratories of Europe, LTD, Edmund Halley Road, Oxford Science Park, Oxford OX4 4GA, July. P. Whitelock. 1992. Shake and Bake Translation. In Proceedings of COLING 92, pages 610-616, Nantes, France. P. Whitelock. 1994. Shake-and-Bake Translation. In C. J. Rupp, M. A. Rosner, and R. L. Johnson, editors, Constraints, Language and Computation, pages 339-359. Academic Press, London. 267 | 1995 | 35 |
Some Novel Applications of Explanation-Based Learning to Parsing Lexicalized Tree-Adjoining Grammars" B. Srinivas and Aravind K. Joshi Department of Computer and Information Science University of Pennsylvania Philadelphia, PA 19104, USA {srini, joshi} @linc.cis.upenn.edu Abstract In this paper we present some novel ap- plications of Explanation-Based Learning (EBL) technique to parsing Lexicalized Tree-Adjoining grammars. The novel as- pects are (a) immediate generalization of parses in the training set, (b) generaliza- tion over recursive structures and (c) rep- resentation of generalized parses as Finite State Transducers. A highly impoverished parser called a "stapler" has also been in- troduced. We present experimental results using EBL for different corpora and archi- tectures to show the effectiveness of our ap- proach. 1 Introduction In this paper we present some novel applications of the so-called Explanation-Based Learning technique (EBL) to parsing Lexicalized Tree-Adjoining gram- mars (LTAG). EBL techniques were originally intro- duced in the AI literature by (Mitchell et al., 1986; Minton, 1988; van Harmelen and Bundy, 1988). The main idea of EBL is to keep track of problems solved in the past and to replay those solutions to solve new but somewhat similar problems in the future. Although put in these general terms the approach sounds attractive, it is by no means clear that EBL will actually improve the performance of the system using it, an aspect which is of great interest to us here. Rayner (1988) was the first to investigate this technique in the context of natural language pars- ing. Seen as an EBL problem, the parse of a sin- gle sentence represents an explanation of why the sentence is a part of the language defined by the grammar. Parsing new sentences amounts to find- ing analogous explanations from the training sen- tences. As a special case of EBL, Samuelsson and *This work was partiaJly supported by ARC) grant DAAL03-89-0031, ARPA grant N00014-90-J-1863, NSF STC grsmt DIR-8920230, and Ben Franklin Partnership Program (PA) gremt 93S.3078C-6 Rayner (1991) specialize a grammar for the ATIS domain by storing chunks of the parse trees present in a treebank of parsed examples. The idea is to reparse the training examples by letting the parse tree drive the rule expansion process and halting the expansion of a specialized rule if the current node meets a 'tree-cutting' criteria. However, the prob- lem of specifying an optimal 'tree-cutting' criteria was not addressed in this work. Samuelsson (1994) used the information-theoretic measure of entropy to derive the appropriate sized tree chunks automati- cally. Neumann (1994) also attempts to specialize a grammar given a training corpus of parsed exam- pies by generalizing the parse for each sentence and storing the generalized phrasal derivations under a suitable index. Although our work can be considered to be in this general direction, it is distinct in that it ex- ploits some of the key properties of LTAG to (a) achieve an immediate generalization of parses in the training set of sentences, (b) achieve an additional level of generalization of the parses in the training set, thereby dealing with test sentences which are not necessarily of the same length as the training sentences and (c) represent the set of generalized parses as a finite state transducer (FST), which is the first such use of FST in the context of EBL, to the best of our knowledge. 
Later in the paper, we will make some additional comments on the relation- ship between our approach and some of the earlier approaches. In addition to these special aspects of our work, we will present experimental results evaluating the effectiveness of our approach on more than one kind of corpus. We also introduce a device called a "sta- pler", a considerably impoverished parser, whose only job is to do term unification and compute alter- nate attachments for modifiers. We achieve substan- tial speed-up by the use of "stapler" in combination with the output of the FST. The paper is organized as follows. In Section 2 we provide a brief introduction to LTAG with the help of an example. In Section 3 we discuss our approach to using EBL and the advantages provided 268 (a) (b) Figure 1: Substitution and Adjunction in LTAG ~ t~ ~ b U W by LTAG. The FST representation used for EBL is illustrated in Section 4. In Section 5 we present the "stapler" in some detail. The results of some of the experiments based on our approach are presented in Section 6. In Section 7 we discuss the relevance of our approach to other lexicalized grammars. In Section 8 we conclude with some directions for future work. 2 Lexicalized Tree-Adjoining Grammar Lexicalized Tree-Adjoining Grammar (LTAG) (Sch- abes et al., 1988; Schabes, 1990) consists of ELE- MENTARY TREES, with each elementary tree hav- ing a lexical item (anchor) on its frontier. An el- ementary tree serves as a complex description of the anchor and provides a domain of locality over which the anchor can specify syntactic and semantic (predicate-argument) constraints. Elementary trees are of two kinds - (a) INITIAL TREES and (b) AUX- ILIARY TREES. Nodes on the frontier of initial trees are marked as substitution sites by a '~'. Exactly one node on the frontier of an auxiliary tree, whose label matches the label of the root of the tree, is marked as a foot node by a '.'; the other nodes on the frontier of an auxiliary tree are marked as substitution sites. El- ementary trees are combined by Substitution and Adjunction operations. Each node of an elementary tree is associated with the top and the bottom feature structures (FS). The bottom FS contains information relating to the sub- tree rooted at the node, and the top FS contains information relating to the supertree at that node. 1 The features may get their values from three differ- ent sources such as the morphology of anchor, the structure of the tree itself, or by unification during the derivation process. FS are manipulated by sub- stitution and adjunction as shown in Figure 1. The initial trees (as) and auxiliary trees (/3s) for the sentence show me the flights from Boston to Philadelphia are shown in Figure 2. Due to the lim- ited space, we have shown only the features on the al tree. The result of combining the elementary trees 1Nodes marked for substitution are associated with only the top FS. shown in Figure 2 is the derived tree, shown in Fig- ure 2(a). The process of combining the elementary trees to yield a parse of the sentence is represented by the derivation tree, shown in Figure 2(b). The nodes of the derivation tree are the tree names that are anchored by the appropriate lexical items. The combining operation is indicated by the nature of the arcs-broken line for substitution and bold line for adjunction-while the address of the operation is indicated as part of the node label. 
The derivation tree can also be interpreted as a dependency tree 2 with unlabeled arcs between words of the sentence as shown in Figure 2(c). Elementary trees of LTAG are the domains for specifying dependencies. Recursive structures are specified via the auxiliary trees. The three aspects of LTAG - (a) lexicalization, (b)-extended domain of locality and (c) factoring of recursion, provide a nat- ural means for generalization during the EBL pro- ce88. 3 Overview of our approach to using EBL We are pursuing the EBL approach in the context of a wide-coverage grammar development system called XTAG (Doran et al., 1994). The XTAG sys- tem consists of a morphological analyzer, a part-of- speech tagger, a wide-coverage LTAG English gram- mar, a predictive left-to-right Early-style parser for LTAG (Schabes, 1990) and an X-windows interface for grammar development (Paroubek et al., 1992). Figure 3 shows a flowchart of the XTAG system. The input sentence is subjected to morphological analysis and is parts-of-speech tagged before being sent to the parser. The parser retrieves the elemen- tary trees that the words of the sentence anchor and combines them by adjunction and substitution op- erations to derive a parse of the sentence. Given this context, the training phase of the EBL process involves generalizing the derivation trees generated by XTAG for a training sentence and stor- ing these generalized parses in the generalized parse 2There axe some differences between derivation trees and conventional dependency trees. However we will not discuss these differences in this paper as they are not relevant to the present work. 269 I, rl I • ~..u.,,,(] ,,,,,(-.,-1 ~.~,-] I dm~ NIP I N I 14 I D I eke C~3 NIP i)elP ~ N I I$&eld~ ~4 NP r ~t* Pr A I P NP~, N I I J~l ~5 A I~ NPI, I N I ~6 fr • ¥ ~ lqr N lef I~ me llrlr ~ • If le • If D I¢ I I I I I p ~ ~ N --....u I I (a) al [daow] ~Z [reel (2.2) ~ (n~td (~L.~) . a3[~] O) ¢~ (try] (0) p2 (to] (e) tb, tr~ to ' ' I I ! ! a5 (~o1 (2.2) ~ (l~niladdp~Ja] (2.2) ~ Pbl~ael: (b) (c) Figure 2: (as and/~s) Elementary trees, (a) Derived Tree, (b) Derivation Tree, and (c) Dependency tree for the sentence: show me the flights from Boston to Philadelphia. 270 Input Segtcnce t t i J L -I P.O.SBb~ 11 , , Tree ,?peb¢tion Derivation Structm~ Figure 3: Flowchart of the XTAG system Iwalfag~ - - ° ~ - = o ....... o ....... ~ ...... J Figure 4: Flowchart of the XTAG system with the EBL component database under an index computed from the mor- phological features of the sentence. The application phase of EBL is shown in the flowchart in Figure 4. An index using the morphological features of the words in the input sentence is computed. Using this index, a set of generalized parses is retrieved from the generalized parse database created in the train- ing phase. If the retrieval fails to yield any gener- alized parse then the input sentence is parsed using the full parser. However, if the retrieval succeeds then the generalized parses are input to the "sta- pler". Section 5 provides a description of the "sta- pler". 3.1 Implications of LTAG representation for EBL An LTAG parse of a sentence can be seen as a se- quence of elementary trees associated with the lexi- cal items of the sentence along with substitution and adjunction links among the elementary trees. Also, the feature values in the feature structures of each node of every elementary tree are instantiated by the parsing process. 
Given an LTAG parse, the general- ization of the parse is truly immediate in that a gen- eralized parse is obtained by (a) uninstantiating the particular lexical items that anchor the individual el- ementary trees in the parse and (h) uninstantiating the feature values contributed by the morphology of the anchor and the derivation process. This type of generalization is called feature-generalization. In other EBL approaches (Rayner, 1988; Neu- mann, 1994; Samuelsson, 1994) it is necessary to walk up and down the parse tree to determine the appropriate subtrees to generalize on and to sup- press the feature values. In our approach, the pro- cess of generalization is immediate, once we have the output of the parser, since the elementary trees an- chored by the words of the sentence define the sub- trees of the parse for generalization. Replacing the elementary trees with unistantiated feature values is all that is needed to achieve this generalization. The generalized parse of a sentence is stored in- dexed on the part-of-speech (POS) sequence of the training sentence. In the application phase, the POS sequence of the input sentence is used to retrieve a generalized parse(s) which is then instantiated with the features of the sentence. This method of retriev- ing a generalized parse allows for parsing of sen- tences of the same lengths and the same POS se- quence as those in the training corpus. However, in our approach there is another generalization that falls out of the LTAG representation which allows for flexible matching of the index to allow the system to parse sentences that are not necessarily of the same length as any sentence in the training corpus. Auxiliary trees in LTAG represent recursive struc- tures. So if there is an auxiliary tree that is used in an LTAG parse, then that tree with the trees for its arguments can be repeated any number of times, or possibly omitted altogether, to get parses of sen- tences that differ from the sentences of the training corpus only in the number of modifiers. This type of generalization is called modifier-generalization. This type of generalization is not possible in other EBL approaches. This implies that the POS sequence covered by the auxiliary tree and its arguments can be repeated zero or more times. As a result, the index of a gener- alized parse of a sentence with modifiers is no longer a string but a regular expression pattern on the POS sequence and retrieval of a generalized parse involves regular expression pattern matching on the indices. If, for example, the training example was (1) Show/V me/N the/D fiights/N from/P Boston/N to/P Philadelphia/N. then, the index of this sentence is (2) VNDN(PN)* since the two prepositions in the parse of this sen- tence would anchor (the same) auxiliary trees. 271 The most efficient method of performing regular expression pattern matching is to construct a finite state machine for each of the stored patterns and then traverse the machine using the given test pat- tern. If the machine reaches the final state, then the test pattern matches one of the stored patterns. Given that the index of a test sentence matches one of the indices from the training phase, the gen- eralized parse retrieved will be a parse of the test sentence, modulo the modifiers. For example, if the test sentence, tagged appropriately, is (3) Show/V me/S the/D flights/N from/P Boston/N to/P Philadelphia/N on/P Monday/N. 
then, Mthough the index of the test sentence matches the index of the training sentence, the gen- eralized parse retrieved needs to be augmented to accommodate the additional modifier. To accommodate the additional modifiers that may be present in the test sentences, we need to pro- vide a mechanism that assigns the additional modi- fiers and their arguments the following: 1. The elementary trees that they anchor and 2. The substitution and adjunction links to the trees they substitute or adjoin into. We assume that the additional modifiers along with their arguments would be assigned the same elementary trees and the same substitution and ad- junction links as were assigned to the modifier and its arguments of the training example. This, of course, means that we may not get all the possi- ble attachments of the modifiers at this time. (but see the discussion of the "stapler" Section 5.) 4 FST Representation The representation in Figure 6 combines the gener- alized parse with the POS sequence (regular expres- sion) that it is indexed by. The idea is to annotate each of the finite state arcs of the regular expression matcher with the elementary tree associated with that POS and also indicate which elementary tree it would be adjoined or substituted into. This results in a Finite State Transducer (FST) representation, illustrated by the example below. Consider the sen- tence (4) with the derivation tree in Figure 5. (4) show me the flights from Boston to Philadelphia. An alternate representation of the derivation tree that is similar to the dependency representation, is to associate with each word a tuple (this_tree, head_word, head_tree, number). The description of the tuple components is given in Table 1. Following this notation, the derivation tree in Fig- ure 5 (without the addresses of operations) is repre- sented as in (5). al [d~ow] oo'%% ~2 [me] (2.~) a~ [n~,ht~] (Z3) as ltl~l (1) I~ [frem] (0) 1~2 [to] (0) Z I ! ! a5 [m~tou] (2.2) ~ []~t-&lpU~] (2.2) Figure 5: Derivation Tree for the sentence: show me the flights from Boston to Philadelphia this_tree : the elementary tree that the word anchors head_word : the word on which the current word is dependent on; "-" if the current word does not depend on any other word. head_tree : the tree anchored by the head word; "-" if the current word does not depend on any other word. number : a signed number that indicates the direction and the ordinal position of the particular head elementary tree from the position of the current word OR : an unsigned number that indicates the Gorn-address (i.e., the node address) in the derivation tree to which the word attaches OR : "-" if the current word does not depend on any other word. Table 1: Description of the tuple components (5) show/(al, -, -, -) the/(a3, flights, ~4,+1) from/(fll, flights, a4, 2) to/(fi2, flights,a4, 2) me/(a2, show,al,-l) fiights/ ( a4,show , ~I , - I ) Boston/(as, from, fll -1) Philadelphia/(as, to, f12,-1) Generalization of this derivation tree results in the representation in (6). (6) -, -, -) D/(a3, N, a4,+l) (P/(fil, N, a4, 2) (P/(fl2, N, a4, 2) N/(a~, V,al,-1) N/(c~4,V, C~l,-1) N/(as, P, fl,-1))* N/(a6, P, fl,-1))* After generalization, the trees /h and f12 are no longer distinct so we denote them by ft. The trees a5 and a6 are also no longer distinct, so we denote them by a. 
With this change in notation, the two Kleene star regular expressions in (6) can be merged into one, and the resulting representation is (7) 272 v/(al,-,- ,-) N/(a2,v,a1,-t) I)/(%, l~.a 4 ,+t) N/(a4,v, at,-1 ) P/( ~.N.a 4,2) ~Y( a, P, ~, -t) Figure 6: Finite State Transducer Representation for the sentences: show me the flights from Boston to Philadelphia, show me the flights from Boston to Philadelphia on Monday, ... (v) -, -, -) D/(as, N, o~4,+1) (P/(3, N, o~4, 2) V,al,-1) N/(~4,V, ~1,-1) N/(a, P, 3,-1) )* which can be seen as a path in an FST as in Figure 6. This FST representation is possible due to the lex- icalized nature of the elementary trees. This repre- sentation makes a distinction between dependencies between modifiers and complements. The number in the tuple associated with each word is a signed num- ber if a complement dependency is being expressed and is an unsigned number if a modifier dependency is being expressed, s 5 Stapler In this section, we introduce a device called "sta- pler", a very impoverished parser that takes as in- put the result of the EBL lookup and returns the parse(s) for the sentence. The output of the EBL lookup is a sequence of elementary trees annotated with dependency links - an almost parse. To con- struct a complete parse, the "stapler" performs the following tasks: • Identify the nature of link: The dependency links in the almost parse are to be distinguished as either substitution links or adjunction links. This task is extremely straightforward since the types (initial or auxiliary) of the elementary trees a dependency link connects identifies the nature of the link. • Modifier Attachment: The EBL lookup is not guaranteed to output all possible modifier- head dependencies for a give input, since the modifier-generalization assigns the same modifier-head link, as was in the training ex- ample, to all the additional modifiers. So it is the task of the stapler to compute all the alter- nate attachments for modifiers. • Address of Operation: The substitution and ad- junction links are to be assigned a node ad- dress to indicate the location of the operation. The "staPler" assigns this using the structure of 3In a complement auxiliary tree the anchor subcat- egorizes for the foot node, which is not the case for a modifier auxiliaxy tree. the elementary trees that the words anchor and their linear order in the sentence. Feature Instantiation: The values of the fea- tures on the nodes of the elementary trees are to be instantiated by a process of unification. Since the features in LTAGs are finite-valued and only features within an elementary tree can be co-indexed, the "stapler" performs term- unification to instantiate the features. 6 Experiments and Results We now present experimental results from two dif- ferent sets of experiments performed to show the effectiveness of our approach. The first set of ex- periments, (Experiments l(a) through 1(c)), are in- tended to measure the coverage of the FST represen- tation of the parses of sentences from a range of cor- pora (ATIS, IBM-Manual and Alvey). The results of these experiments provide a measure of repeti- tiveness of patterns as described in this paper, at the sentence level, in each of these corpora. Experiment l(a): The details of the experiment with the ATIS corpus are as follows. 
A total of 465 sentences, average length of 10 words per sentence, which had been completely parsed by the XTAG sys- tem were randomly divided into two sets, a train- ing set of 365 sentences and a test set of 100 sen- tences, using a random number generator. For each of the training sentences, the parses were ranked us- ing heuristics 4 (Srinivas et al., 1994) and the top three derivations were generMized and stored as an FST. The FST was tested for retrieval of a gener- alized parse for each of the test sentences that were pretagged with the correct POS sequence (In Ex- periment 2, we make use of the POS tagger to do the tagging). When a match is found, the output of the EBL component is a generalized parse that associates with each word the elementary tree that it anchors and the elementary tree into which it ad- joins or substitutes into - an almost parse, s 4We axe not using stochastic LTAGs. For work on Stochastic LTAGs see (Resnik, 1992; Schabes, 1992). SSee (Joshi and Srinivas, 1994) for the role of almost parse in supertag disaanbiguation. 273 Corpus ATIS IBM Alvey Size of # of states % Coverage Response Time Training set (sees) 365 6000 80% 1.00 see/sent 1100 21000 40% 4.00 sec/sent 80 500 50% 0.20 sec/NP Table 2: Coverage and Retrieval times for various corpora Experiment l(b) and 1(c): Similar experiments were conducted using the IBM-manual corpus and a set of noun definitions from the LDOCE dictionary that were used as the Alvey test set (Carroll, 1993). Results of these experiments are summarized in Table 2. The size of the FST obtained for each of the corpora, the coverage of the FST and the traversal time per input are shown in this table. The cover- age of the FST is the number of inputs that were as- signed a correct generalized parse among the parses retrieved by traversing the FST. Since these experiments measure the performance of the EBL component on various corpora we will refer to these results as the 'EBL-Lookup times'. The second set of experiments measure the perfor- mance improvement obtained by using EBL within the XTAG system on the ATIS corpus. The per- formance was measured on the same set of 100 sen- tences that was used as test data in Experiment l(a). The FST constructed from the generalized parses of the 365 ATIS sentences used in experiment l(a) has been used in this experiment as well. Experiment 2(a): The performance of XTAG on the 100 sentences is shown in the first row of Table 3. The coverage represents the percentage of sentences that were assigned a parse. Experiment 2(b): This experiment is similar to Experiment l(a). It attempts to measure the cov- erage and response times for retrieving a general- ized parse from the FST. The results are shown in the second row of Table 3. The difference in the response times between this experiment and Exper- iment l(a) is due to the fact that we have included here the times for morphological analysis and the POS tagging of the test sentence. As before, 80% of the sentences were assigned a generalized parse. However, the speedup when compared to the XTAG system is a factor of about 60. Experiment 2(c): The setup for this experiment is shown in Figure 7. The almost parse from the EBL lookup is input to the full parser of the XTAG sys- tem. The full parser does not take advantage of the dependency information present in the almost parse, however it benefits from the elementary tree assign- ment to the words in it. 
This information helps the full parser, by reducing the ambiguity of assigning a correct elementary tree sequence for the words of the sentence. The speed up shown in the third row of Table 3 is entirely due to this ambiguity reduc- tion. If the EBL lookup fails to retrieve a parse, which happens for 20% of the sentences, then the s ................. .i l~.ivsttm llm Figure 7: System Setup for Experiment 2(c). tree assignment ambiguity is not reduced and the full parser parses with all the trees for the words of the sentence. The drop in coverage is due to the fact that for 10% of the sentences, the generalized parse retrieved could not be instantiated to the features of the sentence. System Coverage % Average time (in es) XTAG 100% 125.18 EBL lookup 80% 1.78 EBL+XTAG parser 90% 62.93 EBL+Stapler 70% 8.00 Table 3: Performance comparison of XTAG with and without EBL component Experiment 2(d): The setup for this experiment is shown in Figure 4. In this experiment, the almost parse resulting from the EBL lookup is input to the "stapler" that generates all possible modifier attach- ments and performs term unification thus generating all the derivation trees. The "stapler" uses both the elementary tree assignment information and the de- pendency information present in the almost parse and speeds up the performance even further, by a factor of about 15 with further decrease in coverage by 10% due to the same reason as mentioned in Ex- periment 2(c). However the coverage of this system is limited by the coverage of the EBL lookup. The results of this experiment are shown in the fourth row of Table 3. 274 7 Relevance to other lexicalized grammars Some aspects of our approach can be extended to other lexicalized grammars, in particular to catego- rial grammars (e.g. Combinatory Categorial Gram- mar (CCG) (Steedman, 1987)). Since in a categorial grammar the category for a lexical item includes its arguments, the process of generalization of the parse can also be immediate in the same sense of our ap- proach. The generalization over recursive structures in a categorial grammar, however, will require fur- ther annotations of the proof trees in order to iden- tify the 'anchor' of a recursive structure. If a lexi- cal item corresponds to a potential recursive struc- ture then it will be necessary to encode this informa- tion by making the result part of the functor to be X --+ X. Further annotation of the proof tree will be required to keep track of dependencies in order to represent the generalized parse as an FST. 8 Conclusion In this paper, we have presented some novel applica- tions of EBL technique to parsing LTAG. We have also introduced a highly impoverished parser called the "stapler" that in conjunction with the EBL re- suits in a speed up of a factor of about 15 over a system without the EBL component. To show the effectiveness of our approach we have also discussed the performance of EBL on different corpora, and different architectures. As part of the future work we will extend our ap- proach to corpora with fewer repetitive sentence pat- terns. We propose to do this by generalizing at the phrasal level instead of at the sentence level. References John Carroll. 1993. Practical Unification-based Parsing of Natural Language. University of Cambridge, Com- puter Laboratory, Cambridge, England. Christy Doran, DahLia Egedi, Beth Ann Hockey, B. Srini- vas, and Martin Zaidel. 1994. XTAG System - A Wide Coverage Grammar for English. 
In Proceedings of the 17 *h International Conference on Computational Lin- guistics (COLING '9~), Kyoto, Japan, August. Aravind K. Joshi and B. Srinivas. 1994. Disambigu~- tion of Super Parts of Speech (or Supertags): Almost Parsing. In Proceedings of the 17 th International Con- ]erence on Computational Linguistics (COLING '9~), Kyoto, Japan, August. Steve Minton. 1988. Qunatitative Results concerning the utility of Explanation-Based Learning. In Proceed- ings of 7 ~h AAAI Conference, pages 564-569, Saint Paul, Minnesota. Tom M. Mitchell, Richard M. Keller, and Smadax T. Kedar-Carbelli. 1986. Explanation-Based Generaliza- tion: A Unifying View. Machine Learning 1, 1:47-80. Gfinter Neumann. 1994. Application of Explanation- based Learning for Efficient Processing of Constraint- based Grammars. In 10 th IEEE Conference on Artifi- cial Intelligence for Applications, Sazt Antonio, Texas. Patrick Paroubek, Yves Schabes, and Aravind K. Joshi. 1992. Xtag - a graphical workbench for developing tree-adjoining grammars. In Third Conference on Ap- plied Natural Language Processing, Trento, Italy. Manny Rayner. 1988. Applying Explanation-Based Generalization to Natural Langua4ge Processing. In Proceedings of the International Conference on Fifth Generation Computer Systems, Tokyo. Philip Resnik. 1992. Probabilistic tree-adjoining gram- max as a framework for statistical natural language processing. In Proceedings of the Fourteenth In- ternational Conference on Computational Linguistics (COLING '9~), Ntntes, France, July. Christer Samuelsson aJad Manny Rayner. 1991. Quan- titative Evaluation of Explanation-Based Learning as an Optimization Tool for Large-Scale Natural Laat- guage System. In Proceedings of the I~ h Interna. tional Joint Conference on Artificial Intelligence, Syd- ney, Australia. Chister Samuelsson. 1994. Grammar Specialization through Entropy Thresholds. In 32nd Meeting of the Association for Computational Linguistics, Las Cruces, New Mexico. Yves Schabes, Anne Abeill~, aJad Aravind K. Joshi. 1988. parsing strategies with 'lexicalized' grammars: Application to "l~ee Adjoining Grammars. In Pro- ceedings of the 12 *4 International Con/erence on Com- putational Linguistics ( COLIN G '88), Budapest, Hun- gary, August. Yves Sch&bes. 1990. Mathematical and Computational Aspects of Lexicalized Grammars. Ph.D. thesis, Com- puter Science Department, University of Pennsylva- nia. Yves Schabes. 1992. Stochastic lexicalized tree- adjoining grammars. In Proceedings o] the Fourteenth International Con]erence on Computational Linguis- tics (COLING '9~), Nantes, Fr&ace, July. B. Srinivas, Christine Dora,s, Seth Kullck, and Anoop Sarkar. 1994. Evaluating a wide-coverage grammar. Manuscript, October. Mark Steedman. 1987. Combinatory Graanmaxs and Paxasitic Gaps. Natural Language and Linguistic The- ory, 5:403-439. Frank van Haxmelen a~d Allan Bundy. 1988. Explemation-Based Generafization -- Paxtial Evalua- tion. Artificial Intelligence, 36:401-412. 275 | 1995 | 36 |
Statistical Decision-Tree Models for Parsing*

David M. Magerman
Bolt Beranek and Newman Inc.
70 Fawcett Street, Room 15/148
Cambridge, MA 02138, USA
magerman@bbn.com

Abstract

Syntactic natural language parsers have shown themselves to be inadequate for processing highly-ambiguous large-vocabulary text, as is evidenced by their poor performance on domains like the Wall Street Journal, and by the movement away from parsing-based approaches to text-processing in general. In this paper, I describe SPATTER, a statistical parser based on decision-tree learning techniques which constructs a complete parse for every sentence and achieves accuracy rates far better than any published result. This work is based on the following premises: (1) grammars are too complex and detailed to develop manually for most interesting domains; (2) parsing models must rely heavily on lexical and contextual information to analyze sentences accurately; and (3) existing n-gram modeling techniques are inadequate for parsing models. In experiments comparing SPATTER with IBM's computer manuals parser, SPATTER significantly outperforms the grammar-based parser. Evaluating SPATTER against the Penn Treebank Wall Street Journal corpus using the PARSEVAL measures, SPATTER achieves 86% precision, 86% recall, and 1.3 crossing brackets per sentence for sentences of 40 words or less, and 91% precision, 90% recall, and 0.5 crossing brackets for sentences between 10 and 20 words in length.

* This work was sponsored by the Advanced Research Projects Agency, contract DABT63-94-C-0062. It does not reflect the position or the policy of the U.S. Government, and no official endorsement should be inferred. Thanks to the members of the IBM Speech Recognition Group for their significant contributions to this work.

1 Introduction

Parsing a natural language sentence can be viewed as making a sequence of disambiguation decisions: determining the part-of-speech of the words, choosing between possible constituent structures, and selecting labels for the constituents. Traditionally, disambiguation problems in parsing have been addressed by enumerating possibilities and explicitly declaring knowledge which might aid the disambiguation process. However, these approaches have proved too brittle for most interesting natural language problems.

This work addresses the problem of automatically discovering the disambiguation criteria for all of the decisions made during the parsing process, given the set of possible features which can act as disambiguators. The candidate disambiguators are the words in the sentence, relationships among the words, and relationships among constituents already constructed in the parsing process.

Since most natural language rules are not absolute, the disambiguation criteria discovered in this work are never applied deterministically. Instead, all decisions are pursued non-deterministically according to the probability of each choice. These probabilities are estimated using statistical decision-tree models. The probability of a complete parse tree (T) of a sentence (S) is the product of each decision (d_i) conditioned on all previous decisions:

P(T|S) = ∏_{d_i ∈ T} P(d_i | d_{i-1} d_{i-2} ... d_1 S).

Each decision sequence constructs a unique parse, and the parser selects the parse whose decision sequence yields the highest cumulative probability. By combining a stack decoder search with a breadth-first algorithm with probabilistic pruning, it is possible to identify the highest-probability parse for any sentence using a reasonable amount of memory and time.
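To make the decomposition concrete, a parse can be scored by accumulating the probabilities of its decision sequence; the highest-scoring sequence is the parse the parser returns. The toy scorer below assumes a decision_prob function standing in for the decision-tree models described in the next section, and it simply compares enumerated candidates rather than performing the stack-decoder search used by SPATTER.

```python
import math

def parse_log_prob(decisions, sentence, decision_prob):
    """log P(T|S) for a parse T encoded as its decision sequence d_1 ... d_n.

    `decision_prob(d, history, sentence)` is an assumed model interface
    returning P(d | d_1 ... d_{i-1}, S)."""
    log_p, history = 0.0, []
    for d in decisions:
        p = decision_prob(d, tuple(history), sentence)
        if p == 0.0:
            return float("-inf")
        log_p += math.log(p)
        history.append(d)
    return log_p

def best_parse(candidates, sentence, decision_prob):
    """Select the candidate decision sequence with the highest cumulative probability."""
    return max(candidates, key=lambda ds: parse_log_prob(ds, sentence, decision_prob))
```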
By combining a stack decoder search with a breadth- first algorithm with probabilistic pruning, it is pos- sible to identify the highest-probability parse for any sentence using a reasonable amount of memory and time. 276 The claim of this work is that statistics from a large corpus of parsed sentences combined with information-theoretic classification and training al- gorithms can produce an accurate natural language parser without the aid of a complicated knowl- edge base or grammar. This claim is justified by constructing a parser, called SPATTER (Statistical PATTErn Recognizer), based on very limited lin- gnistic information, and comparing its performance to a state-of-the-art grammar-based parser on a common task. It remains to be shown that an accu- rate broad-coverage parser can improve the perfor- mance of a text processing application. This will be the subject of future experiments. One of the important points of this work is that statistical models of natural language should not be restricted to simple, context-insensitive models. In a problem like parsing, where long-distance lex- ical information is crucial to disambiguate inter- pretations accurately, local models like probabilistic context-free grammars are inadequate. This work illustrates that existing decision-tree technology can be used to construct and estimate models which se- lectively choose elements of the context which con- tribute to disambignation decisions, and which have few enough parameters to be trained using existing resources. I begin by describing decision-tree modeling, showing that decision-tree models are equivalent to interpolated n-gram models. Then I briefly describe the training and parsing procedures used in SPAT- TER. Finally, I present some results of experiments comparing SPATTER with a grammarian's rule- based statistical parser, along with more recent re- suits showing SPATTER applied to the Wall Street Journal domain. 2 Decision-Tree Modeling Much of the work in this paper depends on replac- ing human decision-making skills with automatic decision-making algorithms. The decisions under consideration involve identifying constituents and constituent labels in natural language sentences. Grammarians, the human decision-makers in pars- ing, solve this problem by enumerating the features of a sentence which affect the disambiguation deci- sions and indicating which parse to select based on the feature values. The grammarian is accomplish- ing two critical tasks: identifying the features which are relevant to each decision, and deciding which choice to select based on the values of the relevant features. Decision-tree classification algorithms account for both of these tasks, and they also accomplish a third task which grammarians classically find dif- ficult. By assigning a probability distribution to the possible choices, decision trees provide a ranking sys- tem which not only specifies the order of preference for the possible choices, but also gives a measure of the relative likelihood that each choice is the one which should be selected. 2.1 What is a Decision Tree? A decision tree is a decision-making device which assigns a probability to each of the possible choices based on the context of the decision: P(flh), where f is an element of the future vocabulary (the set of choices) and h is a history (the context of the de- cision). This probability P(flh) is determined by asking a sequence of questions ql q2 ... 
q_n about the context, where the i-th question asked is uniquely determined by the answers to the i - 1 previous questions.

For instance, consider the part-of-speech tagging problem. The first question a decision tree might ask is:

1. What is the word being tagged?

If the answer is the, then the decision tree needs to ask no more questions; it is clear that the decision tree should assign the tag f = determiner with probability 1. If, instead, the answer to question 1 is bear, the decision tree might next ask the question:

2. What is the tag of the previous word?

If the answer to question 2 is determiner, the decision tree might stop asking questions and assign the tag f = noun with very high probability, and the tag f = verb with much lower probability. However, if the answer to question 2 is noun, the decision tree would need to ask still more questions to get a good estimate of the probability of the tagging decision. The decision tree described in this paragraph is shown in Figure 1.

Each question asked by the decision tree is represented by a tree node (an oval in the figure) and the possible answers to this question are associated with branches emanating from the node. Each node defines a probability distribution on the space of possible decisions. A node at which the decision tree stops asking questions is a leaf node. The leaf nodes represent the unique states in the decision-making problem, i.e. all contexts which lead to the same leaf node have the same probability distribution for the decision.

Figure 1: Partially-grown decision tree for part-of-speech tagging. (The leaf reached when the word is bear and the previous tag is determiner assigns P(noun | bear, determiner) = 0.8 and P(verb | bear, determiner) = 0.2.)

2.2 Decision Trees vs. n-grams

A decision-tree model is not really very different from an interpolated n-gram model. In fact, they are equivalent in representational power. The main differences between the two modeling techniques are how the models are parameterized and how the parameters are estimated.

2.2.1 Model Parameterization

First, let's be very clear on what we mean by an n-gram model. Usually, an n-gram model refers to a Markov process where the probability of a particular token being generated is dependent on the values of the previous n - 1 tokens generated by the same process. By this definition, an n-gram model has |W|^n parameters, where |W| is the number of unique tokens generated by the process.

However, here let's define an n-gram model more loosely as a model which defines a probability distribution on a random variable given the values of n - 1 random variables, P(f | h_1 h_2 ... h_{n-1}). There is no assumption in the definition that any of the random variables F or H_i range over the same vocabulary. The number of parameters in this n-gram model is |F| |H_1| |H_2| ... |H_{n-1}|.

Using this definition, an n-gram model can be represented by a decision-tree model with n - 1 questions. For instance, the part-of-speech tagging model P(t_i | w_i t_{i-1} t_{i-2}) can be interpreted as a 4-gram model, where H_1 is the variable denoting the word being tagged, H_2 is the variable denoting the tag of the previous word, and H_3 is the variable denoting the tag of the word two words back. Hence, this 4-gram tagging model is the same as a decision-tree model which always asks the sequence of 3 questions:

1. What is the word being tagged?
2. What is the tag of the previous word?
3. What is the tag of the word two words back?

But can a decision-tree model be represented by an n-gram model? No, but it can be represented by an interpolated n-gram model. The proof of this assertion is given in the next section.
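In code, a decision tree of this kind is just a nested sequence of history questions with a distribution over the future vocabulary at each leaf. The fragment below encodes the partially-grown tagging tree of Figure 1; the probabilities are the illustrative ones from that figure, and the default branches (for answers the figure leaves unexplored) are placeholders, not part of the original model.

```python
class Leaf:
    def __init__(self, distribution):
        self.distribution = distribution          # e.g. {"noun": 0.8, "verb": 0.2}

    def predict(self, history):
        return self.distribution

class Node:
    def __init__(self, question, branches, default):
        self.question = question                  # function from history to an answer
        self.branches = branches                  # answer -> subtree (Node or Leaf)
        self.default = default                    # subtree for answers not listed

    def predict(self, history):
        answer = self.question(history)
        return self.branches.get(answer, self.default).predict(history)

# The partially-grown tree of Figure 1 (question 1, then question 2 for "bear").
figure_1_tree = Node(
    question=lambda h: h["word"],
    branches={
        "the": Leaf({"determiner": 1.0}),
        "bear": Node(
            question=lambda h: h["previous_tag"],
            branches={"determiner": Leaf({"noun": 0.8, "verb": 0.2})},
            default=Leaf({"noun": 0.5, "verb": 0.5}),   # placeholder: tree would ask more questions
        ),
    },
    default=Leaf({}),                                   # placeholder for words not covered above
)

print(figure_1_tree.predict({"word": "bear", "previous_tag": "determiner"}))
```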
The proof of this assertion is given in the next section. 2.2.2 Model Estimation The standard approach to estimating an n-gram model is a two step process. The first step is to count the number of occurrences of each n-gram from a training corpus. This process determines the empir- ical distribution, Count(hlhz ... hn-lf) P(flhlh2... hn-1)= Count(hlh2... hn-1) The second step is smoothing the empirical distri- bution using a separate, held-out corpus. This step improves the empirical distribution by finding statis- tically unreliable parameter estimates and adjusting them based on more reliable information. A commonly-used technique for smoothing is deleted interpolation. Deleted interpolation es- timates a model P(f[hlh2... hn-1) by us- ing a linear combination of empirical models P(f]hklhk=... hk.,), where m < n and k,-x < ki < n for all i < m. For example, a model [~(fihlh2h3) might be interpolated as follows: P(.flhl h2hs ) = AI (hi h2hs)P(.fJhl h2h3) + :~2(h~h2h3)P(flhlh2) + As(hlh2h3)P(Ylhzh3) + )~(hlhuha)P(flh2hs) + As(hzhshs)P(f]hlh2) + )~ (hi h2h3)P(.flhl) + A~ (hi h2ha)P(.flh2) + AS (hlh2hs)P(flh3) where ~'~)q(hlh2h3) = 1 for all histories hlhshs. The optimal values for the A~ functions can be estimated using the forward-backward algorithm (Baum, 1972). A decision-tree model can be represented by an interpolated n-gram model as follows. A leaf node in a decision tree can be represented by the sequence of question answers, or history values, which leads the decision tree to that leaf. Thus, a leaf node defines a probability distribution based on values of those questions: P(flhklhk2 ... ha.,), where m < n and ki-1 < ki < n, and where hk~ is the answer to one of the questions asked on the path from the root to the leaf. ~ But this is the same as one of the terms in the interpolated n-gram model. So, a decision 1Note that in a decision tree, the leaf distribution is not affected by the order in which questions are asked. Asking about hi followed by h2 yields the same future distribution as asking about h2 followed by hi. 278 tree can be defined as an interpolated n-gram model where the At function is defined as: 1 if hk~hk2.., h~. is aleaf, Ai(hk~hk2... hk,) = 0 otherwise. 2.3 Decision-Tree Algorithms The point of showing the equivalence between n- gram models and decision-tree models is to make clear that the power of decision-tree models is not in their expressiveness, but instead in how they can be automatically acquired for very large modeling problems. As n grows, the parameter space for an n-gram model grows exponentially, and it quickly becomes computationally infeasible to estimate the smoothed model using deleted interpolation. Also, as n grows large, the likelihood that the deleted in- terpolation process will converge to an optimal or even near-optimal parameter setting becomes van- ishingly small. On the other hand, the decision-tree learning al- gorithm increases the size of a model only as the training data allows. Thus, it can consider very large history spaces, i.e. n-gram models with very large n. Regardless of the value of n, the number of param- eters in the resulting model will remain relatively constant, depending mostly on the number of train- ing examples. The leaf distributions in decision trees are empiri- cal estimates, i.e. relative-frequency counts from the training data. Unfortunately, they assign probabil- ity zero to events which can possibly occur. 
There- fore, just as it is necessary to smooth empirical n- gram models, it is also necessary to smooth empirical decision-tree models. The decision-tree learning algorithms used in this work were developed over the past 15 years by the IBM Speech Recognition group (Bahl et al., 1989). The growing algorithm is an adaptation of the CART algorithm in (Breiman et al., 1984). For detailed descriptions and discussions of the decision- tree algorithms used in this work, see (Magerman, 1994). An important point which has been omitted from this discussion of decision trees is the fact that only binary questions are used in these decision trees. A question which has k values is decomposed into a se- quence of binary questions using a classification tree on those k values. For example, a question about a word is represented as 30 binary questions. These 30 questions are determined by growing a classifi- cation tree on the word vocabulary as described in (Brown et al., 1992). The 30 questions represent 30 different binary partitions of the word vocabulary, and these questions are defined such that it is possi- ble to identify each word by asking all 30 questions. For more discussion of the use of binary decision-tree questions, see (Magerman, 1994). 3 SPATTER Parsing The SPATTER parsing algorithm is based on inter- preting parsing as a statistical pattern recognition process. A parse tree for a sentence is constructed by starting with the sentence's words as leaves of a tree structure, and labeling and extending nodes these nodes until a single-rooted, labeled tree is con- structed. This pattern recognition process is driven by the decision-tree models described in the previous section. 3.1 SPATTER Representation A parse tree can be viewed as an n-ary branching tree, with each node in a tree labeled by either a non-terminal label or a part-of-speech label. If a parse tree is interpreted as a geometric pattern, a constituent is no more than a set of edges which meet at the same tree node. For instance, the noun phrase, "a brown cow," consists of an edge extending to the right from "a," an edge extending to the left from "cow," and an edge extending straight up from "brown". Figure 2: Representation of constituent and labeling of extensions in SPATTER. In SPATTER, a parse tree is encoded in terms of four elementary components, or features: words, tags, labels, and extensions. Each feature has a fixed vocabulary, with each element of a given feature vo- cabulary having a unique representation. The word feature can take on any value of any word. The tag feature can take on any value in the part-of-speech tag set. The label feature can take on any value in the non-terminal set. The extension can take on any of the following five values: right - the node is the first child of a constituent; left - the node is the last child of a constituent; up - the node is neither the first nor the last child of a constituent; unary - the node is a child of a unary constituent; 279 root - the node is the root of the tree. For an n word sentence, a parse tree has n leaf nodes, where the word feature value of the ith leaf node is the ith word in the sentence. The word fea- ture value of the internal nodes is intended to con- tain the lexical head of the node's constituent. A deterministic lookup table based on the label of the internal node and the labels of the children is used to approximate this linguistic notion. 
The SPATTER representation of the sentence

(S (N Each_DD1 code_NN1 (Tn used_VVN (P by_II (N the_AT PC_NN1)))) (V is_VBZ listed_VVN))

is shown in Figure 3. The nodes are constructed bottom-up from left-to-right, with the constraint that no constituent node is constructed until all of its children have been constructed. The order in which the nodes of the example sentence are constructed is indicated in the figure.

Figure 3: Treebank analysis encoded using feature values.

3.2 Training SPATTER's models

SPATTER consists of three main decision-tree models: a part-of-speech tagging model, a node-extension model, and a node-labeling model. Each of these decision-tree models is grown using the following questions, where X is one of word, tag, label, or extension, and Y is either left or right:

• What is the X at the current node?
• What is the X at the node to the Y?
• What is the X at the node two nodes to the Y?
• What is the X at the current node's first child from the Y?
• What is the X at the current node's second child from the Y?

For each of the nodes listed above, the decision tree could also ask about the number of children and span of the node. For the tagging model, the values of the previous two words and their tags are also asked, since they might differ from the head words of the previous two constituents.

The training algorithm proceeds as follows. The training corpus is divided into two sets, approximately 90% for tree growing and 10% for tree smoothing. For each parsed sentence in the tree growing corpus, the correct state sequence is traversed. Each state transition from s_i to s_{i+1} is an event; the history is made up of the answers to all of the questions at state s_i and the future is the value of the action taken from state s_i to state s_{i+1}. Each event is used as a training example for the decision-tree growing process for the appropriate feature's tree (e.g. each tagging event is used for growing the tagging tree, etc.). After the decision trees are grown, they are smoothed using the tree smoothing corpus using a variation of the deleted interpolation algorithm described in (Magerman, 1994).

3.3 Parsing with SPATTER

The parsing procedure is a search for the highest probability parse tree. The probability of a parse is just the product of the probability of each of the actions made in constructing the parse, according to the decision-tree models.

Because of the size of the search space (roughly O(|T|^n |N|^n), where |T| is the number of part-of-speech tags, n is the number of words in the sentence, and |N| is the number of non-terminal labels), it is not possible to compute the probability of every parse. However, the specific search algorithm used is not very important, so long as there are no search errors. A search error occurs when the highest probability parse found by the parser is not the highest probability parse in the space of all parses.

SPATTER's search procedure uses a two-phase approach to identify the highest probability parse of a sentence. First, the parser uses a stack decoding algorithm to quickly find a complete parse for the sentence. Once the stack decoder has found a complete parse of reasonable probability (> 10^-5), it switches to a breadth-first mode to pursue all of the partial parses which have not been explored by the stack decoder.
In this second mode, it can safely discard any partial parse which has a probability lower than the probability of the highest probability completed parse. Using these two search modes, SPATTER guarantees that it will find the highest probability parse. The only limitation of this search technique is that, for sentences which are modeled poorly, the search might exhaust the available memory before completing both phases. However, these search errors conveniently occur on sentences which SPATTER is likely to get wrong anyway, so there isn't much performance lost due to the search errors. Experimentally, the search algorithm guarantees the highest probability parse is found for over 96% of the sentences parsed.

4 Experiment Results

In the absence of an NL system, SPATTER can be evaluated by comparing its top-ranking parse with the treebank analysis for each test sentence. The parser was applied to two different domains, IBM Computer Manuals and the Wall Street Journal.

4.1 IBM Computer Manuals

The first experiment uses the IBM Computer Manuals domain, which consists of sentences extracted from IBM computer manuals. The training and test sentences were annotated by the University of Lancaster. The Lancaster treebank uses 195 part-of-speech tags and 19 non-terminal labels. This treebank is described in great detail in (Black et al., 1993).

The main reason for applying SPATTER to this domain is that IBM had spent the previous ten years developing a rule-based, unification-style probabilistic context-free grammar for parsing this domain. The purpose of the experiment was to estimate SPATTER's ability to learn the syntax for this domain directly from a treebank, instead of depending on the interpretive expertise of a grammarian.

The parser was trained on the first 30,800 sentences from the Lancaster treebank. The test set included 1,473 new sentences, whose lengths range from 3 to 30 words, with a mean length of 13.7 words. These sentences are the same test sentences used in the experiments reported for IBM's parser in (Black et al., 1993). In (Black et al., 1993), IBM's parser was evaluated using the 0-crossing-brackets measure, which represents the percentage of sentences for which none of the constituents in the parser's parse violates the constituent boundaries of any constituent in the correct parse. After over ten years of grammar development, the IBM parser achieved a 0-crossing-brackets score of 69%. On this same test set, SPATTER scored 76%.

4.2 Wall Street Journal

This experiment is intended to illustrate SPATTER's ability to accurately parse a highly-ambiguous, large-vocabulary domain. These experiments use the Wall Street Journal domain, as annotated in the Penn Treebank, version 2. The Penn Treebank uses 46 part-of-speech tags and 27 non-terminal labels.(2)

The WSJ portion of the Penn Treebank is divided into 25 sections, numbered 00 - 24. In these experiments, SPATTER was trained on sections 02 - 21, which contain approximately 40,000 sentences. The test results reported here are from section 00, which contains 1920 sentences.(3) Sections 01, 22, 23, and 24 will be used as test data in future experiments.

The Penn Treebank is already tokenized and sentence detected by human annotators, and thus the test results reported here reflect this. SPATTER parses word sequences, not tag sequences. Furthermore, SPATTER does not simply pre-tag the sentences and use only the best tag sequence in parsing.
Instead, it uses a probabilistic model to assign tags to the words, and considers all possible tag sequences according to the probability they are assigned by the model. No information about the legal tags for a word is extracted from the test corpus. In fact, no information other than the words is used from the test corpus.

For the sake of efficiency, only the sentences of 40 words or fewer are included in these experiments.(4) For this test set, SPATTER takes on average 12 seconds per sentence on an SGI R4400 with 160 megabytes of RAM.

(2) This treebank also contains coreference information, predicate-argument relations, and trace information indicating movement; however, none of this additional information was used in these parsing experiments.
(3) For an independent research project on coreference, sections 00 and 01 have been annotated with detailed coreference information. A portion of these sections is being used as a development test set. Training SPATTER on them would improve parsing accuracy significantly and skew these experiments in favor of parsing-based approaches to coreference. Thus, these two sections have been excluded from the training set and reserved as test sentences.
(4) SPATTER returns a complete parse for all sentences of fewer than 50 words in the test set, but the sentences of 41 - 50 words required much more computation than the shorter sentences, and so they have been excluded.

To evaluate SPATTER's performance on this domain, I am using the PARSEVAL measures, as defined in (Black et al., 1991):

Precision = (no. of correct constituents in SPATTER parse) / (no. of constituents in SPATTER parse)

Recall = (no. of correct constituents in SPATTER parse) / (no. of constituents in treebank parse)

Crossing Brackets = no. of constituents which violate constituent boundaries with a constituent in the treebank parse.

The precision and recall measures do not consider constituent labels in their evaluation of a parse, since the treebank label set will not necessarily coincide with the labels used by a given grammar. Since SPATTER uses the same syntactic label set as the Penn Treebank, it makes sense to report labelled precision and labelled recall. These measures are computed by considering a constituent to be correct if and only if its label matches the label in the treebank.

Table 1 shows the results of SPATTER evaluated against the Penn Treebank on the Wall Street Journal section 00.

Table 1: Results from the WSJ Penn Treebank experiments.

Comparisons              1759    1114    653
Avg. Sent. Length        22.3    16.8    15.6
Treebank Constituents    17.58   13.21   12.10
Parse Constituents       17.48   13.13   12.03
Tagging Accuracy         96.5%   96.6%   96.5%
Crossings Per Sentence   1.33    0.63    0.49
Sent. with 0 Crossings   55.4%   69.8%   73.8%
Sent. with 1 Crossing    69.2%   83.8%   86.8%
Sent. with 2 Crossings   80.2%   92.1%   95.1%
Precision                86.3%   89.8%   90.8%
Recall                   85.8%   89.3%   90.3%
Labelled Precision       84.5%   88.1%   89.0%
Labelled Recall          84.0%   87.6%   88.5%

Figures 5, 6, and 7 illustrate the performance of SPATTER as a function of sentence length. SPATTER's performance degrades slowly for sentences up to around 28 words, and performs more poorly and more erratically as sentences get longer. Figure 4 indicates the frequency of each sentence length in the test corpus.

Figure 4: Frequency in the test corpus as a function of sentence length for Wall Street Journal experiments.
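To make the PARSEVAL definitions above concrete, here is a small illustrative sketch, not taken from SPATTER, that scores one candidate parse against a treebank parse. Constituents are represented as (label, start, end) spans, and both parses are assumed to be non-empty.

```python
def parseval_scores(candidate, gold):
    """candidate, gold: lists of (label, start, end) constituent spans."""
    cand_spans = [(s, e) for _, s, e in candidate]
    gold_spans = [(s, e) for _, s, e in gold]

    # unlabelled: a candidate constituent is correct if its span occurs in the gold parse
    correct = sum(1 for span in cand_spans if span in gold_spans)
    precision = correct / len(cand_spans)
    recall = correct / len(gold_spans)

    # labelled: the constituent label must match as well
    labelled_correct = sum(1 for c in candidate if c in gold)
    labelled_precision = labelled_correct / len(cand_spans)
    labelled_recall = labelled_correct / len(gold_spans)

    # crossing brackets: candidate spans that overlap a gold span without nesting
    def crosses(a, b):
        return (a[0] < b[0] < a[1] < b[1]) or (b[0] < a[0] < b[1] < a[1])
    crossings = sum(1 for c in cand_spans if any(crosses(c, g) for g in gold_spans))

    return precision, recall, labelled_precision, labelled_recall, crossings
```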
Figure 5: Number of crossings per sentence as a function of sentence length for Wall Street Journal experiments.

Figure 6: Percentage of sentences with 0, 1, and 2 crossings as a function of sentence length for Wall Street Journal experiments.

Figure 7: Precision and recall as a function of sentence length for Wall Street Journal experiments.

5 Conclusion

Regardless of what techniques are used for parsing disambiguation, one thing is clear: if a particular piece of information is necessary for solving a disambiguation problem, it must be made available to the disambiguation mechanism. The words in the sentence are clearly necessary to make parsing decisions, and in some cases long-distance structural information is also needed. Statistical models for parsing need to consider many more features of a sentence than can be managed by n-gram modeling techniques and many more examples than a human can keep track of. The SPATTER parser illustrates how large amounts of contextual information can be incorporated into a statistical model for parsing by applying decision-tree learning algorithms to a large annotated corpus.

References

L. R. Bahl, P. F. Brown, P. V. deSouza, and R. L. Mercer. 1989. A tree-based statistical language model for natural language speech recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 36, No. 7, pages 1001-1008.

L. E. Baum. 1972. An inequality and associated maximization technique in statistical estimation of probabilistic functions of Markov processes. Inequalities, Vol. 3, pages 1-8.

E. Black et al. 1991. A procedure for quantitatively comparing the syntactic coverage of English grammars. Proceedings of the February 1991 DARPA Speech and Natural Language Workshop, pages 306-311.

E. Black, R. Garside, and G. Leech. 1993. Statistically-driven computer grammars of English: the IBM/Lancaster approach. Rodopi, Atlanta, Georgia.

L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. 1984. Classification and Regression Trees. Wadsworth and Brooks, Pacific Grove, California.

P. F. Brown, V. Della Pietra, P. V. deSouza, J. C. Lai, and R. L. Mercer. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4), pages 467-479.

D. M. Magerman. 1994. Natural Language Parsing as Statistical Pattern Recognition. Doctoral dissertation. Stanford University, Stanford, California.
Evaluation of Semantic Clusters Rajeev Agarwal Mississippi State University Mississippi State, MS 39762 USA [email protected] Abstract Semantic clusters of a domain form an important feature that can be useful for performing syntactic and semantic disam- biguation. Several attempts have been made to extract the semantic clusters of a domain by probabilistic or taxonomic tech- niques. However, not much progress has been made in evaluating the obtained se- mantic clusters. This paper focuses on an evaluation mechanism that can be used to evaluate semantic clusters produced by a system against those provided by human experts. 1 Introduction 1 Most natural language processing (NLP) systems are designed to work on certain specific domains and porting them to other domains is often a very time- consuming and human-intenslve process. As the need for applying NLP systems to more and var- ied domains grows, it becomes increasingly impor- tant that some techniques be used to make these systems more portable. Several researchers (Lang and Hirschman, 1988; Rau et al., 1989; Pustejovsky, 1992; Grishman and Sterling, 1993; Basili et al., 1994), either directly or indirectly, have addressed issues that assist in making it easier to move an NLP system from one domain to another. One of the reasons for the lack of portability is the need for domain-specific semantic features that such systems often use for lexical, syntactic, and semantic disam- biguation. One such feature is the knowledge of the semantic clusters in a domain. Since semantic classes are often domain-specific, their automatic acquisition is not trivial. Such classes can be derived either by distributional means or from existing taxonomies, knowledge bases, dic- tionaries, thesauruses, and so on. A prime exam- ple of the latter is WordNet which has been used to 1The author is currently at Texas Instruments and all inquiries should be addressed to [email protected]. provide such semantic classes (Resnik, 1993; Basili et al., 1994) to assist in text understanding. Our efforts to obtain such semantic clusters with limited human intervention have been described elsewhere (Agarwal, 1995). This paper concentrates on the aspect of evahiating the obtained clusters against classes provided by human experts. 2 The Need Although there has been a lot of work done in ex- tracting semantic classes of a given domain, rela- tively little attention has been paid to the task of evaluating the generated classes. In the absence of an evaluation scheme, the only way to decide if the semantic classes produced by a system are "reason- able" or not is by having an expert analyze them by inspection. Such informal evaluations make it very difficult to compare one set of classes against an- other and are also not very reliable estimates of the quality of a set of classes. It is clear that a formal evaluation scheme would be of great help. Hatzivassiloglou and McKeown (1993) duster ad- jectives into partitions and present an interest- ing evaluation to compare the generated adjective classes against those provided by an expert. Their evaluation scheme bases the comparison between two classes on the presence or absence of pairs of words in them. Their approach involves filling in a YES-NO contingency table based on whether a pair of words (adjectives, in their case) is classified in the same class by the human expert and by the system. This method works very well for partitions. 
How- ever, if it is used to evaluate sets of classes where the classes may be potentiaily overlapping, their tech- nique yields a weaker measure since the same word pair could possibly be present in more than one class. An ideal scheme used to evaluate semantic classes should be able to handle overlapping classes (as o1>. posed to partitions) as well as hierarchies. The tech- nique proposed by Hatzivassiloglou and McKeown does not do a good job of evaluating either of these. In this paper, we present an evaluation methodology which makes it possible to properly evaluate over- 284 Table 1: Two Example Classes Class A Class B (System) (Expert) cat dog stomach pig COW hair cattle goat horse COW cat pig lamb dog sheep mare cattle swine goat lapping classes. Our scheme is also capable of in- corporating hierarchies provided by an expert into the evaluation, but still lacks the ability to compare hierarchies against hierarchies. In the discussion that follows, the word "cluster- ing" is used to refer to the set of classes that may be either provided by an expert or generated by the system, and the word "class" is used to refer to a single class in the clustering. 3 Evaluation Approach As mentioned above, we intend to be able to com- pare a clustering generated by a system against one provided by an expert. Since a word can occur in more than one class, it is important to find some kind of mapping between the classes generated by the system and the classes given by the expert. Such a mapping tells us which class in the system's clus- tering maps to which one in the expert's clustering, and an overall comparison of the clusterings is based on the comparison of the mutually mapping classes. Before we delve deeper into the evaluation pro- cess, we must decide on some measure of "closeness" between a pair of classes. We have adopted the F-measure (Hatzivassiloglou and McKeown, 1993; Chincor, 1992). In our computation of the F- measure, we construct a contingency table based on the presence or absence of individual elements in the two classes being compared, as opposed to basing it on pairs of words. For example, suppose that Class A is generated by the system and Class B is provided by an expert (as shown in Table 1). The contingency table obtained for this pair of classes is shown in Table 2. The three main steps in the evaluation process are the acquisition of "correct" classes from domain ex- perts, mapping the experts' clustering to that gener- ated by the system, and generating an overall mea- sure that represents the system's performance when compared against the expert. Table 2: Contingency Table for Classes A and B System- NO 5 0 3.1 Knowledge Acquisition from Experts The objective of this step is to get human experts to undertake the same task that the system performs, i.e., classifying a set of words into several potentially overlapping classes. The classes produced by a sys- tem are later compared to these "correct" classifica- tions provided by the expert. 3.2 Mapping Algorithm In order to determine pairwise mappings between the clustering generated by the system and one pro- vided by an expert, a table of F-measures is con- structed, with a row for each class generated by the system, and a column for every class provided by the expert. Note that since the expert actually provides a hierarchy, there is one column corresponding to every individual class and subclass provided by the expert. This allows the system's classes to map to a class at any level in the expert's hierarchy. 
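As a rough sketch of how the element-based contingency table and the F-measure table described above could be computed for pairs of (system class, expert class), assuming the balanced F-measure; the function names here are invented for illustration and are not taken from the paper:

```python
def pair_f_measure(system_class, expert_class):
    """F-measure between two classes, each given as a set of words.

    yes_yes: words in both classes; yes_no: only in the system class;
    no_yes: only in the expert class (the cells of the contingency table).
    """
    system_class, expert_class = set(system_class), set(expert_class)
    yes_yes = len(system_class & expert_class)
    yes_no = len(system_class - expert_class)
    no_yes = len(expert_class - system_class)
    if yes_yes == 0:
        return 0.0
    precision = yes_yes / (yes_yes + yes_no)
    recall = yes_yes / (yes_yes + no_yes)
    return 2 * precision * recall / (precision + recall)

def f_measure_table(system_classes, expert_classes):
    # one row per system class, one column per expert class or subclass
    return {(i, j): pair_f_measure(sys_c, exp_c)
            for i, sys_c in enumerate(system_classes)
            for j, exp_c in enumerate(expert_classes)}
```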
This table gives an estimate of how well each class gen- erated by the system maps to the ones provided by the expert. The algorithm used to compute the actual map- pings from the F-measure table is briefly described here. In each row of the table, mark the cell with the highest F-measure as a potential mapping. In gen- eral, conflicts arise when more than one class gener- ated by the system maps to a given class provided by the expert. In other words, whenever a column in the table has more than one cell marked as a po- tential mapping, a conflict is said to exist. To re- solve a conflict, one of the system classes must be re-mapped. The heuristic used here is that the class for which such a re-mapping results in minimal loss of F-measure is the one that must be re-mapped. Several such conflicts may exist, and re-mapping may lead to further conflicts. The mapping algo- rithm iteratively searches for conflicts and resolves them till no more conflicts exist. Note also that a system class may map to an expert class only if the F-measure between them exceeds a certain threshold value. This ensures that a certain degree of similar- ity must exist between two classes for them to map to each other. We have used a threshold value of 0.20. This value is obtained purely by observations made on the F-measures between different pairs of classes with varying degrees of similarity. 285 Table 3: Noun Clustering Results Expert System Precision I Recall I F-measure Expert A 75.38 29.09 0.42 Expert B 77.08 25.23 0.38 Expert C 73.85 37.88 0.50 3.3 Computation of the Overall F-measure Once the mappings have been determined between the clusterings of the system and the expert, the next step is to compute the F-measure between the two clusterings. Rather than populating separate con- tingency tables for every pair of classes, construct a single contingency table. For every pairwise map- ping found for the classes in these two clusterings, populate the YES-YES, YES-NO, and NO-YES cells of the contingency table appropriately (see Table 2). Once all the mapped classes have been incorporated into this contingency table, add every element of all unmapped classes generated by the system to the YES-NO cell and every element of all unmapped classes provided by the expert to the NO-YES cell of this table. Once all classes in the two clusterings have been accounted for, calculate the precision, re- call, and F-measure as explained in (Hatzivassiloglou and McKeown, 1993). 4 Results and Discussion In one of our experiments, the 400 most frequent nouns in the Merck Veterinary Manual were clus- tered. Three experts were used to evaluate the gen- erated noun clusters. Some examples of the classes that were generated by the system for the veteri- nary medicine domain are PROBLEM, TREAT- MENT, ORGAN, DIET, ANIMAL, MEASURE- MENT, PROCESS, and so on. The results obtained by comparing these noun classes to the clusterings provided by three different experts are shown in Ta- ble 3. We have also experimented with the use of WordNet to improve the classes obtained by a dis- tributional technique. Some initial experiments have shown that WordNet consistently improves the F- measures for these noun classes by about 0.05 on an average. Details of these experiments can be found in (Agarwal, 1995). It is our belief that the evaluation scheme pre- sented in this paper is useful for comparing different clusterings produced by the same system or those produced by different systems against one provided by an expert. 
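The mapping procedure described above might be sketched as follows. This is an illustrative reconstruction, not the authors' code; details such as tie-breaking and the exact treatment of exhausted preference lists are assumptions.

```python
def map_classes(f_table, n_system, n_expert, threshold=0.20):
    """Greedy mapping with iterative conflict resolution (sketch)."""
    prefs, pos = {}, {}
    for i in range(n_system):
        ranked = sorted(range(n_expert), key=lambda j: f_table[(i, j)], reverse=True)
        prefs[i] = [j for j in ranked if f_table[(i, j)] > threshold]
        pos[i] = 0

    def current(i):
        return prefs[i][pos[i]] if pos[i] < len(prefs[i]) else None

    def loss(i):
        # F-measure lost if row i is forced to move to its next-best column
        here = f_table[(i, current(i))]
        nxt = f_table[(i, prefs[i][pos[i] + 1])] if pos[i] + 1 < len(prefs[i]) else 0.0
        return here - nxt

    while True:
        claims = {}
        for i in range(n_system):
            j = current(i)
            if j is not None:
                claims.setdefault(j, []).append(i)
        conflicts = {j: rows for j, rows in claims.items() if len(rows) > 1}
        if not conflicts:
            return {i: current(i) for i in range(n_system)}
        _, rows = next(iter(conflicts.items()))
        mover = min(rows, key=loss)   # re-map the class that loses the least F-measure
        pos[mover] += 1               # move it to its next-best expert class
```

The overall score then comes from pooling the mapped pairs into a single contingency table, adding elements of unmapped system classes to the YES-NO cell and elements of unmapped expert classes to the NO-YES cell, exactly as described above.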
The resulting precision, recall, and F-measure should not be treated as a kind of "gold standard" to represent the quality of these classes in some absolute sense. It has been our experience that, as semantic clustering is a highly subjective task, evaluating a given clustering against different experts may yield numbers that vary considerably. However, when different clusterings generated by a system are compared against the same expert (or the same set of experts), such relative comparisons are useful. The evaluation scheme presented here still suffers from one major limitation -- it is not capable of evaluating a hierarchy generated by a system against one provided by an expert. Such evaluations get complicated because of the restriction of one-to-one mapping. More work definitely needs to be done in this area. References Rajeev Agarwal. 1995. Semantic feature eztraction from technical tezts with limited human interven- tion. Ph.D. thesis, Mississippi State University, May. Roberto Basili, Maria Pazienza, and Paola Velardi. 1994. The noisy channel and the braying donkey. In Proceedings of the ACL Balancing Act Work- shop, pages 21-28, Las Cruces, New Mexico, July. Nancy Chincor. 1992. MUC-4 evaluation metrics. In Proceedings of the Fourth Message Understand- ing Conference (MUC-4). Ralph Grishman and John Sterling. 1993. Smooth- ing of automatically generated selectional con- straints. In Proceedings of the ARPA Workshop on Human Language Technology. Morgan Kauf- mann Publishers, Inc., March. Vasileios Hatzivassiloglou and Kathleen R. McKe- own. 1993. Towards the automatic identifica- tion of adjectival scales: Clustering adjectives ac- cording to meaning. In Proceedings of the 31st Annual Meeting of the Association for Computa- tional Linguistics, pages 172-82. Francois-Michel Lang and Lynette Hirschman. 1988. Improved portability and parsing through interac- tive acquisition of semantic information. In Pro- ceedings of the Second Conference on Applied Nat- ural Language Processing, pages 49-57, February. James Pustejovsky. 1992. The acquisition of lex- ical semantic knowledge from large corpora. In Proceedings of the Speech and Natural Language Workshop, pages 243--48, Harriman, N.Y., Febru- ary. Lisa Rau, Paul Jacobs, and Uri Zernik. 1989. In- formation extraction and text summarization us- ing linguistic knowledge acquisition. Information Processing and Management, 25(4):419-28. Philip Resnik. 1993. Selection and Information: A Class-Based Approach to Lezical Relationships. Ph.D. thesis, University of Pennsylvania, Decem- ber. (Institute for Research in Cognitive Science report IRCS-93-42). 286 | 1995 | 38 |
Tagset P.eduction Without Information Loss Thorsten Brants Universit~t des Saarlandes Computerlinguistik D-66041 Saarbrficken, Germany thorst en~coli, uni- sb. de Abstract A technique for reducing a tagset used for n-gram part-of-speech disambiguation is introduced and evaluated in an experi- ment. The technique ensures that all in- formation that is provided by the original tagset can be restored from the reduced one. This is crucial, since we are intere- sted in the linguistically motivated tags for part-of-speech disambiguation. The redu- ced tagset needs fewer parameters for its statistical model and allows more accurate parameter estimation. Additionally, there is a slight but not significant improvement of tagging accuracy. 1 Motivation Statistical part-of-speech disambiguation can be ef- ficiently done with n-gram models (Church, 1988; Cutting et al., 1992). These models are equivalent to Hidden Markov Models (HMMs) (Rabiner, 1989) of order n - 1. The states represent parts of speech (categories, tags), there is exactly one state for each category, and each state outputs words of a particu- lar category. The transition and output probabilities of the HMM are derived from smoothed frequency counts in a text corpus. Generally, the categories for part-of-speech tag- ging are linguistically motivated and do not reflect the probability distributions or co-occurrence pro- babilities of words belonging to that category. It is an implicit assumption for statistical part-of-speech tagging that words belonging to the same category have similar probability distributions. But this as- sumption does not hold in many of the cases. Take for example the word cliff which could be a proper (NP) 1 or a common noun (NN) (ignoring ca- pitalization of proper nouns for the moment). The two previous words are a determiner (AT) and an 1All tag names used in this paper are inspired by those used for the LOB Corpus (Garside et al., 1987). adjective (J J). The probability of cliff being a com- mon noun is the product of the respective contextual and lexical probabilities p(N N ]AT, JJ) • p(c//fflN N), regardless of other information provided by the ac- tual words (a sheer cliff vs. the wise Cliff). Obvi- ously, information useful for probability estimation is not encoded in the tagset. On the other hand, in some cases information not needed for probability estimation is encoded in the tagset. The distributions for comparative and su- perlative forms of adjectives in the Susanne Corpus (Sampson, 1995) are very similar. The number of correct tag assignments is not affected when we com- bine the two categories. However, it does not suffice to assign the combined tag, if we are interested in the distinction between comparative and superlative form for further processing. We have to ensure that the original (interesting) tag can be restored. There are two contradicting requirements. On the one hand, more tags mean that there is more infor- mation about a word at hand, on the other hand, the more tags, the severer the sparse-data problem is and the larger the corpora that are needed for training. This paper presents a way to modify a given tag- set, such that categories with similar distributions in a corpus are combined without losing information provided by the original tagset and without losing accuracy. 2 Clustering of Tags The aim of the presented method is to reduce a tag- set as much as possible by combining (clustering) two or more tags without losing information and wi- thout losing accuracy. 
The fewer tags we have, the less parameters have to be estimated and stored, and the less severe is the sparse data problem. Incoming text will be disambiguated with the new reduced tagset, but we ensure that the original tag is still uniquely ide:.ltified by the new tag. The basic idea is to exploit the fact that some of the categories have a very similar frequency distri- bution in a corpus. If we combine categories with 287 similar distribution characteristics, there should be only a small change in the tagging result. The main change is that single tags are replaced by a cluster of tags, from which the original has to be identified. First experiments with tag clustering showed that, even for fully automatic identification of the original tag, tagging accuracy slightly increased when the re- duced tagset was used. This might be a result of ha- ving more occurrences per tag for a smaller tagset, and probability estimates are preciser. 2.1 Unique Identification of Original Tags A crucial property of the reduced tagset is that the original tag information can be restored from the new tag, since this is the information we are intere- sted in. The property can be ensured if we place a constraint on the clustering of tags. Let )'V be the set of words, C the set of clusters (i.e. the reduced tagset), and 7" the original tagset. To restore the original tag from a combined tag (clu- ster), we need a unique function foria : W x C ~ 7-, (1) To ensure that there is such a unique function, we prohibit some of the possible combinations. A cluster is allowed if and only if there is no word in the lexicon which can have two or more of the original tags combined in one cluster. Formally, seeing tags as sets of words and clusters as sets of tags: VcEC, tl,t2Ec, tl~t2,wE}/Y: wEtl::~w~t2 (2) If this condition holds, then for all words w tagged with a cluster e, exactly one tag two fulfills w E twe A t~.e E c, yielding fo.,(w, c) = t o. So, the original tag can be restored any time and no information from the original tagset is lost. Example: Assume that no word in the lexicon can be both comparative (JJ R) and superlative adjective (JJT). The categories are combined to {JJR,JJT}. When processing a text, the word easier is tagged as {JJR,JJT}. Since the lexicon states that easier can be of category J JR but not of category JJT, the original tag must be J JR. 2.2 Criteria For Combining Tags The are several criteria that can determine the qua- lity of a particular clustering. 1. Compare the trigram probabilities p(BIXi , A), P(BIA, Xi), and p(XilA, B), i = 1, 2. Combine two tags X1 and X2, if these probabilities coin- cide to a certain extent. 2. Maximize the probability that the training cor- pus is generated by the HMM which is described by the trigram probabilities. 3. Maximize the tagging accuracy for a training corpus. Criterion (1) establishes the theoretical basis, while criteria (2) and (3) immediately show the be- nefit of a particular combination. A measure of si- milarity for (1) is currently under investigation. We chose (3) for our first experiments, since it was the easiest one to implement. The only additional ef- fort is a separate, previously unused part of the trai- ning corpus for this purpose, the clustering part. We combine those tags into clusters which give the best results for tagging of the clustering part. 2.3 The Algorithm The total number of potential clusterings grows ex- ponential with the size of the tagset. 
Since we are interested in the reduction of large tagsets, a full search regarding all potential clusterings is not fea- sible. We compute the local maximum which can be found in polynomial time with a best-first search. We use a slight modification of the algorithm used by (Stolcke and Omohundro, 1994) for merging HMMs. Our task is very similar to theirs. Stolcke and Omohundro start with a first order tIMM where every state represents a single occurrence of a word in a corpus, and the goal is to maximize the a po- steriori probability of the model. We start with a second order HMM (since we use trigrams) where each state represents a part of speech, and our goal is to maximize the tagging accuracy for a corpus. The clustering algorithm works as follows: 1. Compute tagging accuracy for the clustering part with the original tagset. 2. Loop: (a) Compute a set of candidate clusters (obey- ing constraint (2) mentioned in section 2.1), each consisting of two tags from the previous step. (b) For each candidate cluster build the resul- ting tagset and compute tagging accuracy for that tagset. (c) If tagging accuracy decreases for all combi- nations of tags, break from the loop. (d) Add the cluster which maximized the tag- ging accuracy to the tagset and remove the two tags previously used. 3. Output the resulting tagset. 2.4 Application of Tag Clustering Two standard trigram tagging procedures were performed as the baseline. Then clustering was per- formed on the same data and tagging was done with the reduced tagset. The reduced tagset was only in- ternally used, the output of the tagger consisted of the original tagset for all experiments. The Susanne Corpus has about 157,000 words and uses 424 tags (counting tags with indices denoting 288 Table 1: Tagging results for the test parts in the clustering experiments. Exp. 1 and 2 are used as the baseline. Training Clustering Testing Result (known words) 1. parts A and B - part C 93.7% correct 2. parts A and C - part B 94.6% correct 3. part A part B part C 93.9% correct 4. part A part C part B 94.7% correct multi-word lexemes as separate tags). The tags are based on the LOB tagset (Garside et al., 1987). Three parts are taken from the corpus. Part A consists of about 127,000 words, part B of about 10,000 words, and part C of about 10,000 words. The rest of the corpus, about 10,000 words, is not used for this experiment. All parts are mutually disjunct. First, part A and B were used for training, and part C for testing. Then, part A and C were used for training, and part B for testing. About 6% of the words in the test parts did not occur in the training parts, i.e. they are unknown. For the moment we only care about the known words and not about the unknown words (this is treated as a separate pro- blem). Table 1 shows the tagging results for known words. Clustering was applied in the next steps. In the third experiment, part A was used for trigram trai- ning, part B for clustering and part C for testing. In the fourth experiment, part A was used for trigram training, part C for clustering and part B for testing. The baseline experiments used the clustering part for the normal training procedure to ensure that bet- ter performance in the clustering experiments is not due to information provided by the additional part. Clustering reduced the tagset by 33 (third exp.), and 31 (fourth exp.) tags. The tagging results for the known words are shown in table 1. The improvement in the tagging result is too small to be significant. 
However, the tagset is reduced, thus also reducing the number of parameters without losing accuracy. Experiments with larger texts and more permutations will be performed to get precise results for the improvement. 3 Conclusions We have shown a method for reducing a tagset used for part-of-speech tagging without losing informa- tion given by the original tagset. In a first expe- riment, we were able to reduce a large tagset and needed fewer parameters for the n-gram model. Ad- ditionally, tagging accuracy slightly increased, but the improvement was not significant. Further inve- stigation will focus on criteria for cluster selection. Can we use a similarity measure of probability dis- tributions to identify optimal clusters? How far can we reduce the tagset without losing accuracy? References Kenneth Ward Church. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In Proc. Second Conference on Applied Na- tural Language Processing, pages 136-143, Austin, Texas, USA. Doug Cutting, Julian Kupiec, Jan Pedersen, and Pe- nelope Sibun. 1992. A practical part-of-speech tagger. In Proceedings of the 3rd Conference on Applied Natural Language Processing (ACL), pa- ges 133-140. R. G. Garside, G. N. Leech, and G. R. Sampson (eds.). 1987. The Computationai Analysis of Eng- lish. Longman. L. R. Rabiner. 1989. A tutorial on hidden markov models and selected applications in speech reco- gnition. In Proceedings of the IEEE, volume 77(2), pages 257-285. Geoffrey Sampson. 1995. English for the Computer. Oxford University Press, Oxford. Andreas Stolcke and Stephen M. Omohundro. 1994. Best-first model merging for hidden markov mo- del induction. Technical Report TR-94-003, In- ternational Computer Science Institute, Berkeley, California, USA. 289 | 1995 | 39 |
A Morphographemic Model for Error Correction in Nonconcatenative Strings Tanya Bowden * and George Anton Kiraz t University of Cambridge Computer Laboratory Pembroke Street, Cambridge CB2 3QG {Tanya. Bowden, George.Kiraz}@cl. cam. ac. uk http://www, cl. cam. ac .uk/users/{tgblO00, gkl05} Abstract • This paper introduces a spelling correction system which integrates seamlessly with morphological analysis using a multi-tape formalism. Handling of various Semitic er- ror problems is illustrated, with reference to Arabic and Syriac examples. The model handles errors vocalisation, diacritics, pho- netic syncopation and morphographemic idiosyncrasies, in addition to Damerau er- rors. A complementary correction strategy for morphologically sound but morphosyn- tactically ill-formed words is outlined. 1 Introduction Semitic is known amongst computational linguists, in particular computational morphologists, for its highly inflexional morphology. Its root-and-pattern phenomenon not only poses difficulties for a mor- phological system, but also makes error detection a difficult task. This paper aims at presenting a morphographemic model which can cope with both issues. The following convention has been adopted. Mor- phemes are represented in braces, { }, surface (phonological) forms in solidi, //, and orthographic strings in acute brackets, (). In examples of gram- mars, variables begin with a capital letter. Cs de- note consonants, Vs denote vowels and a bar denotes complement. An asterisk, *, indicates ill-formed strings. The difficulties in morphological analysis and er- ror detection in Semitic arise from the following facts: * Supported by a British Telecom Scholarship, ad- ministered by the Cambridge Commonwealth Trust in conjunction with the Foreign sad Commonwealth Office. t Supported by a Benefactor Studentship from St John's College. Non-Linearity A Semitic stem consists of a root and a vowel melody, arranged accord- ing to a canonical pattern. For example, Arabic/kuttib/ 'caused to write - perfect pas- sive' is composed from the root morpheme {ktb} 'notion of writing' and the vowel melody morpheme {ul} 'perfect passive'; the two are arranged according to the pattern morpheme {CVCCVC} 'causative'. This phenomenon is analysed by (McCarthy, 1981) along the fines of autosegmental phonology (Goldsmith, 1976). The analysis appears in (1). 1 (1) DERIVATION OF /kuttib/ u i I I /kuttib/-- C V C C V C a v i k t b • Vocalisation Orthographically, Semitic texts appear in three forms: (i) consonantal texts do not incorporate any short vowels but ma- ttes lectionis, 2 e.g. Arabic (ktb) for /katab/, /kutib/and/kutub/, but (kaatb) for/kaatab/ and /kaatib/; (ii) partially voealised texts incorporate some short vowels to clarify am- biguity, e.g. (kutb) for /kutib/ to distinguish it from /katab/; and (iii) voealised texts in- corporate full vocalisation, e.g. (tadahra]) for /tada ay. 1We have used the CV model to describe pattern mor- phemes instead of prosodic terms because of its familiar- ity in the computational linguistics literature. For the use of moraic sad affLxational models in handling Arabic morphology computationally, see (Kiraz,). 2'Mothers of reading', these are consonantal letters which play the role of long vowels, sad are represented in the pattern morpheme by VV (e.g. /aa/, /uu/, /ii/). Mattes lectionis cannot be omitted from the or- thographic string. 
24 • Vowel and Diacritic Shifts Semitic lan- guages employ a large number of diacritics to represent enter alia short vowels, doubled let- ters, and nunation. 3 Most editors allow the user to enter such diacritics above and below letters. To speed data entry, the user usually enters the base characters (say a paragraph) and then goes back and enters the diacritics. A common mis- take is to place the cursor one extra position to the left when entering diacritics. This re- sults in the vowels being shifted one position, e.g. *(wkatubi) instead of (wakutib). • Vocalisms The quality of the perfect and im- perfect vowels of the basic forms of the Semitic verbs are idiosyncratic. For example, the Syr- iac root {ktb} takes the perfect vowel a, e.g. /ktab/, while the root {nht} takes the vowel e, e.g. /nhet/. It is common among learners to make mistakes such as */kteb/or */nhat/. • Phonetic Syncopation A consonantal seg- ment may be omitted from the phonetic surface form, but maintained in the orthographic sur- face from. For example, Syriac (md/nt~)'city' is pronounced/mdit~/. * Idiosyncrasies The application of a mor- phographemic rule may have constraints as on which lexical morphemes it may or may not ap- ply. For example, the glottal stop [~] at the end of a stem may become [w] when followed by the relative adjective morpheme {iyy}, as in Arabic /samaaP+iyy/-+/samaawiyy/'heavenly', but /hawaaP+iyy/-~/hawaa~iyy/'of air'. * Morphosyntactic Issues In broken plurals, diminutives and deverbal nouns, the user may enter a morphologically sound, but morphosyn- tactically ill-formed word. We shall discuss this in more detail in section 4. 4 To the above, one adds language-independent issues in spell checking such as the four Damerau trans- formations: omission, insertion, transposition and substitution (Damerau, 1964). 2 A Morphographemic Model This section presents a morphographemic model which handles error detection in non-linear strings. 3When indefinite, nouns and adjectives end in a pho- netic In] which is represented in the orthographic form by special diacritics. 4For other issues with respect to syntactic dependen- cies, see (Abduh, 1990). Subsection 2.1 presents the formalism used, and sub- section 2.2 describes the model. 2.1 The Formalism In order to handle the non-linear phenomenon of Arabic, our model adopts the two-level formalism presented by (Pulman and Hepple, 1993), with the multi tape extensions in (Kiraz, 1994). Their for- realism appears in (2). (2) TwO-LEVEL FORMALISM LLC - LEX RLC LSC - SURF - RSC where LLC LEX RLC LSC SURF RSC = left lexical context = lexical form = right lexical context = left surface context = surface form = right surface context The special symbol * is a wildcard matching any con- text, with no length restrictions. The operator caters for obligatory rules. A lexical string maps to a surface string if[ they can be partitioned into pairs of lexical-surface subsequences, where each pair is licenced by a =~ or ~ rule, and no partition violates a ¢~ rule. In the multi-tape version, lexical expres- sions (i.e. LLC, LEX and RLC) are n-tuple of regu- lax expressions of the form (xl, x2, ..., xn): the/th expression refers to symbols on the ith tape; a nill slot is indicated by ~.5 Another extension is giving LLC the ability to contain ellipsis, ... , which in- dicates the (optional) omission from LLC of tuples, provided that the tuples to the left of... are the first to appear on the left of LEx. 
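One possible way to hold such rules as data is sketched below. This is purely illustrative; the actual system interprets rules written in its own notation, and the field names and the example rule are assumptions made for exposition.

```python
from dataclasses import dataclass
from typing import Sequence, Tuple

# A lexical expression is a tuple of regular expressions, one per lexical tape
# (in the Arabic grammar later in the paper: pattern, root, vocalism).
LexExpr = Tuple[str, ...]

@dataclass
class TwoLevelRule:
    llc: Sequence[LexExpr]   # left lexical context (may contain an ellipsis marker)
    lex: LexExpr             # lexical form
    rlc: Sequence[LexExpr]   # right lexical context
    lsc: str                 # left surface context
    surf: str                # surface form
    rsc: str                 # right surface context
    operator: str            # "=>", "<=" or "<=>" (obligatory)

# Hypothetical example: a pattern symbol plus a root consonant, with no transition
# on the vocalism tape, surfacing as that consonant in any context.
consonant_rule = TwoLevelRule(
    llc=[("*",)], lex=("Pc", "C", ""), rlc=[("*",)],
    lsc="*", surf="C", rsc="*", operator="=>",
)
```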
In our morphographemic model, we add a similar formalism for expressing error rules (3). (3) ERROR FORMALISM ErrSurf =~ Surf { PLC- PRC } where PLC = partition left context (has been done) PRC = partition right context (yet to be done) 5Our implementation interprets rules directly; hence, we allow ~. If the rules were to be compiled into au- tomata, a genuine symbol, e.g. 0, must be used. For the compilation of our formalism into automata, see (Kiraz and Grimley-Evans, 1995). 25 The error rules capture the correspondence be- tween the error surface and the correct surface, given the surrounding partition into surface and lexical contexts. They happily utilise the multi-tape format and integrate seamlessly into morphological analy- sis. PLC and PRC above are the left and right con- texts of both the lexical and (correct) surface levels. Only the =~ is used (error is not obligatory). 2.2 The Model 2.2.1 Finding the error Morphological analysis is first called with the as- sumption that the word is free of errors. If this fails, analysis is attempted again without the 'no error' re- striction. The error rules are then considered when ordinary morphological rules fail. If no error rules succeed, or lead to a successful partition of the word, analysis backtracks to try the error rules at succes- sively earlier points in the word. For purposes of simplicity and because oh the whole is it likely that words will contain no more than one error (Damerau, 1964; Pollock and Zamora, 1983), normal 'no error' analysis usually resumes if an error rule succeeds. The exception occurs with a vowel shift error (§3.2.1). If this error rule succeeds, an expectation of further shifted vowels is set up, but no other error rule is allowed in the subsequent partitions. For this reason rules are marked as to whether they can occur more than once. 2.2.2 Suggesting a correction Once an error rule is selected, the corrected sur- face is substituted for the error surface, and nor- mai analysis continues - at the same position. The substituted surface may be in the form of a vari- able, which is then ground by the normal analysis sequence of lexical matching over the lexicon tree. In this way only lexical words a~e considered, as the variable letter can only he instantiated to letters branching out from the current position on the lexi- con tree. Normal prolog backtracking to explore al- ternative rules/lexical branches applies throughout. 3 Error Checking in Arabic We demonstrate our model on the Arabic verbal stems shown in (4) (McCarthy, 1981). Verbs are classified according to their measure (M): there are 15 trilateral measures and 4 quadrilateral ones. Moving horizontally across the table, one notices a change in vowel melody (active {a}, passive {ui}); everything else remains invariant. Moving vertically, a change in canonical pattern occurs; everything else remains invariant. Subsection 3.1 presents a simple two-level gram- mar which describes the above data. Subsection 3.2 presents error checking. (4) ARABIC VERBAL STEMS Measure Active Passive 1 katab kutib 2 kattab kuttib 3 kaatab kuutib 4 ~aktab ~uktib 5 takattab tukuttib 6 takaatab tukuutib 7 nkatab nkutib 8 ktatab ktutib 9 ktabab 10 staktab stuktib 11 ktaabab 12 ktawtab 13 ktawwab 14 ktanbab 15 ktanbay Q1 dahraj duhrij Q2 tadahraj tuduhrij Q3 dhanraj dhunrij Q4 dl~arjaj dhurjij 3.1 Two-Level Rules The lexicai level maintains three lexieai tapes (Kay, 1987; Kiraz, 1994): pattern tape, root tape and vo- calism tape; each tape scans a lexical tree. 
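The control flow just described, trying an error-free analysis first and then retrying with error rules at successively earlier points while substituting a corrected surface, could be sketched as follows. This is an illustrative reconstruction rather than the Prolog implementation; analyse_no_error and rule.corrections are assumed helpers, and the special handling that lets the vowel-shift rule reapply is omitted.

```python
def analyse(word, grammar, error_rules, lexicon):
    """Two-pass morphological analysis with error correction (sketch)."""
    result = analyse_no_error(word, grammar, lexicon)   # assumed helper: strict analysis
    if result:
        return result
    # retry, allowing one error rule, backtracking to successively earlier positions
    for position in reversed(range(len(word))):
        for rule in error_rules:
            for corrected in rule.corrections(word, position, lexicon):
                # a corrected surface may contain a variable; it is only instantiated
                # to letters branching from the current node of the lexicon tree,
                # so only lexical words are ever considered
                result = analyse_no_error(corrected, grammar, lexicon)
                if result:
                    return result
    return None
```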
Exam- pies of pattern morphemes are: (ClVlC2VlC3} (M 1), {ClC2VlnC3v2c4} (M Q3). The root morphemes are {ktb} and {db_rj}, and the vocalism morphemes are {a} (active) and {ui} (passive). The following two-level grammar handles the above data. Each lexical expression is a triple; lex- ical expressions with one symbol assume e on the remaining positions. (5) GENERAL RULES * X - * ::~ R0: , _ X - * * - (Pc, C,~) - * =~ RI: . _ C - * * - (P~,~,V) * =~ R2: , _ V * where Pc E {Cl, c2, c3, c4}, P~ E {vl, v2}, 26 (5) gives three general rules: R0 allows any char- acter on the first lexical tape to surface, e.g. in- fixes, prefixes and suffixes. R1 states that any P E {Cl, c2, c3, c4} on the first (pattern) tape and C on the second (root) tape with no transition on the third (vocalism) tape corresponds to C on the sur- face tape; this rule sanctions consonants. Similarly, tL2 states that any P E {Vl, v2} on the pattern tape and V on vocalism tape with no transition on the root tape corresponds to V on the surface tape; this rule sanctions vowels. (6) BOUNDARY RULES R3: (B,e,~) - + - * =~ • - 6 - * R4: (B,*,*) (+,+,+) - * ==~ where B ~ + (6) gives two boundary rules: R3 is used for non- stem morphemes, e.g. prefixes and suffixes. R4 ap- plies to stem morphemes reading three boundary symbols simultaneously; this marks the end of a stem. Notice that LLC ensures that the right bound- ary rule is invoked at the right time. Before embarking on the rest of the rules, an il- lustrated example seems in order. The derivation of/dhunrija/(M Q5, passive), from the three mor- phemes {ClC2VlnCsv2c4} , {dhrj} and {ui}, and the suffix {a} '3rd person' is illustrated in (7). (7) DERIVATION OF M Q3 + {a} u[ i [ + vocalisrn tape c2 vxlnlc3 v21c4 a[+ pattern tape 1120121403 IdlhlulnlrlilJl lal Isurfacetape The numbers between the surface tape and the lexical tapes indicate the rules which sanction the moves. (s) SPREADING RULES R5: (P1, C, s) .-. P * • C * =:~ R6: (Vl, 6, V) .... Vl " * • V - * =:~ where P1 e {c2, c3, c4} Resuming the description of the grammar, (8) presents spreading rules. Notice the use of ellipsis to indicate that there can be tuples separating LEX and LLC, as far as the tuples in LLC are the nearest ones to LEX. R5 sanctions the spreading (and gem- ination) of consonants. R6 sanctions the spreading of the first vowel. Spreading examples appear in (9). (9) DERIVATION OF M 1- M 3 a. /katab/= a[ +]VT Cl vile2 vllc3 + PT 121614 Ik]a[t[a]b[ IST a I +]VT b. /kattab/ = cx VllC2 c21vllc3 + PT 1215614 [klaltltlalb [ ]ST k t b RT c. /kaatab/= cl vl[vl[c2 v1[c3 PT 1261614 [k[ala[t[alb[ [ST The following rules allow for the different possible orthographic vocalisations in Semitic texts: R7 (V, - (v, (V, e, * . g * R8 (Pcl, CI, e) (P, e, V) (Pc2, C2, e) =~ R9 A (vl,e,e) p =~ where A = (V1,6,V).- "(Pc1,Cl,e) and p = (Pc2,C2,e). R7 and R8 allow the optional deletion of short vowels in non-stem and stem morphemes, respec- tively; note that the lexical contexts make sure that long vowels are not deleted. R9 allows the optional deletion of a short vowel what is the cause of spread- ing. For example the rules sanction both /katab/ (M 1, active) and /kutib/ (M 1, passive) as inter- pretations of (ktb) as showin in (10). 3.2 Error Rules Below are outlined error rules resulting from pecu- liarly Semitic problems. Error rules can also be con- structed in a similar vein to deal with typographical Damerau error (which also take care of the issue of 27 wrong vocalisms). (lO) TwO-LEVEL DERIVATION OF M 1 a. 
kl tJ bl RT /katab/=lctlvllc~lvllc31 PT 181914 Ikl Itl Ibl ]ST ul i] +IVT b. /kutib/= cl v11c2 v11c3 + PT 181914 Ikl Itl Ibl IST 3.2.1 Vowel ShiR A vowel shift error rule will be tried with a parti- tion on a (short) vowel which is not an expected (lex- ical) vowel at that position. Short vowels can legiti- mately be omitted from an orthographic representa- tion - it is this fact which contributes to the problem of vowel shifts. A vowel is considered shifted if the same vowel has been omitted earlier in the word. The rule deletes the vowel from the surface. Hence in the next pass of (normal) analysis, the partition is analysed as a legitimate omission of the expected vowel. This prepares for the next shifted vowel to be treated in exactly the same way as the first. The expectation of this reapplieation is allowed for in reap = y. (11) E0: X =~ e where reap = y ( [om_stmv,e,(*,*,X)] .... * } El: X ::~ e where reap=y { [*,*,(vl,~,X)] ... [om_sprv,6,(*,*,6)] .... * } In the rules above, 'X' is the shifted vowel. It is deleted from the surface. The partition contextual tuples consist of [RULE NAME, SURF, LEX]. The LEX element is a tuple itself of [PATTERN, ROOT, VOCALISM]. In E0 the shifted vowel was analysed earlier as an omitted stem vowel (ore_stray), whereas in E1 it was analysed earlier as an omitted spread vowel (om_sprv). The surface/lexical restrictions in the contexts could be written out in more detail, but both rules make use of the fact that those contexts are analysed by other partitions, which check that they meet the conditions for an omitted stem vowel or omitted spread vowel. For example, *(dhruji) will be interpreted as (duhrij). The 'E0's on the rule number line indicate where the vowel shift rule was applied to replace an error surface vowel with 6. The error surface vowels are written in italics. (12) TwO-LEVEL ANALYSIS OF *(dhruji) I u] i I +IVT I d[ hlr[ j[ +[RT ICllVllC lC3} lv lc, I I+lPT 1 8 1 1E08 1E04 [d] Ihlr]ul [Jlil [ST 3.2.2 Deleted Consonant Problems resulting from phonetic syncopation can be treated as accidental omission of a consonant, e.g. *(mdit~), (mdint~). (13) E2:6 =~ X where cons(X),reap = n {,-,} 3.2.3 Deleted Long Vowel Although the error probably results from a differ- ent fault, a deleted long vowel can be treated in the same way as a deleted consonant. With current tran- scription practice, long vowels are commonly written as two characters - they are possibly better repre- sented as a single, distinct character. (14) E3: e =~ XX where vowel(X),reap = n (,-,} The form *(tuktib) can be interpreted as either (tukuttib) with a deleted consonant (geminated 't') or (tukuutib) with a deleted long vowel. (15) Two-LEVEL ANALYSIS OF *(tuktib) I nil I i, I+iVT k t b+ RT a. M 5 = t ]vllcl v11c2 Ic~1v21c3 + PT 0 2 1 9 1E21 2 1 4 Itlulkl Itl Itlilbl IST b. M6= ul il +IvT k Ivll c1[I t b +1RT t Vl vt c21v2 c3 +1PT 0 2 1E36 6 12 14 Itlulk] lulultli[bl IST 28 3.2.4 Substituted Consonant One type of morphographemic error is that conso- nant substitution may not take place before append- ing a suffix. For example/samaaP/'heaven' + {iyy) 'relative adjective' surfaces as (samaawiyy), where P-~ w in the given context. A common mistake is to write it as *(samma~iyy). (16) F_A: P ::~ w where reap = n { *- /glottal_change, w,(Pc,P,~)] } The 'glottal_change' rule would be a normal mor- phological spelling change rule, incorporating con- textual constraints (e.g. for the morpheme bound- ary) as necessary. 
4 Broken Plurals, Diminutive and Deverbal Nouns This section deals with morphosyntactic errors which are independent of the two-level analy- sis. The data described below was obtained from Daniel Ponsford (personal communication), based on (Wehr, 1971). Recall that a Semitic stems consists of a root mor- pheme and a vocalism morpheme arranged accord- ing to a canonical pattern morpheme. As each root does not occur in all vocalisms and patterns, each lexical entry is associated with a feature structure which indicates inter alia the possible patterns and vocalisms for a particular root. Consider the nomi- nal data in (17). (17) BROKEN PLURALS Singular Plural Forms kadi~ kud~, *kidaa~ kaafil kuffal, *kufalaa~, *kuffaal kaffil kufalaaP sahm *Pashaam, suhuum, Pashum Patterns marked with * are morphologically plausi- ble, but do not occur lexically with the cited nouns. A common mistake is to choose the wrong pattern. In such a case, the two-level model succeeds in finding two-level analyses of the word in question, but fails when parsing the word morphosyntacti- cally: at this stage, the parser is passed a root, vo- calism and pattern whose feature structures do not unify. Usually this feature-clash situation creates the problem of which constituent to give preference to (Langer, 1990). Here the vocalism indicates the in- flection (e.g. broken plural) and the preferance of vocalism pattern for that type of inflection belongs to the root. For example *(kidaa~)would be anal- ysed as root {kd~} with a broken plural vocalism. The pattern type of the vocalism clashes with the broken plural pattern that the root expects. To cor- rect, the morphological analyser is executed in gen- eration mode to generate the broken plural form of {kd~} in the normal way. The same procedure can be applied on diminutive and deverbal nouns. 5 Conclusion The model presented corrects errors resulting from combining nonconcatenative strings as well as more standard morphological or spelling errors. It cov- ers Semitic errors relating to vocalisation, diacrit- ics, phonetic syncopation and morphographemic id- iosyncrasies. Morphosyntactic issues of broken plu- rals, diminutives and deverbal nouns can be handled by a complementary correction strategy which also depends on morphological analysis. Other than the economic factor, an important ad- vantage of combining morphological analysis and er- ror detection/correction is the way the lexical tree associated with the analysis can be used to deter- mine correction possibilities. The morphological analysis proceeds by selecting rules that hypothesise lexical strings for a given surface string. The rules are accepted/rejected by checking that the lexical string(s) can extend along the lexical tree(s) from the current position(s). Variables introduced by er- ror rules into the surface string are then instantiated by associating surface with lexical, and matching lexical strings to the lexicon tree(s). The system is unable to consider correction characters that would be lexical impossibilities. Acknowledgements The authors would like to thank their supervisor Dr Stephen Pulman. Thanks to Daniel Ponsford for providing data on the broken plural and Nuha Adly Atteya for discussing Arabic examples. References Abduh, D. (1990). .suqf~bat tadqfq Pal-PimlSP PSliyyan fi Pal-qarabiyyah [Difficulties in auto- matic spell checking of Arabic]. In Proceedings of the Second Cambridge Conference: Bilingual Computing in Arabic and English. In Arabic. Damerau, F. (1964). 
A technique for computer detection and correction of spelling errors. Comm. of the Assoc. for Computing Machinery, 7(3):171-6.
Goldsmith, J. (1976). Autosegmental Phonology. PhD thesis, MIT. Published as Autosegmental and Metrical Phonology, Oxford 1990.
Kay, M. (1987). Nonconcatenative finite-state morphology. In Proceedings of the Third Conference of the European Chapter of the Association for Computational Linguistics, pages 2-10.
Kiraz, G. Computational analyses of Arabic morphology. Forthcoming in Narayanan, A. and Ditters, E., editors, The Linguistic Computation of Arabic. Intellect. Article 9408002 in the cmp-lg@xxx.lanl.gov archive.
Kiraz, G. (1994). Multi-tape two-level morphology: a case study in Semitic non-linear morphology. In COLING-94: Papers Presented to the 15th International Conference on Computational Linguistics, volume 1, pages 180-6.
Kiraz, G. and Grimley-Evans, E. (1995). Compilation of n:1 two-level rules into finite state automata. Manuscript.
Langer, H. (1990). Syntactic normalization of spontaneous speech. In COLING-90: Papers Presented to the 14th International Conference on Computational Linguistics, pages 180-3.
McCarthy, J. (1981). A prosodic theory of nonconcatenative morphology. Linguistic Inquiry, 12(3):373-418.
Pollock, J. and Zamora, A. (1983). Collection and characterization of spelling errors in scientific and scholarly text. Journal of the American Society for Information Science, 34(1):51-8.
Pulman, S. and Hepple, M. (1993). A feature-based formalism for two-level phonology: a description and implementation. Computer Speech and Language, 7:333-58.
Wehr, H. (1971). A Dictionary of Modern Written Arabic. Spoken Language Services, Ithaca.
| 1995 | 4 |
The Effect of Pitch Accenting on Pronoun Referent Resolution Janet Cahn Massachusetts Institute of Technology Cambridge, MA 02139 USA cahn~media.mit.edu Abstract By strictest interpretation, theories of both centering and intonational meaning fail to predict the existence of pitch accented pronominals. Yet they occur felicitously in spoken discourse. To explain this, I emphasize the dual functions served by pitch accents, as markers of both propo- sitional (semantic/pragmatic) and atten- tional salience. This distinction underlies my proposals about the attentional conse- quences of pitch accents when applied to pronominals, in particular, that while most pitch accents may weaken or reinforce a cospecifier's status as the center of atten- tion, a contrastively stressed pronominal may force a shift, even when contraindi- cated by textual features. Introduction To predict and track the center of attention in dis- course, theories of centering (Grosz et al., 1983; Brennan et al., 1987; Grosz et al., 1989) and im- mediate focus (Sidner, 1986) rely on syntactic and grammatical features of the text such as pronominal- ization and surface sentence position. This may be sufficient for written discourse. For oral discourse, however, we must also consider the way intonation affects the interpretation of a sentence, especially the cases in which it alters the predictions of centering theories. I investigate this via a phenomenon that, by the strictest interpretation of either centering or intonation theories, should not occur -- the case of pitch accented pronominals. Centering theories would be hard pressed to pre- dict pitch accents on pronominals, on grounds of redundancy. To bestow an intonational marker of salience (the pitch accent) on a textual marker of salience (the pronominal) is unnecessarily redundant and especially when textual features correctly pre- dict the focus of attention. Intonational theories would be similarly hard pressed, but on grounds of information quality and efficient use of limited resources. Given the serial and ephemeral nature of speech and the limits of working memory, it is most expedient to mark as salient the information-rich nonpronominals, rather than their semantically impoverished pronominal stand-ins. To do otherwise is an injudicious use of an attentional cue. However, when uttered with contrastive stress on the pronouns, (I) John introduced Bill as a psycholinguist and then HE insulted HIM. (after Lakoff, 1971) is felicitously understood to mean that after a slanderous introduction, Bill re- taliated in kind against John. What makes (1) felicitous is that the pitch ac- cents on the pronominals contribute attentional in- formation that cannot be gleaned from text alone. This suggests an attentional component to pitch ac- cents, in addition to the propositional component explicated in Pierrehumbert and Hirschberg (1990). In this paper, I combine their account of pitch ac- cent semantics with Grosz, Joshi and Weinstein's (1989) account of centering to yield insights into the phenomenon of pitch accented pronominals, and the attentional consequences of pitch accents in general. The relevant claims in PH90 and GJW89 are re- viewed in the next two sections. Pitch accent semantics A pitch accent is a distinctive intonational con- tour applied to a word to convey sentential stress (Bolinger, 1958; Pierrehumbert, 1980). 
PH90 cata- logues six pitch accents, all combinations of high (H) and low (L) pitch targets, and structured as a main tone and an optional leading or trailing tone. The form of the accent -- L, H, L+H or H+L -- informs about the operation that would relate the salient item to the mutual beliefs 1 of the conversants; the main tone either commits (H*) or fails to commit 1 Mutual beliefs: propositions expressed or implied by the discourse, and which all conversants believe each other to accept as true and relevant same (Clark and Marshall, 1981). 290 (L*) to the salience of the proposition itself, or the relevance of the operation. • H* predicates a proposition as mutually be- lieved, and proclaims its addition to the set of mutual beliefs; L* fails to predicate a proposi- tion as mutually believed. As PH90 points out, failure to predicate has contradictory sources: the proposition has already been predicated as mutually believed; or, the speaker, but not the hearer, is prevented from predication (perhaps by social constraints); or the speaker actively believes the salient proposition to be false. • H+L evokes an inference path. H*+L commits to the existence of inference path that would support the proposition as mutually believed, indicates that it can be found or derived from the set of mutual beliefs; H+L* conveys uncer- tainty about the existence of such a path. • L+H evokes a scale or ordered set to which the accented constituent belongs: L+H* commits to the salience of the scale, and is typically used to convey contrastive stress; L*+H also evokes a scale but fails to commit to its salience, e.g., conveying uncertainty about the salience of the scale with regard to the accented constituent. Centering structures and operations To explain how speakers move an entity in and out of the center of [mutual] attention, GJW89 formal- izes attentional operations with two computational structures -- the forward.looking center list (Cf) and the backward-looking center (the Cb). Cf is a par- tially ordered list of centering candidates; 2 the Cb, at the head of Cf, is the current center of attention. After each utterance, one of three operations are possible: * The Cb retains both its position at the head of Cf and its status as the Cb; therefore it contin- ues as the center in the next utterance. • The Cb retains its centered status for the cur- rent utterance but its rank is lowered -- it no longer resides at the head of Cf and therefore ceases to be the center in the next utterance. • The Cb loses both its centered status and rank- ing in the current utterance as attention shifts to a new center. In addition, GJW89 constrains pronominalization such that no element in an utterance can be real- ized as a pronoun unless the Cb is also realized as a pronoun, and imposes a preference ordering for op- erations on Cf, such that the least reordering is al- ways preferred. That is, a sequence of continuations 2For simplicity's sake, we assume the items in Cf to be words and phrases; in actuality, they may be nonlexical representations of concepts, or some hybrid of lexical, conceptual and sensory data. is preferred over a sequence of retentions, which is preferred over a sequence of shifts. When intonation and centering collide My synthesis of the claims in PH90 and GJW89 pro- duces an attentional interpretation of pitch accents, modeled by operations on Cf, and derived for each accent from their corresponding propositional effect as described in PH90. 
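The Cf/Cb bookkeeping just described can be made concrete with a small sketch. This is only a rough rendering of the continue/retain/shift distinction and of the preference ordering on those operations, not the formalization in GJW89 itself.

```python
def classify_transition(prev_cf, new_cf):
    """Classify an utterance-to-utterance transition over Cf lists
    (most salient item first; the Cb is the head of the list)."""
    if not prev_cf or not new_cf:
        return "shift"                  # no established center to carry over
    prev_cb, new_cb = prev_cf[0], new_cf[0]
    if new_cb == prev_cb:
        return "continue"               # Cb keeps both status and rank
    if prev_cb in new_cf:
        return "retain"                 # old Cb still listed, but demoted
    return "shift"                      # attention moves to a new center

PREFERENCE = ("continue", "retain", "shift")    # least reordering preferred

print(classify_transition(["John", "Bill"], ["John", "party"]))   # continue
print(classify_transition(["John", "Bill"], ["Bill", "John"]))    # retain
print(classify_transition(["John", "Bill"], ["party"]))           # shift
```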
The corollaries for pitch accented pronominals are: (1) when a pitch accent is applied to a pronominal, its main effect is attentional, on the order of items in Cf; (2) the obligation to accent a pronominal for attentional r~asons depends on the variance between what the text predicts and what the speaker would like to assert about the order of items in Cf. These hypotheses arise from the following chain of assumptions: (1) To analyze the effects of pitch accents on pronominals, it is necessary to distinguish between attentional and propositional salience. Attentional salience measures the degree to which an item is salient, expressible as a partial ordering, e.g., its ranking in Cf. It is a quantitative feature. In con- trast, propositional salience, addressing an item's status in relation to mutual beliefs, is qualitative. It is calculated through inference chains that link semantic and pragmatic propositions. Both attentional (Cf) and propositional (mu- tual beliefs) structures are updated throughout. However, unlike attentional structures which are ephemeral in various time scales and empty at the end of the discourse (Grosz and Sidner, 1986), mu- tual beliefs persist throughout the conversation, pre- serving at the end the semantic and pragmatic out- come of the discourse. In addition, while propositions can be excluded from the mutual beliefs because they fail to meet some inclusion criterion, no lexical denotation is ex- cluded from Cf regardless of its propositional value. This is because the salience most relevant to the at- tentional state is the proximity of a discourse entity to the head of Cf -- the closer it is, the more it is centered and therefore, attentionally salient. (2) Pitch accents on pronominals are primarily interpreted for what they say about attentional salience. One determiner of whether attentional or propositional effects are dominant is the type of information provided by the accented constituent. Because nonpronominals contribute discourse con- tent, pitch accented nonpronominals are mainly in- terpreted with respect to the mutual beliefs, that is, for their propositional content. However, pronomi- nals, with little intrinsic semantics, perform primar- ily an attentional function. Therefore pitch accented pronominals are mainly interpreted with respect to Cf, for their attentional content. (3) The specific attentional consequences of each 291 pitch accent on pronominals can be extrapolated by analogy from the propositional interpretations in PHgO, by replacing mutual beliefs with Cf as the salient set. Thus, • H* indicates instantiation of the pronominal's cospecifier as the Cb, while L* fails to instanti- ate it as the Cb; • The partially ordered set (salient scale) invoked by L+H is Cf; • The inference path evoked by H+L is, for at- tentional purposes, a traversal of Cf. (~) And therefore, the attentionai effect of pitch ac- cents can be formally expressed as an effect on the order of items in Cf. From these assumptions, I derive the following at- tentional consequences for pitch accented pronomi- nals: • Only one pitch accent, L+H*, selects a Cb other than that predicted by centering theory and thereby reorders Cf. • L*+H appears to support an impending re- ordering but does not compel it. • By analogy, the remaining pitch accents, seem to either weaken or strengthen the current cen- ter's Cb status, but do not force a reordering. 
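Read as operations on Cf, these corollaries amount to a small mapping from accent type to attentional effect, with only L+H* actually forcing a reordering. The table and update function below paraphrase the claims above (the labels for the two H+L accents follow the analogy rather than an explicit statement) and are not the author's implementation.

```python
ACCENT_EFFECT = {
    "L+H*": "reorder",     # contrastive: select a Cb other than the one text predicts
    "L*+H": "support",     # compatible with an impending reordering, does not compel it
    "H*":   "reinforce",   # instantiates the predicted cospecifier as Cb
    "L*":   "weaken",      # fails to instantiate it as Cb
    "H*+L": "reinforce",   # commits to an inference path over Cf (by analogy)
    "H+L*": "weaken",      # uncertain about such a path (by analogy)
}

def update_cf(cf, cospecifier, accent):
    """Return the Cf list after a pitch-accented pronominal whose intended
    cospecifier is `cospecifier`; only a 'reorder' effect changes the order."""
    if ACCENT_EFFECT.get(accent) == "reorder" and cospecifier in cf:
        return [cospecifier] + [x for x in cf if x != cospecifier]
    return cf

print(update_cf(["John", "Bill"], "Bill", "L+H*"))   # ['Bill', 'John']
print(update_cf(["John", "Bill"], "John", "H*"))     # ['John', 'Bill']
```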
Availability of cospecifiers The attentional interpretations are constrained by what has been mutually established in the prior dis- course, or is situationally evident. Therefore, while contrastive stress may be mandated when grammat- ical features select the wrong cospecifier, the accent- ing is only felicitous when there is an alternate ref- erent available. For example, in (2) John introduced Bill as a psycholinguist and then he/,+//, insulted him. L+H* indicates that he no longer cospecifies with John. If the hearer is hasty, she might select Bill as the new Cb. However, this is not borne out by the unaccented him, which continues to cospec- ify with Bill. Since he and him cannot select the same referent, he requires a cospecifier that is nei- ther John nor B£11. Because, the utterance itself does not provide a any other alternatives, heL+g, is only felicitous (and coherent) if an alternate cospec- ifier has been placed in Cf by prior discourse, or by the speaker's concurrent deictic gesture towards a discourteous male. Conclusion and Future Work By combining Pierrehumbert and Hirschberg's (1990) analysis of intonational meaning with Grosz, Joshi and Weinstein's (1989) theory of centering in discourse, the attentional affect of pitch accents be- comes evident, and the paradox of pitch accented pronominals unravels. My goal here is to develop an analysis and a line of inquiry and to suggest that my derivative claims are plausible, and even extensible to an attentional analysis of pitch accents on non- pronominals. The proof, of course, will come from investigation by multiple means -- constructed ex- amples (e.g., Cahn, 1990), computer simulation, em- pirical analysis of speech data (e.g., Nakatani, 1993), and psycholinguistic experiments. References Dwight Bolinger. A Theory of Pitch Accent in En- glish. Word, 14(2-3):109-149, 1958. Susan E. Brennan, Marilyn W. Friedman, and Carl J. Pollard. A Centering Approach to Pronouns. Proceedings of the 25th Conference of the Associa- tion for Computational Linguistics, 1987. Janet Cahn. The Effect of Intonation on Pro- noun Referent Resolution. Draft, 1990. Available as: Learning and Common Sense TR 94-06, M.I.T. Media Laboratory. Herbert H. Clark and Catherine R. Marshall. Def- inite Reference and Mutual Knowledge. In Webber, Joshi and Sag, editors, Elements of Discourse Un- derstanding. Cambridge University Press, 1981. Barbara Grosz, Aravind K. Joshi, and Scott We- instein. Providing a unified account of definite noun phrases in discourse. Proceedings of the 21st Confer- ence of the Association for Computational Linguis- tics, 1983. Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. Towards a Computational Theory of Dis- course Interpretation. Draft, 1989. Barbara J. Grosz and Candace L. Sidner. At- tention, Intentions, and the Structure of Discourse. Computational Linguistics, 12(3): 175-204, 1986. George Lakoff. Presupposition and relative well- formedness. In Danny D. Steinberg and Leon A. Jakobovits, editors, Semantics: An Interdisciplinary Reader in Philosophy, Linguistics and Psychology, Cambridge University Press, 1971. Christine Nakatani. Accenting on Pronouns and Proper Names in Spontaneous Narrative. Proceed- ings of the European Speech Communication Asso- ciation Workshop on Prosody, 1993. Janet B. Fierrehumbert. The Phonology and Pho- netics of English Intonation. Ph.D. thesis, Mas- sachusetts Institute of Technology, 1980. Janet B. Pierrehumbert and Julia Hirschberg. 
The Meaning of Intonation Contours in the Interpretation of Discourse. In Philip R. Cohen, Jerry Morgan, and Martha E. Pollack, editors, Intentions in Communication, MIT Press, 1990.
Candace L. Sidner. Focusing in the Comprehension of Definite Anaphora. In Barbara J. Grosz, Karen Sparck-Jones, and Bonnie Lynn Webber, editors, Readings in Natural Language Processing, Morgan Kaufman Publishers, Inc., 1986.
| 1995 | 40 |
Sense Disambiguation Using Semantic Relations and Adjacency Information Anil S. Chakravarthy MIT Media Laboratory 20 Ames Street E15-468a Cambridge MA 02139 anil @ media.mit.edu Abstract This paper describes a heuristic-based approach to word-sense disambiguation. The heuristics that are applied to disambiguate a word depend on its part of speech, and on its relationship to neighboring salient words in the text. Parts of speech are found through a tagger, and related neighboring words are identified by a phrase extractor operating on the tagged text. To suggest possible senses, each heuristic draws on semantic rela- tions extracted from a Webster's dictionary and the semantic thesaurus WordNet. For a given word, all applicable heuristics are tried, and those senses that are rejected by all heuristics are discarded. In all, the disam- biguator uses 39 heuristics based on 12 relationships. 1 Introduction Word-sense disambiguation has long been recognized as a difficult problem in computational linguistics. As early as 1960, Bar-Hillel [1] noted that a computer program would find it challenging to recognize the two different senses of the word "pen" in "The pen is in the box," and "The box is in the pen." In recent years, there has been a resurgence of interest in word-sense disambiguation due to the availability of linguistic resources like dictionar- ies and thesauri, and due to the importance of disambig- uation in applications like information retrieval and machine translation. The task of disambiguation is to assign a word to one or more senses in a reference by taking into account the context in which the word occurs. The reference can be a standard dictionary or thesaurus, or a lexicon con- structed specially for some application. The context is provided by the text unit (paragraph, sentence, etc.) in which the word occurs. The disambiguator described in this paper is based on two reference sources, the Webster's Seventh Dictionary and the semantic thesaurus WordNet [12]. Before the disambiguator is applied, the text input is processed first by a part-of-speech tagger and then by a phrase extrac- tor which detects phrase boundaries. Therefore, for each ambiguous word, the disambiguator knows the part of speech, and other phrase headwords and modifiers that are adjacent to it. Based on this context information, the disambiguator uses a set of heuristics to assign one or more senses from the Webster's dictionary or WordNet to the word. Here is an example of a heuristic that relies on the fact that conjoined head nouns are likely to refer to objects of the same category. Consider the ambiguous word "snow" in the sentence "Slush and snow filled the roads." In this sentence, the tagger identifies "snow" as a noun. The phrase extractor indicates that "snow" and "slush" are conjoined head words of a noun phrase. Then, the heuristic uses WordNet to identify the senses of "slush" and "snow" that belong to a common cate- gory. Therefore, the sense of "snow" as "cocaine" is dis- carded by this heuristic. The disambiguator has been incorporated into two infor- mation retrieval applications which use semantic rela- tions (like A-KIND-OF) from the dictionary and WordNet to match queries to text. Since semantic rela- tions are attached to particular word senses in the dictio- nary and WordNet, disambiguated representations of the text and the queries lead to targeted use of semantic rela- tions in matching. The rest of the paper is organized as follows. 
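The conjoined-head heuristic from the "slush and snow" example can be illustrated with a toy sense inventory standing in for WordNet; the sense names and hypernym sets below are invented for the illustration and are not WordNet's actual entries.

```python
# word -> {sense name: set of hypernym categories}
SENSES = {
    "snow":  {"snow/precipitation": {"precipitation", "weather"},
              "snow/cocaine":       {"drug", "substance"}},
    "slush": {"slush/precipitation": {"precipitation", "weather"}},
}

def conjoined_noun_filter(word, conjunct):
    """Keep the senses of `word` that share a category with some sense of the
    noun it is conjoined with (conjoined heads tend to name like objects)."""
    kept = [s for s, hypers in SENSES[word].items()
            if any(hypers & other for other in SENSES[conjunct].values())]
    return kept or list(SENSES[word])          # reject nothing if none survive

print(conjoined_noun_filter("snow", "slush"))  # ['snow/precipitation']
```

With this filter, "Slush and snow filled the roads" keeps only the precipitation reading of "snow", which is the behaviour described above.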
The next section reviews existing approaches to disambiguation with emphasis on directly related methods. Section 3 describes in more detail the heuristics and adjacency relationships used by the disambiguator. 293 2 Previous Work on Disambiguation In computational linguistics, considerable effort has been devoted to word-sense disambiguation [8]. These approaches can be broadly classified based on the refer- ence from which senses are assigned, and on the method used to take the context of occurrence into account. The references have ranged from detailed custom-built lexi- cons (e.g., [l 1]) to standard resources like dictionaries and thesauri like Roget's (e.g., [2, 10, 14]). To take the context into account, researchers have used a variety of statistical weighting and spreading activation models (e.g., [9, 14, 15]). This section gives brief descriptions of some approaches that use on-line dictionaries and WordNet as references. WordNet is a large, manually-constructed semantic net- work built at Princeton University by George Miller and his colleagues [12]. The basic unit of WordNet is a set of synonyms, called a synset, e.g., [go, travel, move]. A word (or a word collocation like "operating room") can occur in any number of synsets, with each synset reflect- ing a different sense of the word. WordNet is organized around a taxonomy of hypernyms (A-KIND-OF rela- tions) and hyponyms (inverses of A-KIND-OF), and 10 other relations. The disambiguation algorithm described by Voorhees [16] partitions WordNet into hoods, which are then used as sense categories (like dictionary subject codes and Roget's thesaurus classes). A single synset is selected for nouns based on the hood overlap with the surrounding text. The research on extraction of semantic relations from dictionary definitions (e.g., [5, 7]) has resulted in new methods for disambiguation, e.g., [2, 15]. For example, Vanderwende [15] uses semantic relations extracted from LDOCE to interpret nominal compounds (noun sequences). Her algorithm disambiguates noun sequences by using the dictionary to search for pre- defined relations between the two nouns; e.g., in the sequence "bird sanctuary," the correct sense of"sanctu- ary" is chosen because the dictionary definition indi- cates that a sanctuary is an area for birds or animals. Our algorithm, which is described in the next section, is in the same spirit as Vanderwende's but with two main differences. In addition to noun sequences, the algo- rithm has heuristics for handling 11 other adjacency relationships. Second, the algorithm brings to bear both WordNet and semantic relations extracted from an on- line Webster's dictionary during disambiguation. 3 Sense Disambiguation with Adjacency Information The input to the disambiguator is a pair of words, along with the adjacency relationship that links them in the input text. The adjacency relationship is obtained auto- matically by processing the text through the Xerox PARC part-of-speech tagger [6] and a phrase extractor. The 12 adjacency relationships used by the disambigua- tor are listed below. These adjacency relationships were derived from an analysis of captions of news photo- graphs provided by the Associated Press. The examples from the captions also helped us identify the heuristic rules necessary for automatic disambiguation using WordNet and the Webster's dictionary. In the table below, each adjacency category is accompanied by an example. 39 heuristic rules are used currently. 
Adjacency Relationship Example Adjective modifying a noun Express train Possessive modifying a noun Pharmacist's coat Noun followed by a proper Tenor Luciano name Pavarotti Present participle gerund Training drill modifying a noun Noun noun Conjoined nouns Noun modified by a noun at the head of a following "of' PP Noun modified by a noun at the head of a following "non- of" PP Noun that is the subject of an action verb Noun that is the object of an action verb Basketball fan A church and a home Barrel of the rifle A mortar with a shell A monitor displays information Write a mystery Noun that is at the head of a Sentenced to life prepositional phrase follow- ing a verb Nouns that are subject and The hawk found a object of the same action perch Given a pair of words and the adjacency relationship, the disambiguator applies all heuristics corresponding to that category, and those word senses that are rejected by all heuristics are discarded. Due to space considerations, we will not describe the heuristic rules individually but 294 instead identify some common salient features. The heu- ristics are described in detail in [3]. • Several heuristics look for a particular semantic rela- tion like hypernymy or purpose linking the two input words, e.g., "return" is a hypernym of "forehand." • Many heuristics look for particular semantic rela- tions linking the two input words to a common word or synset; e.g., a "church" and a "home" are both buildings. • Many heuristics look for analogous adjacency pat- terns either in dictionary definitions or in example sentences, e.g., "write a mystery" is disambiguated by analogy to the example sentence "writes poems and essays." • Some heuristics look for specific hypernyms such as person or place in the input words; e.g., if a noun is followed by a proper name (as in "tenor Luciano Pavarotti" or "pitcher Curt Schilling"), those senses of the noun that have "person" as a hypernym are chosen. The disambiguator has been used in two retrieval pro- grams, ImEngine, a program for semantic retrieval of image captions, and NetSerf, a program for finding Internet information archives [3, 4]. The initial results have not been promising, with both programs reporting deterioration in performance when the disambiguator is included. This agrees with the current wisdom in the IR community that unless disambiguation is highly accu- rate, it might not improve the retrieval system's perfor- mance [ 13]. References 1. Bar-Hillel, Yehoshua. 1960. "The Present Status of Automatic Translation of Languages," in Advances in Computers, F. L. Alt, editor, Academic Press, New York. 2. Braden-Harder, Lisa. 1992. "Sense Disambiguation Using On-line Dictionaries," in Natural Language Processing: The PLNLP Approach, Jensen, K., Heidorn, G. E., and Richardson, S. D., editors, Klu- wer Academic Publishers. 3. Chakravarthy, Anil S. 1995. "Information Access and Retrieval with Semantic Background Knowl- edge" Ph.D thesis, MIT Media Laboratory. 4. Chakravarthy, Anil S. and Haase, Kenneth B. 1995. "NetSerf: Using Semantic Knowledge to Find Inter- net Information Archives," to appear in Proceedings of SIGIR'95. 5. Chodorow, Martin. S., Byrd, Roy. J., and Heidorn, George. E. 1985. "Extracting Semantic Hierarchies from a Large On-Line Dictionary," in Proceedings of the 23rd ACL. 6. Cutting, Doug, Julian Kupiec, Jan Pedersen, and Penelope Sibun. 1992. "A Practical Part-of-Speech Tagger," in Proceedings of the Third Conference on Applied NLP. 7. 
Dolan, William B., Lucy Vanderwende, and Richardson, Steven D. 1993. "Automatically Deriving Structured Knowledge Bases from On-line Dictionaries," in Proceedings of the First Conference of the Pacific Association for Computational Linguistics, Vancouver.
8. Gale, William, Church, Kenneth W., and David Yarowsky. 1992. "Estimating Upper and Lower Bounds on the Performance of Word-sense Disambiguation Programs," in Proceedings of ACL-92.
9. Hearst, Marti. 1991. "Noun Homograph Disambiguation Using Local Context in Large Text Corpora," in Proceedings of the 7th Annual Conference of the UW Centre for the New OED and Text Research, Oxford, England.
10. Lesk, Michael. 1986. "Automatic Sense Disambiguation: How to Tell a Pine Cone from an Ice Cream Cone," in Proceedings of the SIGDOC Conference.
11. McRoy, Susan. 1992. "Using Multiple Knowledge Sources for Word Sense Discrimination," in Computational Linguistics, 18(1).
12. Miller, George A. 1990. "WordNet: An On-line Lexical Database," in International Journal of Lexicography, 3(4).
13. Sanderson, Mark. 1994. "Word Sense Disambiguation and Information Retrieval," in Proceedings of SIGIR'94.
14. Yarowsky, David. 1992. "Word Sense Disambiguation Using Statistical Models of Roget's Categories Trained on Large Corpora," in Proceedings of COLING-92, Nantes, France.
15. Vanderwende, Lucy. 1994. "Algorithm for Automatic Interpretation of Noun Sequences," in Proceedings of COLING-94, Kyoto, Japan.
16. Voorhees, Ellen M. 1993. "Using WordNet to Disambiguate Word Senses for Text Retrieval," in Proceedings of SIGIR'93.
| 1995 | 41 |
CONSTRAINT-BASED EVENT RECOGNITION FOR INFORMATION EXTRACTION Jeremy Crowe* Department of Artificial Intelligence Edinburgh University Edinburgh, EH1 1HN UK [email protected] Abstract Event recognition We present a program for segmenting texts ac- cording to the separate events they describe. A modular architecture is described that al- lows us to examine the contributions made by particular aspects of natural language to event structuring. This is applied in the context of terrorist news articles, and a technique is sug- gested for evaluating the resulting segmenta- tions. We also examine the usefulness of vari- ous heuristics in forming these segmentations. Introduction One of the issues to emerge from recent evaluations of information extraction systems (Sundheim, 1992) is the importance of discourse processing (Iwafiska et al., 1991) and, in particular, the ability to recognise multiple events in a text. It is this task that we address here. We are developing a program that assigns message- level event structures to newswire texts. Although the need to recognise events has been widely acknowledged, most approaches to information extraction (IE) perform this task either as a part of template merging late in the IE process (Grishman and Sterling, 1993) or, in a few cases, as an integral part of some deeper reasoning mechanism (e.g. (Hobbs et al., 1991)). Our approach is based on the assumption that dis- course processing should be done early in the informa- tion extraction process. This is by no means a new idea. The arguments in favour of an early discourse segmen- tation are well known - easier coreference of entities, a reduced volume of text to be subjected to necessarily deeper analysis, and so on. Because of this early position in the IE process, an event recognition program is faced with a necessarily shallow textual representation. The purpose of our work is, therefore, to investigate the quality of text segmenta- tion that is possible given such a surface form. *I would like to thank Chris Mellish and the anony- mous referees for their helpful comments. Supported by a grant from the ESRC. What is an event? If we are to distinguish between events, it is important that we know what they look like. This is harder than it might at first seem. A closely related (though not identical) problem is found in recognising boundaries in discourse, and there seems to be little agreement in the literature as to the properties and functions they pos- sess (Morris and Hirst, 1991), (Grosz and Sidner, 1986). Our system is aimed at documents typified by those in the MUC-4 corpus (Sundheim, 1992). These deal with Latin American terrorist incidents, and vary widely in terms of origin, medium and purpose. In the task description for the MUC-4 evaluation, two events are deemed to be distinct if they describe either multiple types of incident or multiple instances of a particular type of incident, where instances are distinguished by having different locations, dates, categories or perpetra- tors. (NRaD, 1992) Although this definition suffers from a certain amount of circularity, it nonetheless points to an interesting fea- ture of events at least in so far as physical incidents are concerned. It is generally the case that such incidents do possess only one location, date, category or description. Perhaps we can make use of this information in assigning an event-segmentation to a text? 
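The MUC-4 style distinctness criteria quoted above suggest a simple compatibility test between partial incident descriptions: two descriptions can belong to the same event only if none of their filled slots clash. The sketch below is a generic rendering of that idea, not the code of any particular MUC system.

```python
SLOTS = ("incident_type", "location", "date", "perpetrator")

def compatible(a, b):
    """Two partial incident descriptions may describe the same event
    unless some slot is filled differently in each."""
    return all(not (a.get(k) and b.get(k) and a[k] != b[k]) for k in SLOTS)

def assign(events, description):
    """Attach a description to the first compatible event, else open a new one."""
    for event in events:
        if all(compatible(d, description) for d in event):
            event.append(description)
            return events
    events.append([description])
    return events

events = []
assign(events, {"incident_type": "bombing", "location": "Lima"})
assign(events, {"incident_type": "bombing", "location": "Bogota"})
print(len(events))    # 2: different locations force distinct events
```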
Current approaches As an IE system processes a document, it typically cre- ates a template for each sentence (Hobbs, 1993), a frame- like data structure that contains a maximally explicit and regularised representation of the information the system is designed to extract. Templates are merged with earlier ones unless they contain incompatible slot- fills. Although more exotic forms of event recognition exist at varying levels of analysis (such as within the abductive reasoning mechanism of SRI's TACITUS system (Hobbs et al., 1991), in a thesaurus-based lexical cohesion algo- rithm (Morris and Hirst, 1991) and in a semantic net- work (Kozima, 1993)), template merging is the most used method. 296 Modular constraint-based event recognition The system described here consists of (currently) three analysis modules and an event manager (see figure 1). Two of the analysis modules perform a certain amount of island-driven parsing (one extracts time-related infor- mation, and the other location-related information), and the third is simply a pattern marcher. They are designed to run in parallel on the same text. [ PREPRO ' , I F) I ANAL YSIS ))i)i)E)[ ANAL YSIS I)E)i)E)( ANAL YSIS I~::i) :.:.:.:.:.:.:.:.:.:.:.:.:.:.:,:,:.:.:.:.:.:.:,:,:.:.:.:.:.:.:.:.:. :.:.:.:.:..:...:.:.: .:.-.:.:.:.-.- ,,.,... ti!i!iiiiiiii!!iiiiiiiiiiiiiiiiiiiiiiiiiUiiMiiiiiiiill VE.T iiiii l)))iiii)))i)iiiii)ii)))i)iiiiiii] MAN~ER ~ ~ i ! i i EEEEEEE~E~i~E]EEEiIE]E!EEEEEEEEEEEE!EEi6iiiiiiiii Ei6i6iiiii~ CLAUSE [EEE I ) IE SYSTEM m • . . . . . . I Figure 1: System architecture Event manager The role of the event manager is to propose an event segmentation of the text. To do this, it makes use of the constraints it receives from the analysis modules com- bined with a number of document-structuring heuristics. Many clauses ("qulet clauses") are free from constraint relationships, and it is in these cases that the heuristics are used to determine how clauses should be clustered. A text segmentation can be represented as a grid with clauses down one side, and events along the other. Fig- ure 2 contains a representation of a sample news text, and shows how this maps onto a clause/event grid. The phrases overtly referring to time and location have been underlined. A klvdP $ Iool nil~N h dovmlall Tldo ~ oh0, ~ lnwldma 3 de,as aMno wgm~ e iK~Wem -'aJloa~ Rs dnn,nl~d. '1"~ v,jhldo~D ~ -t-- doanw0od. V,.cwWlaf',m ~ w~ ocedw, we4 ~ cigl~telo, InaCAma, ~npJ o,o~a~id0~ll Im¢ I~ Itoe,daNhmt, Events im ..'tS'F'"F~ . . . . Jt I*~*.. , 0 , , t TTI -._ t i ! ! imL. FT TI ...'/~y--y..~ . . . . ¥oWm~ll~fL. ~PiY .*. '".." :.....:....~.... j [~ba- Blnaty eb'Mg: 0011100011100111000011011110 Figure 2: Example text segmentation Analysis modules The fragments of natural language that represent time and location are by no means trivial to recognise, let alone interpret. Consequently, and in keeping with the fast and shallow approach we have adopted, the range of spatio-temporal concepts the program handles has been restricted. • For example, the semantic components of both mod- ules know about points in time/space only, and not about durations. There are practical and theoretical reasons for this policy decision - the aim of the system is only to distinguish between events, and though the ability to represent durations is in a very few situations useful for this task, the engineering overheads in incor- porating a more complex reasoning mechanism make it difficult to do so within such a shallow paradigm. 
The first two analysis modules independently assign explicit, regularised PATR-like representations to the time- and location-phrases they find. Graph unification is then used to build a set of constraints determining which clauses 1 in a text can refer to the same event. Each module then passes its constraints to the event manager. The third module identifies sentences containing a subset of cue phrases. The presence of a cue phrase in a sentence is used to signal the start of a (totally) new event. IA clause in this case is delimited in much the same way as in Hobbs et al's terminal substring parser (Hobbs et al., 1991), i.e. by commas, relative pronouns, some conjunctions and some forms of that. Structuring strategies Although the legal event assignments for a particular clause may be restricted by constraints, there may still be multiple events to which that clause can he assigned. Three structuring strategies are being investigated. The first dictates that clauses should be assigned to the lowest non-conflicting event value; the second favours non-confllcting event values of the most recently assigned clauses. The third strategy involves a mix of the above, favouring the event value of the previous clause, followed by the lowest non-conflicting event values. Heuristics Various heuristics are used to gel together quiet clauses in the document. The first heuristic operates at the paragraph level. If a sentence-iuitial clause ap- pears in a sentence that is not paragraph-initial, then it is assigned to the same event as the first clause in the previous sentence. We are therefore making some assumptions about the way reporters structure their ar- ticles, and part of our work will be to see whether such assumptions are valid ones. The second heuristic operates in much the same way as the first, but at the level of sentences. It is based on the reasoning that quiet clauses should be assigned to the same event as previous clauses within the sentence. As such, it only operates on clauses that are not sentence- initial. Finally, a third heuristic is used which identifies sim- ilarities between sentences based on n-gram frequen- cies (Salton and Buckley, 1992). Areas to investigate are the optimum value for n, the effect of normalization 297 on term vector calculation, and the potential advantages of using a threshold. This heuristic also interacts with the text structuring strategies described above; when it is activated, it can be used to override the default strategy. Experiments and evaluation Whilst the issue of evaluation of information extraction in general has been well addressed, the evaluation of event recognition in particular has not. We have devised a method of evaluating segmentation grids that seems to closely match our intuitions about the "goodness" of a grid when compared to a model. The system is being tested on a corpus of 400 messages (average length 350 words). Each message is processed by the system in each of 192 different configurations (i.e. wlth/without paragraph heuristic, varying the cluster- ing strategy etc.), and the resulting grids are converted into binary strings. Essentially, each clause is compared asymmetrically with each other, with a "1" denoting a difference in events, and a "0" denoting same events. Figure 2 Shows an example of a binary string corre- sponding to the grid in the same figure. 
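The three structuring strategies for quiet clauses described above reduce to a small choice function over the event indices left open by the constraints. The sketch below is a simplified rendering (the "recent" strategy is reduced to looking only at the immediately preceding clause), not the system's code.

```python
def choose_event(open_events, previous_event, strategy="lowest"):
    """Pick an event index for a quiet clause.  `open_events` are the indices
    not ruled out by the analysis-module constraints (a fresh, higher index is
    assumed to be included); `previous_event` is the previous clause's value."""
    candidates = sorted(open_events)
    if strategy == "lowest":
        return candidates[0]
    if strategy == "recent":
        return previous_event if previous_event in open_events else candidates[-1]
    if strategy == "mixed":
        return previous_event if previous_event in open_events else candidates[0]
    raise ValueError(strategy)

print(choose_event({0, 1, 2}, previous_event=1, strategy="lowest"))  # 0
print(choose_event({0, 1, 2}, previous_event=1, strategy="mixed"))   # 1
```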
Figure 3 shows a particular 4-clause grid scored against all other possible 4-clause grids, where the grid at the top is the intended correct one, and the scores reflect degrees of similarity between relevant binary strings. 100% I i®1 - Figure 3: Comparison of scores for a 4-clause grid In order to evaluate these computer generated grids, a set of manually derived grids is needed. For the final evaluation, these will be supplied by naive subjects so as to minimise the possibility of any knowledge of the pro- gram's techniques influencing the manual segmentation. Conclusions and future work We have manually segmented 100 texts and have com- pared them against computer-generated grids. Scoring has yielded some interesting results, as well as suggesting further areas to investigate. The results show that fragments of time-oriented lan- guage play an important role in signalling shifts in event structure. Less important is location information - in fact, the use of such information actually results in a slight overall degradation of system performance. Whether this is because of problems in some aspect of the location analysis module, or simply a result of the way we use location descriptions, is an area currently under investigation. The paragraph and clause heuristics also seem to be useful, with the omission of the clause heuristic causing a considerable degradation in performance. The contribu- tions of n-gram frequencies and the cue phrase analysis module are yet to be fully evaluated, although early re- sults axe encouraging. It therefore seems that, despite both the shallow level of analysis required to have been performed (the program doesn't know what the events actually are) and our sim- plification of the nature of events (we don't know what they really are either), a modular constraint-based event recognition system is a useful tool for exploring the use of particular aspects of language in structuring multiple events, and for studying the applicability of these aspects for automatic event recognition. References Ralph Grishm~n and John Sterling. 1993. Description of the Proteus system as used for MUC-5. In Proc. MUC-5. ARPA, Morgan Kaufmann. Barbara Grosz and Candy Sidner. 1986. Attention, intensions and the structure of discourse. Computa- tional Linguistics, 12(3). Jerry R Hobbs, Douglas E Appelt, John S Bear, Mabry Tyson, and David Magerman. 1991. The TACITUS system. Technical Report 511, SRI. Jerry R Hobbs. 1993. The generic information extrac- tion system. In Proc. MUC-5. ARPA, Morgan Kauf- mann. Lucia lwadska, Douglas Appelt, Damarls Ayuso, Kathy Dahlgren, Bonnie Glover Stalls, Ralph Grishman, George Krupka, Christine Montgomery, and Ellen Pdloff. 1991. Computational aspects of discourse in the context of MUC-3. In Proc. MUC-3, pages 256- 282. DARPA, Morgan Kanfmann. Hideki Kozima. 1993. Text segmentation based on sim- ilarity between words. In Proc. A CL, student session. Jane Morris and Graeme Hirst. 1991. Lexical cohe- sion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17(1):21-42. NRaD. 1992. MUC-4 task documentation. NRaD (pre- viously Naval Ocean Systems Center) On-line docu- ment. Gerald Salton and Chris Buckley. 1992. Automatic text structuring experiments. In Paul S Jacobs, editor, Tezt-Based Intelligent Systems, chapter 10, pages 199- 210. Lawrence Erlbaum Associates. Beth M Sundheim. 1992. Overview of the fourth message understanding conference. In Proc. MUC-4, pages 3-21. DARPA, Morgan Kaufmann. 
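One plausible reading of the scoring just described is to turn each clause/event assignment into a string of pairwise same/different bits and to score a system grid by its agreement with the manually derived grid's string. The sketch below follows that reading; it is not the authors' evaluation code.

```python
from itertools import combinations

def grid_to_bits(assignment):
    """'0' if two clauses share an event, '1' otherwise, over all clause pairs."""
    return "".join("0" if assignment[i] == assignment[j] else "1"
                   for i, j in combinations(range(len(assignment)), 2))

def score(model, system):
    """Fraction of positions on which the two grids' binary strings agree."""
    a, b = grid_to_bits(model), grid_to_bits(system)
    return sum(x == y for x, y in zip(a, b)) / len(a)

model  = [0, 0, 1, 1]          # a 4-clause grid: clauses 1-2 vs clauses 3-4
system = [0, 0, 0, 1]
print(grid_to_bits(model))     # '011110'
print(score(model, system))    # 0.5
```

A pairwise encoding over eight clauses would yield a 28-bit string, matching the length of the binary string shown in Figure 2.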
298 | 1995 | 42 |
From route descriptions to sketches: a model for a text-to-image translator Lidia Fraczak LIMSI-CNRS, b£t. 508, BP 133 91403 Orsay cedex, France [email protected] Abstract This paper deals with the automatic trans- lation of route descriptions into graphic sketches. We discuss some general prob- lems implied by such inter-mode transcrip- tion. We propose a model for an automatic text-to-image translator with a two-stage intermediate representation in which the linguistic representation of a route descrip- tion precedes the creation of its conceptual representation. 1 Introduction Computer text-image transcription has lately be- come a subject of interest, prompting research on relations between these two modes of representa- tion and on possibilities of transition from one to the other. Different types of text and of images have been considered, for example: narrative text and motion pictures (Kahn, 1979; Abraham and De- scl~s, 1992), spatial descriptions and 3-dimensional sketches (Yamada et al., 1992; Arnold and Lebrun, 1992), 2-dimensional spatial scenes and linguistic de- scriptions (Andr~ et al., 1987), 2-dimensional image sequences and linguistic reports (Andr~ et al., 1988). Linguistic and pictorial modes may be considered as complementary since they are capable of convey- ing different kinds of content (Arnold, 1990). This complementarity of expression is explored in order to be used in multi-modal systems for human-computer interaction such as computer assisted architectural conception (Arnold and Lebrun, 1992). Such sys- tems should not only use different modes to ensure better communication, but should also be able to pass from one to the other. Given the differences in capacities of these two means of expression, one may expect some problems in trying to encode into a picture the information contained in a linguistic description. The present research is concerned with route descriptions (RDs) and their translation into 2- dimensional graphic sketches. We deal with a type of discourse whose informational content may seem quite easy to represent in a graphic mode. In every- day communication situations, verbal RDs are often accompanied by sketches, thus participating in a 2- mode representation. A sketch can also function as a route representation by itself. We will first outline some problems that may ap- pear while translating descriptions into graphics. Then we will describe our general model for an auto- matic translator and some aspects of the underlying knowledge representation. 2 Some translation problems Our first approach to translate RDs into graphic maps consisted in manually transcribing linguistic descriptions into sketches. By doing this, we encoun- tered several problems, some of which we will try to illustrate through the following example, taken from the French corpus of (Gryl, 1992). Example 2.1 A la sortie des tourniquets du RER tu prends sur ta gauche. II y a une magni]ique de- scente~ prendre. Puis tu tournes ~ droite, tu tombes sur une sdrie de panneaux d'informations. Tu con- tinues tout droit en longeant les terrains de tennis et tu tombes sur le bdtiment A. 1 In the description here above we can observe some ambiguities, or incompleteness of information, which may be a problem for a graphic depiction. The most striking case is the information about the ten- nis courts: we do not know on which side of the path, right or left, they are located. 1 At the turnstiles of the RER station you turn left. There is a steep (a magnificent) downgrade to take. 
Then you turn right, you come across a series of sign posts. You continue straight on, passing alongside the tennis courts, and you come to building A. 299 There is also another kind of ambiguity due to the fact that in a RD the whole path does not have to be "linguistically covered". Consider the fragment about turning to the left ("tu prends sur ta gauche") and the downgrade ("descente"). It is difficult to judge whether the downgrade is lo- cated right after the turn, or "a little further". The same question holds for the right turn ("puis tu tournes ~ droite") and the sign posts ("panneaux d'informations"): should the posts be represented as immediately following the turning point (as ex- pressed in the text) or should there be a path be- tween them? This kind of ambiguity is not really perceived unless we want to derive a graphic repre- sentation of the route. The information is complete enough for a real life situation of finding one's way. Another kind of problem concerns the "magnifique descente". It would not be easy to represent a slope in a simple sketch and, even less so, its characteristic of being steep, which the French word "magnifique" suggests in this context. The incompleteness of in- formation will occur on the graphic side this time, not all properties of the described element being pos- sible to express in this mode. Such transcription constraints, once defined and analyzed, should be taken into account in order to obtain a "faithful" graphic representation. It seems that, in some cases, verbal-side incompleteness prob- lems might be solved thanks to some relevant linguis- tic markers, as well as to the knowledge included in the conceptual model of the route. We think here in particular of the questions whether there is a significant stretch of path between two elements of environment (landmarks), or a turn and a land- mark, mentioned in the text immediately one after • the other. Concerning the ambiguity related to the location of landmarks, one can either choose an ar- bitrary value or try to find a way of preserving the ambiguity in the graphic mode itself. We have mentioned here only some of the prob- lems concerning the translation of RDs into graphic sketches. We have not considered those parts of linguistic description contents which are not repre- sentable by images, such as comments or evaluations (e.g. "you can't miss it"; "it's very simple"). 3 Steps of the translation process Translating linguistic utterances into a pictorial code cannot be done without an intermediate representa- tion, that is, a conceptual structure that bridges the gap between these two expression modes (Arnold, 1990). Abraham and Descl~s (1992) talk about the necessity of creating a common semantics for the two modes. In our case, the purpose of the intermediate repre- sentation is to extract from the linguistic description the information concerning the route with the aim of representing it in the form of a sketch. However, in- stead of trying to create a unique "super-structure", we envisage a dual representation, with the linguistic and the conceptual levels. The core of the process of translating RDs into graphic maps will thus consist in the transition from the linguistic representation to the conceptual one. For the sake of the linguistic representation, we thought it necessary to carry out an analysis of real examples and elaborate a linguistic model of this particular type of discourse. We have worked on a corpus of 60 route descriptions in French. 
The anal- ysis has been performed at two levels: the global level and the local level. Global analysis consisted in dividing descriptions into global units, defined as sequences and connections, and in categorizing these units on a functional and thematic basis. We have thus specified several categories of route de- scription sequences, the main ones being action pre- scriptions (e.g. "tu continues tout droit") and land- mark indications (e.g. "tu tombes sur le b£timent A."). 2 The inter-sequence connections (e.g. "puis", "quand", "ou": "then", "when", "or"), which mark the relationships between sequences or groups of se- quences, have been categorized according to their functions (e.g. succession, anchorage, alternative). Local analysis consisted in the determination of se- mantic sub-units of descriptions and in the definition of the content of different sequences with respect to these sub-units. These latter will enable, during the processing of a RD, to extract and represent infor- mation concerning actions and landmarks, and their attributes. Thus, one of the objectives of local anal- ysis has been to determine which types of verbs in the RD express travel actions and which ones serve to introduce landmarks. The sub-units have been further analyzed and divided into types (e.g. differ- ent types of actions). For the purpose of the conceptual representation of RDs, we need a prototypical model of their refer- ent which is the route. We have decomposed it into a path and landmarks. A path is made up of trans- fers and relays. Relays are abstract points initiating transfers and may be "covered" by a turn. Land- marks can be either associated with relays or with transfers. More formally, a route is structured into a list of segments, each segment consisting of a re- lay and of a transfer. Landmarks are represented as possible attributes (among others) of these two ele- 2 Cf. Example 2.1 300 ments. Having such a prototype for routes, with all elements defined in terms of attribute-value pairs, it is relatively easy to re-construct the route de- scribed by the linguistic input: the reconstruction consists in recognizing the relevant elements and in assigning values to their attributes. Using the route model, some elements missing in the text can be inferred. For example, since every route segment contains one relay (which may be a turn) and one transfer, the information concerning the fragment of the route expressed by: "tournez k gauche et puis droite" ("turn to the left and then to the right"), must be completed by adding a transfer between the two turns. Apart from models for linguistic and conceptual representations, the rules of transition have to be defined. For this purpose, it is necessary to establish relationships between different linguistic and con- ceptual entities. For example, the action of the type "progression" (e.g. "continuer", "aller") corresponds to a transfer and the actions of the type "change of direction" (e.g. "tourner") or "taking a way" (e.g. "prendre la rue") to a relay (which will coincide with a turn or with the beginning of a way-landmark, e.g. a street, respectively). Another aspect of modeling consists in specifying graphic objects corresponding to the entities in the route model. For the time being, we decided to do with simple symbolic elements, without a fine dis- tinction between landmarks. The graphic symbols have been created on the basis of the information accessible from the context rather than the one con- tained in the "names" of landmarks. 
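The route prototype, a list of segments each pairing a relay (possibly realised by a turn) with a transfer and carrying optional landmarks, lends itself to a direct data-structure rendering. The sketch below is a toy version used only to illustrate how a missing transfer is inferred between two consecutively mentioned turns; it is not the system's actual model.

```python
def build_route(actions):
    """`actions` is a list such as [('turn', 'left'), ('go', 'straight ahead')].
    Each segment holds one relay and one transfer; if two turns follow each
    other in the text, the unexpressed transfer in between is inferred."""
    route = []
    for kind, value in actions:
        if kind == "turn":
            if route and route[-1]["transfer"] is None:
                route[-1]["transfer"] = {"expressed": False}      # inferred transfer
            route.append({"relay": {"turn": value}, "transfer": None})
        else:                              # progression verbs realise a transfer
            if route and route[-1]["transfer"] is None:
                route[-1]["transfer"] = {"expressed": True, "manner": value}
            else:
                route.append({"relay": {}, "transfer": {"expressed": True, "manner": value}})
    return route

# "tournez à gauche et puis à droite": a transfer is inferred between the turns.
for segment in build_route([("turn", "left"), ("turn", "right")]):
    print(segment)
```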
These latter are included in sketches in the form of verbal labels. Once the whole route has been reconstructed at the conceptuM level, we start to generate the corre- sponding graphic map, like the one here below. 0 b&timen~ A OOO panneaux d'informations dQscenl;@ 4 to~"niquets du RER 4 Conclusion Computer translation of route descriptions into sketches raises some interesting issues. Firstly, one has to investigate the relationships between the lin- guistic and the graphic modes, the constraints and possibilities which appear while generating images from linguistic descriptions. Secondly, a thorough linguistic analysis of route descriptions is necessary. We have used a discourse based approach and analyze "local" linguistic ele- ments by filtering them through the discourse struc- ture, described at the "global" level. Our goal is to build a linguistic model for the text type "route description". Another interesting problem is the form and the derivation of the conceptual representation of the de- scribed route. We believe that it cannot be directly obtained from the linguistic material itself. During the understanding process, the linguistic meaning has to be represented before the conceptual repre- sentation can be created. That is why we need a two-stage internal representation, based on specific linguistic and conceptual models. References M. Abraham and J-P. Desclds. 1992. Interaction be- tween lexicon and image: Linguistic specifications of animation. In Proc. o] COLING-92, pages 1043-1047, Nantes. E. Andrd, G. Bosch, G. Herzog, and T. Rist. 1987. Cop- ing with the intrinsic and the deictic uses of spatial prepositions. In K. Jorrand and L. Sgurev, editors, Artificial Intelligence II: Methodology, Systems, Appli- cations, pages 375-382. North-Holland, Amsterdam. E. Andrd, G. Herzog, and T. Rist. 1988. On the simul- taneous interpretation of real world image sequences and their natural language description: The system SOCCER. In Proc. o] the 8th ECAI, pages 449-454, Munich. M. Arnold and C. Lebrun. 1992. Utilisation d'une langue pour la creation de sc~nes architecturales en image de synthbse. Exp6rience et r6flexions. Intellec- tica, 3(15):151-186. M. Arnold. 1990. Transcription automatique verbal- image et vice versa. Contribution ~ une revue de la question. In Proc. of EuropIA-90, pages 30-37, Paris. A. Gryl. 1992. Op6rations cognitives mises en oeuvre dans la description d'itin6ralres. Mdmoire de DEA, Universitd Paris 11, France. K.M. Kahn. 1979. Creation of computer animation from story descriptions. A.I. Technical report 540, M.I.T. Artificial Intelligence Laboratory, Cambridge, MA. A. Yamada, T. Yamamoto, H. Ikeda, T. Nishida, and S. Doshita. 1992. Reconstructing spatial image from natural language texts. In Proc. of COLING-9P, pages 1279-1283, Nantes. 301 | 1995 | 43 |
A Computational Framework for Composition in Multiple Linguistic Domains Elvan GS~men Computer Engineering Department Middle East Technical University 06531, Ankara, Turkey [email protected] Abstract We describe a computational framework for a grammar architecture in which dif- ferent linguistic domains such as morphol- ogy, syntax, and semantics are treated not as separate components but compositional domains. The framework is based on Combinatory Categorial Grammars and it uses the morpheme as the basic building block of the categorial lexicon. 1 Introduction In this paper, we address the problem of mod- elling interactions between different levels of lan- guage analysis. In agglutinative languages, affixes are attached to stems to form a word that may cor- respond to an entire phrase in a language like En- glish. For instance, in Turkish, word formation is based on suffixation of derivational and inflectional morphemes. Phrases may be formed in a similar way (1). (1) Yoksul-la~-t~r-zl-makta-lar poor-V-CAUS-PASS-ADV-PERS '(They) are being made poor (impoverished)'. In Turkish, there is a significant amount of in- teraction between morphology and syntax. For in- stance, causative suffixes change the valence of the verb, mad the reciprocal suffix subcategorize the verb for a noun phrase marked with the comitative case. Moreover, the head that a bound morpheme modi- fies may be not its stem but a compound head cross- ing over the word boundaries, e.g., (2) iyi oku-mu~ ~ocuk well read-REL child 'well-educated child' In (2), the relative suffix -mu~ (in past form of subject participle) modifies [iyi oku] to give the scope [[[iyi oku]mu~] 9ocuk]. If syntactic composi- tion is performed after morphological composition, we would get compositions such as [iyi [okumu~ 6ocuk]] or [[iyi okurnu~] ~ocuk] which yield ill-formed semantics for this utterance. As pointed out by Oehrle (1988), there is no rea- son to assume a layered grammatical architecture which has linguistic division of labor into compo- nents acting on one domain at a time. As a computa- tional framework, rather than treating morphology, syntax and semantics in a cascaded manner, we pro- pose an integrated model to capture the high level of interaction between the three domains. The model, which is based on Combinatory Categorial Gram- mars (CCG) (Ades and Steedman, 1982; Steedman, 1985), uses the morpheme as the building block of composition at all three linguistic domains. 2 Morpheme-based Compositions When the morpheme is given the same status as the lexeme in terms of its lexical, syntactic, and semantic contribution, the distinction between the process models of morphotactics and syntax disap- pears. Consider the example in (3). (3) uzun kol-lu g5mlek long sleeve-ADJ shirt Two different compositions 1 in CCG formalism are given in Figure 1. Both interpretations are plau- sible, with (la) being the most likely in the absence of a long pause after the first adjective. To account for both cases, the suffix -lu must be allowed to mod- ify the head it is attached to (e.g., lb in Figure 1), or a compound head encompassing the word bound- aries (e.g., 1:~ in Figure 1). 3 Multi-domain Combination Operator Oehrle (1988) describes a model of multi-dimen- sional composition in which every domain Di has an algebra with a finite set of primitive operations 1Derived and basic categories in the examples are in fact feature structures; see section 4. We use ~ '~ to denote the combination of categories x and y giving the result z. 
302 lexical entry syntactic category semantic category ~z~n n/~ Ap.Zong(p( z ) ) kol n Ax.sleeve(x) -l~ (~1~) \ n ~q.x~.~(y, ha~(q)) g5mlek n Aw.shirt(w) uzun kol .In gJmlek (la) • n/n shirt(y, has(long(sleeve(z)))) = 'a shirt with long sl ..... ' (lb) ~z~n kol -lu g6mlek n/n long(shirt(y, has(sleeve(z)))) = 'a long shirt with sleeves' Figure 1: Scope ambiguity of a nominal bound mor- pheme Fi. As indicated by Turkish data in sections 1 and 2, Fi may in fact have a domain larger than--but com- patible with--Di. In order to perform morphological and syntactic compositions in a unified framework, the slash oper- ators of Categorial Grammar must be enriched with the knowledge about the type of process and the type of morpheme. We adopt a representation sim- ilar to Hoeksema and Janda's (1988) notation for the operator. The 3-tuple <direction, morpheme type, process type> indicates direction 2 (left, right, unspecified), morpheme type (free, bound), and the type of morphological or syntactic attachment (e.g., affix, clitic, syntactic concatenation, reduplica- tion). Examples of different operator combinations are given in Figure 2. 4 Information Structure and Tactical Constraints Entries in the eategorial lexicon have tactical con- straints, grammatical and semantic features, and phonological representation. Similar to HPSG (Pol- lard and Sag, 1994), every entry is a signed attribute-value matrix. Lexical and phrasal ele- 2We have not yet incorporated into our model the word-order variation in syntax. See (Hoffman, 1992) for a CCG based approach to this phenomenon. Operator Morp. < \, bound, clitic> de < \, bound, affix> -de </, bound, redup> ap- </, free, concat> nzun < \, free, concat> ba~ka <[, free, concat> gSr Example Ben de git-ti.m I too go-TENSE-PERS 'I went too.' Ben-de kalem ear I-LOCATIVE pen exist 'I have a pen.' ap-afzk durum INT-clear situation 'Very clear situation' uzun yol long road 'long road' bu- ndan ba~ka this-ABLATIVE other 'other than this' ktz kedi-yi gSr-dii girl cat-ACC see-TENSE or ktz g6rdii kediyi 'The girl saw the cat' Figure 2: Operators in the proposed model. ments are of the following f (function) sign: Fres ] /LphonJ res-op-arg is the categorial notation for the ele- ment. phon represents the phonological string. Lex- ical elements may have (a) phonemes, (b) mete- phonemes such as H for high vowel, and D for a dental whose voicing is not yet determined, and (c) optional segments, e.g., -(y)lA, to model vowel/consonant drops, in the phon feature. During composition, the surface forms of composed elements are mapped and saved in phon. phon also allows efficient lexicon search. For instance, the causative suffix -DHr has eight different realizations but only one lexical entry. Every res and arg feature has an f or p (property) sign: syn 1 pLSernj syn and sere are the sources of grammatical (g sign) and semantic (s sign) properties, respectively. These properties include agreement features such as person, number, and possessive, and selectional re- 303 strictions: "cat type form restr <cond> $ "person " number poss nprop case relative form "reflexive reciprocal causative passive vprop tense modal aspect person form restr <cond> g A special feature value called none is used for imposing certain morphotactic constraints, and to make sure that the stem is not inflected with the same feature more than once. It also ensures, through syn constraints, that inflections are marked in the right order (cf., Figure 3). 
5 Conclusion Turkish is a language in which grammatical func- tions can be marked morphologically (e.g., case), or syntactically (e.g., indirect objects). Semantic composition is also affected by the interplay of mor- phology and syntax, for instance the change in the scope of modifiers and genitive suffixes, or valency and thematic role change in causatives. To model interactions between domains, we propose a catego- rial approach in which composition in all domains proceed in parallel. As an implementation, we have been working on the modelling of Turkish causatives using this framework. 6 Acknowledgements I would like to thank my advisor Cem Bozsahin for sharing his ideas with me. This research is supported in part by grants from Scientific and Technical Re- search Council of Thrkey (contract no. EEEAG- 90), NATO Science for Stability Programme (con- tract name TU-LANGUAGE), and METU Gradu- ate School of Applied Sciences. References A. E. Ades and M. Steedman. 1982. On the order of words. Linguistics and Philosophy, 4:517-558. res op arg sere }hon "]H" res cat n r person none number none possessive none syn nprop |case none |relative none Lform common type property ] sere form h~ I~)j op (/, free, concat) syn Lnprop [ form com. or prop. Lsem r type ] L f°rm ~]ntity )hob \, bound, suffix) cat n F person none number singular possessive none syn nprop |case none /relative none Lform common !formtype &ntity] Figure 3: Lexicon entry for -lH. Jack Hoeksema and Richard D. Janda. 1988. Im- plications of process-morphology for categorial grammar. In R. T. Oehrle, E. Bach, and D. Wheeler, editors, Categorial Grammars and Nat- ural Language Structures, D. Reidel, Dordrecht, 1988. Beryl Hoffman. 1992. A CCG approach to free word order languages. In Proceedings of the 30th An- nual Meeting of the A CL, Student Session, 1992. Richard T. Oehrle. 1988. Multi-dimensional compo- sitional functions as a basis for grammatical anal- ysis. In R. T. Oehrle, E. Bach, and D. Wheeler, editors, Categorial Grammars and Natural Lan- guage Structures, D. Reidel, Dordrecht, 1988. C. Pollard and I. A. Sag. 1994. Head-driven Phrase Structure Grammar. University of Chicago Press. M. Steedman. 1985. Dependencies and coordination in the grammar of Dutch and English. Language, 61:523-568. 304 | 1995 | 44 |
Polyphony and Argumentative Semantics Jean-Michel Grandchamp* LIMSI-CNRS B.P. 133 91403 ORSAY CEDEX FRANCE [email protected] Abstract We extract from sentences a superstruc- ture made of argumentative operators and connectives applying to the remaining set of terminal sub-sentences. We found the argumentative interpretation of utterances on a semantics defined at the linguistic le- vel. We describe the computation of this particular semantics, based on the cons- traints that the superstructure impels to the argumentative power of terminal sub- sentences. 1 Introduction Certain utterance structures contain linguistic clues that constrain their interpretation on an argumen- tative basis. The following example illustrates these constraints: I was robbed yesterday... (1) ...but luckily I had little money. (2) ...but luckily I had a little money. (3) ...but unfortunately I had little money. (4) ...but unfortunately I had a little money. We describe and compute the signification of such sentences by specifying how the key words (in italics) constrain the argumentative power of the terminal sub-sentences (TSS) "I was robbed yesterday" and "I had money". They may all be interpreted in a relevant context, but hints for recognizing the need of an "odd" context are given. For instance, in (1) and (2), the robbery is considered bad because of the opposition introduced by "but", to something con- sidered happy because of "luckily". Holding money is considered good in (2) and bad in (1) because of the general structure of the sentence and the oppo- sition between "little" and "a little". In (3) and (4), the robbery is considered good, while in (3) money is normally considered good too, and in (4) (the od- dest) it is considered bad (imagine a speaker who * This research is supported by SNCF, Direction de la Recherche, D6partement RP, 45 rue de Londres, 75379 Paris Cedex 08 France. usually likes to be robbed just to see the disappoint- ment because he holds no money). We see on these examples that TSS's are argumentatively ambiguous and modifiers constrain them. In this paper we propose, for a given utterance, the construction of the signification of the under- lying sentence, which captures its polyphonic and argumentative aspects. The signification of a sent- ence is viewed as the application of an argumentative super-structure to the signification of TSS's, free of operators or connectives. The signification must fi- nally be interpreted in the context of the utterance. 2 Linguistic Background Our model rests on a framework inspired by Ducrot (1980). He defines an utterance as a concrete occur- rence of an abstract entity called a sentence. Under- standing an utterance means computing its meaning, which may be formalized in different contexts (such as speech acts or beliefs). The meaning is built from the context and from the signification of the sent- ence which :lescribes all potential uses of the lin- guistic matter. Ducrot's integrated pragmafics also claims that many phenomena usually described at the pragmatic level, must be described in the signi- fication (such as argumentation). Within Ducrot's framework, we use his theory of polyphony, topoi and modifiers. Polyphony is a theory that models utterances with three levels of characters. The talking subject refers to the per- son who pronounced the words. The talking subject introduces the speaker to whom the words are at- tributed (different from the talking subject in some cases such as indirect speech). 
Sentences contain li- teral semantic contents, each one being under the responsibility of an utterer. The relation between the speaker of a sentence and the utterer of a con- tent defines the commitment of the speaker to such a semantic content. This commitment takes one of the following values: identification (ID), opposition (OPP) and partial commitment (PC) (Ducrot, 1984; Grandchamp, 1994). Sentences are chained under linguistic warrants 305 called topoi (plural of topos). Topoi are found in words. In a sentence or a combination of sentences, some topoi are selected, others are not relevant to the discourse context. In the interpretative process, still others will be eliminated because of irrelevance to the situation. A topos is selected under one of its topical forms, made up of a direction (positive or negative) and other parameters. The topical form is selected with a given strength. For instance, there is a topos T linking wealth to acquisitions. The word "rich" may be seen as the positive form of T that says "when you are rich you may buy a lot of things". The word "poor" contains the negative form of the same topos T, that is "when you are not rich you may not buy a lot of things". Unlike the warrants of Toulmin (1958), topoi are not logic warrants. They may give some support for inferences, but do not have to. The strength is ruled by a subclass of operators called modifiers, whose semantics is described pre- cisely as modifying the strength of a selected topos. Such words include "very", "little" or "a little". Mo- differs combine with each other and with argument sentences. The strength is specified by a lexical- based partial ordering, producing non-quantitative degrees similar to Klein's (1982) . 3 Computational Framework 3.1 Signification of sentences We have discarded the utterance/sentence level of polyphony in order to simplify the presentation. Gi- ven a set of topoi T, a set of strength markers F, the set D={positive, negative} of directions, and the set V={ID,PC,OPP} of polyphonic values, we define the set C=TxFxDxV of argumentative cells: the topos, its direction, the strength and the polyphonic com- mitment. The signification of a sentence is defined as a disjunction of subsets of C. 3.2 Syntax Given a sentence, we identify operators, connectives and modifiers, and build the A-structure of the sent- ence linking these linguistic clues to the TSS's. A sample A-structure is given in Figure 1. Connecti- ves constrain a pair of sentences or a sentence and a discursive environment, operators constrain argu- mentative power, and modifiers constrain only ar- gumentative orientation and strength. In addition, connectives and operators also specify the commit- ment of the speaker to semantic contents, by means of the theory of polyphony. 3.3 Lexical contributions A TSS has a semantics that is described in terms of predications, all but one being marked by pre- supposition. The semantics of each predication is described as a set of argumentative cells. Connecti- ves and operators contribute to the computation of Connective but I [ Unfortunately ] [ Terminal sentence I Terminal sentence [I was robbed yesterday I had money Figure 1: A-structure for "I was robbed yesterday, but unfortunately I had a little money" the signification in terms of functional transforma- tions of the signification along the four dimensions of the cells. The signification of TSS is assumed to be computed from the lexicon by a compositional process. 
3.4 Argumentative structure The A-structure is then considered as the applica- tion of an argumentative structure (made of modi- fiers, operators and connectives) to a vector of TSS's. The signification of a complete sentence is computed as the application of what we call the &-structure. A &-structure is a function that takes as many argu- ments as there are TSS's, and is defined by using ba- sic functions that are also used for the description of operators and connectives. Examples of basic func- tions that operate cell by cell are the modification of the polyphonic value, the direction or the strength. Examples of basic functions that operate on a set ofcells are the selection of cells with a given poly- phonic value, topos or direction. The ~-structure is computed recursively on the A-structure. As the identification or the contribution of an operator may be ambiguous, the ~-structures may contain disjunc- tions. 3.5 Computation Given a se.atence, its (ambiguous) A- and ~- structures are computed. In the normal bottom-up process, the signification of TSS's is computed, and the ¢-structure is applied. The result is the (ambi- guous) signification of the complete sentence. If the signification of TSS's reflects their "stan- dard" or "literal" potential, the normal bottom-up process may fail. We wish to design &-structures so that they may be used for two additional tasks that may require a top-down process: (1) accept TSS de- scriptions containing free variables, and produce the sets of constraints on them that lead to a solution; (2) provide the interpretation process with a way of generating "unusual significations" of TSS's requi- red by the global effect of the ~-structure. 306 4 Sample Lexical Descriptions Connective "but": the signification of "P1 but P2" is computed from the significations of P1 and P2, with the following modifications: generate al- ternatives according to a partition of topoi of P1 and P2 (whose cells have free commitment varia- bles) with the "opposite" relation which holds in T; in each alternative, commit the corresponding cells with the value PC for P1 and ID for P2. "P1 but P2" will argue in the same way as P2 alone, based on a topos that can be opposed to one of P1. Modifier "a little": the signification of "a little P" is the one of P where the strength of all cells is attenuated. Modifier "little": the signification of "little P" changes the direction of the cells into the converse value (anti-orientation). TSS "John stopped smoking": its signification is formed of two sets of cells, the commitment value being fixed to Pc for the cells from the presupposed predication [John smoked before] and left free for the main predication [John does not smoke now]. 5 Interpretation The signification of TSS's, connectives, and opera- tors may contain instructions referring to the con- text for the attribution of values. The interpretative process must fill these holes. It also further selects in the sets of topoi those connected to the situation. It drives the top-down process for generating data corresponding to "odd" contexts. We claim that the argumentative structure of sentences is never questioned by the interpretative process, that it fully captures the argumentative po- tential of the sentence and that it is reliable. The signification is then a firm base for the computation of the meaning. 
6 Related Work Most works on argumentation define links between propositions at a logical level, so that linguistic stu- dies focus on pragmatics rather than semantics (Co- hen, 1987). Some ideas of Ducrot were already used in systems: argumentative orientation (Guez, I990) and polyphony (Elhadad and McKeown, 1990). Be- sides, Itaccah (1990) develops argumentative seman- tics without the need of a theory of utterance. 7 Conclusion We have isolated a semantic module which allows the interpretation process to take into account the ar- gumentative constraints imposed by linguistic clues. We designed this module so that it starts from le- xical descriptions which we are able to provide ma- nually, and produces a structure whose interpreta- tion can be computed. Remaining difficulties lay in the linguistic theories themselves (mainly combining modifiers and cataloguing topoi), the signification of TSS's (which should be compositional) and the inte- gration of argumentative semantics with informative and illocutionary elements. References It. Cohen. 1987. AnMyzing the structure of argu- mentative discourse. Computational linguistics, 13(1-2). O. Ducrot et al. 1980. Les roots du discours, les ~ditions de Minuit. O. Ducrot. 1984. Le dire et le dit. les ~ditions de Minuit. M. Elhadad and K. It. McKeown. 1990. Generating connectives. In Proc. Coling, Helsinki, Finland. J.-M. Grandchamp. 1994. l~nonciation et dia- logue homme-machine. In Proc. Le Dialogique, Le Mans, France. S. Guez. 1990. A computational model for argu- ments understanding. In Proc. Coling, Heisinki, Finland. E. Klein. 1982. The interpretation of linguistic com- paratives. Journal of Linguistics, 18. P.-Y. Itaccah. 1990. Modelling argumentation, or modelling with argumentation. Argumentation, 4. S. Toulmin. 1958. The uses of Arguments. Cam- bridge University Press. 307 | 1995 | 45 |
Knowledge-based Automatic Topic Identification Chin-Yew Lin Department of Electrical Engineering/System University of Southern California Los Angeles, CA 90089-2562, USA chinyew~pollux.usc.edu Abstract As the first step in an automated text sum- marization algorithm, this work presents a new method for automatically identi- fying the central ideas in a text based on a knowledge-based concept counting paradigm. To represent and generalize concepts, we use the hierarchical concept taxonomy WordNet. By setting appropri- ate cutoff values for such parameters as concept generality and child-to-parent fre- quency ratio, we control the amount and level of generality of concepts extracted from the text. 1 1 Introduction As the amount of text available online keeps grow- ing, it becomes increasingly difficult for people to keep track of and locate the information of inter- est to them. To remedy the problem of information overload, a robust and automated text summarizer or information extrator is needed. Topic identifica- tion is one of two very important steps in the process of summarizing a text; the second step is summary text generation. A topic is a particular subject that we write about or discuss. (Sinclair et al., 1987). To identify the topics of texts, Information Retrieval (IR) re- searchers use word frequency, cue word, location, and title-keyword techniques (Paice, 1990). Among these techniques, only word frequency counting can be used robustly across different domains; the other techniques rely on stereotypical text structure or the functional structures of specific domains. Underlying the use of word frequency is the as- sumption that the more a word is used in a text, the more important it is in that text. This method 1This research was funded in part by ARPA under or- der number 8073, issued as Maryland Procurement Con- tract # MDA904-91-C-5224 and in part by the National Science Foundation Grant No. MIP 8902426. recognizes only the literal word forms and noth- ing else. Some morphological processing may help, but pronominalization and other forms of coreferen- tiality defeat simple word counting. Furthermore, straightforward word counting can be misleading since it misses conceptual generalizations. For exam- ple: "John bought some vegetables, fruit, bread, and milk." What would be the topic of this sentence? We can draw no conclusion by using word counting method; where the topic actually should be: "John bought some groceries." The problem is that word counting method misses the important concepts be- hind those words: vegetables, fruit, etc. relates to groceries at the deeper level of semantics. In rec- ognizing the inherent problem of the word counting method, recently people have started to use artifi- cial intelligence techniques (Jacobs and ttau, 1990; Mauldin, 1991) and statistical techniques (Salton et al., 1994; Grefenstette, 1994) to incorporate the sementic relations among words into their applica- tions. Following this trend, we have developed a new way to identify topics by counting concepts instead of words. 2 The Power of Generalization In order to count concept frequency, we employ a concept generalization taxonomy. Figure 1 shows a possible hierarchy for the concept digital computer. According to this hierarchy, if we find iaptop and hand-held computer, in a text, we can infer that the text is about portable computers, which is their par- ent concept. 
And if in addition, the text also mentions workstation and mainframe, it is reasonable to say that the topic of the text is related to digital computer. Using a hierarchy, the question is now how to find the most appropriate generalization. Clearly we cannot just use the leaf concepts -- since at this level we have gained no power from generalization. On the other hand, neither can we use the very top concept -- everything is a thing. We need a method of identifying the most appropriate concepts somewhere in the middle of the taxonomy. Our current solution uses concept frequency ratio and starting depth. [Figure 1: A sample hierarchy for the concept digital computer; node labels include Workstation, PC, Mainframe, Portable computer, Desktop computer, Hand-held computer, and Laptop computer.]
2.1 Branch Ratio Threshold We call the frequency of occurrence of a concept C and its subconcepts in a text the concept's weight. We then define the ratio R, at any concept C, as follows: R = MAX(weight of all the direct children of C) / SUM(weight of all the direct children of C). R is a way to identify the degree of summarization informativeness. The higher the ratio, the less concept C generalizes over many children, i.e., the more it reflects only one child. Consider Figure 2. In case (a) the parent concept's ratio is 0.70, and in case (b), it is 0.3 by the definition of R. To generate a summary for case (a), we should simply choose Apple as the main idea instead of its parent concept, since it is by far the most mentioned. In contrast, in case (b), we should use the parent concept Computer Company as the concept of interest. Its small ratio, 0.30, tells us that if we go down to its children, we will lose too much important information. We define the branch ratio threshold (Rt) to serve as a cutoff point for the determination of interestingness, i.e., the degree of generalization. We define that if a concept's ratio R is less than Rt, it is an interesting concept.
2.2 Starting Depth We can use the ratio to find all the possible interesting concepts in a hierarchical concept taxonomy. If we start from the top of a hierarchy and proceed downward along each child branch whenever the branch ratio is greater than or equal to Rt, we will eventually stop with a list of interesting concepts. We call these interesting concepts the interesting wavefront. We can start another exploration of interesting concepts downward from this interesting wavefront, resulting in a second, lower, wavefront, and so on. By repeating this process until we reach the leaf concepts of the hierarchy, we can get a set of interesting wavefronts. Among these interesting wavefronts, which one is the most appropriate for generation of topics? It is obvious that, using the concept counting technique we have suggested so far, a concept higher in the hierarchy tends to be more general. On the other hand, a concept lower in the hierarchy tends to be more specific. In order to choose an adequate wavefront with appropriate generalization, we introduce the parameter starting depth, Ds. [Figure 2: Ratio and degree of generalization. (a) Computer Company(10) with children Toshiba(0), NEC(1), Compaq(1), Apple(7), IBM(1); (b) Computer Company(10) with children Toshiba(2), NEC(2), Compaq(3), Apple(2), IBM(1).] We require that the branch ratio criterion defined in the previous section can only take effect after the wavefront exceeds the starting depth; the first subsequent interesting wavefront generated will be our collection of topic concepts. The appropriate Ds is determined by experimenting with different values and choosing the best one.
3 Experiment We have implemented a prototype system to test the automatic topic identification algorithm. As the concept hierarchy, we used the noun taxonomy from WordNet 3 (Miller et al., 1990). WordNet has been used for other similar tasks, such as (Resnik, 1993). For input texts, we selected articles about information processing of average 750 words each out of Business Week (93-94). We ran the algorithm on 50 texts, and for each text extracted eight sentences containing the most interesting concepts. How, then, do we evaluate the results? For each text, we obtained a professional's abstract from an online service. Each abstract contains 7 to 8 sentences on average. In order to compare the system's selection with the professional's, we identified in the text the sentences that contain the main concepts mentioned in the professional's abstract. We scored how many sentences were selected by both the system and the professional abstracter. We are aware that this evaluation scheme is not very accurate, but it serves as a rough indicator for our initial investigation. We developed three variations to score the text
The current system draws a performance lower bound for future systems. 4This threshold and the starting depth are deter- mined by running the system through different parame- ter setting. We test ratio = 0.95,0.68,0.45,0.25 and depth = 3,6,9,12. Among them, 7~t = 0.68 and ~D~ = 6 give the best result. 5The recall (R) and precision (P) for the three varia- tions axe: vax1(R=0.32,P=0.37), vax2(R=0.30,P=0.34), and vax3(R=0.28,P=0.33) when the system picks 8 sentences. We have not yet been able to compare the perfor- mance of our system against IR and commerically available extraction packages, but since they do not employ concept counting, we feel that our method can make a significant contribution. We plan to improve the system's extraction re- suits by incgrporating linguistic tools. Our next goal is generating a summary instead of just extract- ing sentences. Using a part-of-speech tagger and syntatic parser to distinguish different syntatic cat- egories and relations among concepts; we can find appropriate concept types on the interesting wave- front, and compose them into summary. For exam- ple, if a noun concept is selected, we can find its accompanying verb; if verb is selected, we find its subject noun. For a set of selected concepts, we then generalize their matching concepts using the taxon- omy and generate the list of {selected concepts + matching generalization} pairs as English sentences. There are other possibilities. With a robust work- ing prototype system in hand, we are encouraged to look for new interesting results. References Gregory Grefenstette. 1994. Ezplorations in Au- tomatic Thesaurus Discovery. Kluwer Academic Publishers, Boston. Paul S. Jacobs and Lisa F. Rau. 1990. SCISOR: Extracting information from on-line news. Com- munication of the A CM, 33(11):88-97, November. Michael L. Mauldin. 1991. Conceptual Information Retrieval -- A Case Study in Adaptive Partial Parsing. Kluwer Academic Publishers, Boston. George Miller, Richard Beckwith, Christiane Fell- baum, Derek Gross, and Katherine Miller. 1990. Five papers on wordnet. CSL Report 43, Congni- tive Science Labortory, Princeton University, New Haven, July. Chris D. Paice. 1990. Constructing litera- ture abstracts by computer: Techinques and prospects. Information Processing and Manage- ment, 26(1):171-186. Philip Stuart Resnik. 1993. Selection and Informa- tion: A Class-Based Approach to Lezical Relation- ships. Ph.D. thesis, University of Pennsylvania, University of Pennsylvania. Gerard Salton, James Allan, Chris Buckley, and Amit Singhal. 1994. Automatic analysis, theme generation, and summarization of machine- readable texts. Science, 264:1421-1426, June. John Sinclair, Patrick Hanks, Gwyneth Fox, Rosamuna Moon, and Penny Stock. 1987. Collins COBUILD English Language Dictionary. William Collins Sons & Co. Ltd., Glasgow, UK. 310 | 1995 | 46 |
Acquiring a Lexicon from Unsegmented Speech Carl de Marcken MIT Artificial Intelligence Laboratory 545 Technology Square, NE43-804 Cambridge, MA, 02139, USA [email protected] Abstract We present work-in-progress on the ma- chine acquisition of a lexicon from sen- tences that are each an unsegmented phone sequence paired with a primitive represen- tation of meaning. A simple exploratory algorithm is described, along with the di- rection of current work and a discussion of the relevance of the problem for child language acquisition and computer speech recognition. 1 Introduction We are interested in how a lexicon of discrete words can be acquired from continuous speech, a prob- lem fundamental both to child language acquisition and to the automated induction of computer speech recognition systems; see (Olivier, 1968; Wolff, 1982; Cartwright and Brent, 1994) for previous computa- tional work in this area. For the time being, we ap- proximate the problem as induction from phone se- quences rather than acoustic pressure, and assume that learning takes place in an environment where simple semantic representations of the speech intent are available to the acquisition mechanism. For example, we approximate the greater problem as that of learning from inputs like Phon. Input: /~raebltslne~ b~W t/ Sem. Input: { BOAT A IN RABBIT THE BE } (The rabbit's in a boat.) where the semantic input is an unordered set of iden- tifiers corresponding to word paradigms. Obviously the artificial pseudo-semantic representations make the problem much easier: we experiment with them as a first step, somewhere between learning language "from a radio" and providing an unambiguous tex- tual transcription, as might be used for training a speech recognition system. Our goal is to create a program that, after train- ing on many such pairs, can segment a new phonetic utterance into a sequence of morpheme identifiers. Such output could be used as input to many gram- mar acquisition programs. 2 A Simple Prototype We have implemented a simple algorithm as an ex- ploratory effort. It maintains a single dictionary, a set of words. Each word consists of a phone sequence and a set of sememes (semantic symbols). Initially, the dictionary is empty. When presented with an utterance, the algorithm goes through the following sequence of actions: • It attempts to cover ("parse") the utterance phones and semantic symbols with a sequence of words from the dictionary, each word offset a certain distance into the phone sequence, with words potentially overlapping. • It then creates new words that account for un- covered portions of the utterance, and adjusts words from the parse to better fit the utterance. • Finally, it reparses the utterance with the old dictionary and the new words, and adds the new words to the dictionary if the resulting parse covers the utterance well. Occasionally, the program removes rarely-used words from the dictionary, and removes words which can themselves be parsed. The general operation of the program should be made clearer by the follow- ing two examples. In the first, the program starts with an empty dictionary, early in the acquisition process, and receives the simple utterance/nina/{ NINA } (a child's name). Naturally, it is unable to parse the input. Utterance: Words: Unparsed: Mismatched: Phones Sememes /nina/ { JINA } /nina/ { NINA } From the unparsed portion of the sentence, the program creates a new word, /nina/ { NINA }. 
It then reparses Phones Sememes Utterance: /nina/ { NINA } Words: /nine/ { sISA } Unparsed: Mismatched: 311 Having successfully parsed the input, it adds the new word to the dictionary. Later in the acquisition process, it encounters the sentence you kicked off ~he sock, when the dictionary contains (among other words) /yu/ { YOU }, /~a/ { THE }, and /rsuk/ { SOCK }. Utterance: Words: Unparsed: Mismatched: Phones Sememes /yukIkt~f~sak/ { KiCK YOU OFF SOCK THE } /y./ { YOU } I~1 { THE } /rs~k/ { sock } kIkt~f { KICK OFF } r The program creates the new word /kIkt~f/ { KICK OFF } to account for the unparsed portion of the input, and/suk/{ SOCK} to fix the mismatched phone. It reparses, Phones Utterance: /yukIkt3f5~sak/ Words: !yu/ /klkt~f/ /a~/ /s~k/ /rs~k/ unused Unparsed: Mismatched: Sememes { KICK YOU OFF SOCK THE } { You } { KICK OFF } { THE } { SOCK } { SOCK } On this basis, it adds/kIkt~f/{ KICK OFF } and /sak/ { SOCK } to the dictionary. /rsuk/ { SOCK }, not used in this analysis, is eventually discarded from the dictionary for lack of use. /klkt~f/{ KICK OFF } is later found to be parsable into two sub- words, and also discarded. One can view this procedure as a variant of the expectation-maximization (Dempster et al., 1977) procedure, with the parse of each utterance as the hidden variables. There is currently no preference for which words are used in a parse, save to mini- mize mismatches and unparsed portions of the input, but obviously a word grammar could be learned in conjunction with this acquisition process, and used as a disambiguation step. 3 Tests and Results To test the algorithm, we used 34438 utterances from the Childes database of mothers' speech to chil- dren (MacWhinney and Snow, 1985; Suppes, 1973). These text utterances were run through a publicly available text-to-phone engine. A semantic dictio- nary was created by hand, in which each root word from the utterances was mapped to a correspond- ing sememe. Various forms of a root ("see", "saw", "seeing") all map to the same sememe, e.g., SEE . Semantic representations for a given utterance are merely unordered sets of sememes generated by tak- ing the union of the sememe for each word in the utterance. Figure 1 contains the first 6 utterances from the database. We describe the results of a single run of the al- gorithm, trained on one exposure to each of the 34438 utterances, containing a total of 2158 differ- ent stems. The final dictionary contains 1182 words, where some entries are different forms of a com- mon stem. 82 of the words in the dictionary have never been used in a good parse. We eliminate these words, leaving 1100. Figure 2 presents some entries in the final dictionary, and figure 3 presents all 21 (2%) of the dictionary entries that might be reason- ably considered mistakes. Phones /yu/ /~// /.st/ It,ll /d./ /e/ /It/ /ax/ /in/ /wi/ Sememes Phones Sememes { YOU } /bik/ { BEAK } { THE } /we/ { wAY } { WHAT } /hi/ { HEY } { TO } /brik/ { BREAK } { DO } /f, vg3/ { FINGER } { A } Ikisl { KISS } { IT } /tap/ { TOP } { I } /k~ld/ { CALL } { IS } l~gz/ { EGG } { WE } /eng/ { THING } Figure 2: Dictionary entries. The left 10 are the 10 words used most frequently in good parses. The right 10 were selected randomly from the 1100 en- tries. 
/iv/{ BE } /z~/ { YOU } /iv/{ DO } Hi,./{ SHE BE } /shappin/ { HAPPEN } /t I { NOT } /skatt/ { BOB SCOTT } /nidahz/ { NEEDLE BE } IsAmOl { SOMETHING } Innpi~/{ sNooPy } I*oI { WILL } I""I { AT ZOO } /don/ { DO } /sdf/{ YOU } /~/{ BE } /smAd/ { MUD } /~r~/{ BE } Idontl { DO NOT } /watarOiz/ { WHAT BE THESE } /wathappind/ { WHAT HAPPEN} /dran^63wiz/ { DROWN OTHERWISE } Figure 3: All of the significant dictionary errors. Some of them, like /J'iz/ are conglomerations that should have been divided. Others, like/t/, /wo/, and /don/ demonstrate how the system compen- sates for the morphological irregularity of English contractions. The /I~/problem is discussed in the text; misanalysis of the role of/I~/ also manifests itself on something. The most obvious error visible in figure 3 is the suffix -ing (/I~/), which should be have an empty se- meme set. Indeed, such a word is properly hypothe- sized but a special mechanism prevents semantically empty words from being added to the dictionary. Without this mechanism, the system would chance 312 Sentence this is a book. what do you see in the book? how many rabbits? how many? one rabbit. what is the rabbit doing? Phones /bIslzebuk/ /watduyusilnb~buk/ /hat~menirabhlts/ /hatlmeni/ /w^nrabblt/ /watlzb~rabbItdulD / Sememes { THIS BE A'B00K ) { WHAT DO YOU SEE IS THE BOOK } { HOW MANY RABBIT } { HOW MANY } { ONE RABBIT } { WHAT BE THE RABBIT DO } Figure 1: The first 6 utterances from the Childes database used to test the algorithm. upon a new word like ring,/rig/, use the/I~/{} to account for most of the sound, and build a new word /r/{ RINa } to cover the rest; witness something in figure 3. Most other semantically-empty affixes (plu- ral/s/for instance) are also properly hypothesized and disallowed, but the dictionary learns multiple entries to account for them (/eg/ "egg" and /egz/ "eggs"). The system learns synonyms ("is", "was", "am", ...) and homonyms ("read", "red" ; "know", "no") without difficulty. Removing the restriction on empty semantics, and also setting the semantics of the function words a, an, the, that and of to {}, the most common empty words learned are given in figure 4. The ring prob- lem surfaces: among other words learned are now /k/{ CAR } and/br/{ BRI/IG }. To fix such prob- lems, it is obvious more constraint on morpheme order must be incorporated into the parsing pro- cess, perhaps in the form of a statistical grammar acquired simultaneously with the dictionary. Word Source l~v/ {} -~,,g I~I {} the /o/{} ? /r/{} uo./yo., Is/{) plur~ -~ It/ {) is/'s Word Source /wo/{} ? /el {} a /an/{} .. /~,,/{} o/ /z/ {} plural -s Figure 4: The most common semantically empty words in the final dictionary. 4 Current Directions The algorithm described above is extremely simple, as was the input fed to it. In particular, • The input was phonetically oversimplified, each word pronounced the same way each time it oc- curred, regardless of environment. There was no phonological noise and no cross-word effects. • The semantic representations were not only noise free and unambiguous, but corresponded directly to the words in the utterance. To better investigate more realistic formulations of the acquisition problem, we are extending our coverage to actual phonetic transcriptions of speech, by allowing for various phonological processes and noise, and by building in probabilistic models of morphology and syntax. 
We are further reducing the information present in the semantic input by removing all function word symbols and merging various content symbols to encompass several word paradigms. We hope to transition to phonemic in- put produced by a phoneme-based speech recognizer in the near future. Finally, we are instituting an objective test mea- sure: rather than examining the dictionary directly, we will compare segmentation and morpheme- labeling to textual transcripts of the input speech. 5 Acknowledgements This research is supported by NSF grant 9217041- ASC and AR.PA under the ttPCC program. References Timothy Andrew Cartwright and Michael R. Brent. 1994. Segmenting speech without a lexicon: Evi- dence for a bootstrapping model of lexical acqui- sition. In Proc. of the 16th Annual Meeting of the Cognitive Science Society, IIillsdale, New Jersey. A. P. Dempster, N. M. Liard, and D. B. Rubin. 1977. Maximum liklihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B(39):1-38. B. MacWhinney and C. Snow. 1985. The child lan- guage data exchange system. Journal of Child Language, 12:271-296. Donald Cort Olivier. 1968. Stochastic Grammars and Language Acquisition Mechanisms. Ph.D. thesis, Harvard University, Cambridge, Mas- sachusetts. Patrick Suppes. 1973. The semantics of children's language. American Psychologist. J. Gerald Wolff. 1982. Language acquisition, data compression and generalization. Language and Communication, 2(1):57-89. 313 | 1995 | 47 |
Semantic Information Preprocessing for Natural Language Interfaces to Databases Milan Mosny Simon Fraser University Burnaby, BC V5A 1S6, Canada [email protected]
Abstract An approach is described for supplying selectional restrictions to parsers in natural language interfaces (NLIs) to databases by extracting the selectional restrictions from semantic descriptions of those NLIs. Automating the process of finding selectional restrictions reduces NLI development time and may avoid errors introduced by hand-coding selectional restrictions.
1 Introduction An approach is described for supplying selectional restrictions to parsers in natural language interfaces (NLIs) to databases. The work is based on Linguistic Domain Theories (LDTs) (Rayner, 1993). In our approach, we propose a restricted version of LDTs (RLDTs) that can be normalized and, in normalized form, used to construct selectional restrictions. We assume that the semantic description of an NLI is described by such an RLDT. The outline of the paper is as follows. Section 2 provides a brief summary of original LDTs, illustrates how Abductive Equivalential Translation (AET) (Rayner, 1993) can use them at run-time, and describes RLDTs. Sections 3 and 4 describe off-line processes - the normalization process and the extraction of selectional restrictions from normalized RLDTs respectively. Section 5 contains discussion, including related and future work.
2 LDT, AET and RLDT LDT and AET. LDT was introduced for a system where the input is a logical formula whose predicates approximately correspond to the content words of the input utterance in natural language (lexical predicates). Output is a logical formula consisting of predicates meaningful to the database engine (database predicates). AET provides a formalism for describing how a formula consisting of lexical predicates can be translated into a formula consisting of database predicates. The information used in the translation process is an LDT. A theory Γ contains horn clauses ∀(P1 ∧ ... ∧ Pn → Q), or universal conditional equivalences ∀(P1 ∧ ... ∧ Pn → (R1 ∧ ... ∧ Rl ≡ F)), or existential equivalences ∀((∃X1...Xm.P) ≡ F), where Pi, Ri denote atomic formulas, Q denotes a literal, F denotes a formula and ∀ denotes universal closure. The LDT also contains functional relationships that are used for simplifications of the translated formulas, and assumption declarations. Given a formula F_ling consisting of lexical predicates and an LDT, AET tries to find a set of permissible assumptions A and a formula F_db consisting of the database predicates such that Γ ∪ A ⊨ ∀(F_ling ≡ F_db). The translation of F_ling is done one predicate at a time. For each predicate in the formula F_ling, there is a so-called conjunctive context that consists of conjuncts occurring together with the predicate in F_ling, meaning postulates in the theory Γ, and the information stored in the database. Given an LDT, this conjunctive context determines how the predicate will be translated by AET.
As an example, suppose that the lexical representation of the sentence Is there a student who takes cmpt710 or cmpt720? is F_ling: ∃X, E, Y, Y1 . student(X) ∧ (take(E, X, Y) ∧ unknown(Y, cmpt710) ∨ take(E, X, Y1) ∧ unknown(Y1, cmpt720)). Suppose that the theory Γ consists of axioms: ∀X. student(X) ≡ db_student(X) (1); ∀X, E, Y, S. db_course(Y, S) ∧ db_student(X) → (take(E, X, Y) ≡ db_take(E, X, Y)) (2); ∀X, S. acourse(S) → (unknown(X, S) ≡ db_course(X, S)) (3); ∀E, X, Y. db_take(E, X, Y) → take(E, X, Y) (4), where student, take and unknown are lexical predicates and db_student, db_course, db_take are database predicates. Also suppose that the LDT declares as an assumption acourse(X), which can be read as "X denotes a course". Part of the conjunctive context associated with the formula take(E, X, Y) in F_ling is formula (5): student(X) ∧ unknown(Y, cmpt710). From (1) and (3) of the theory Γ it follows that (5) implies formula (6): db_student(X) ∧ db_course(Y, cmpt710). According to the translation rules of AET, axiom (2), and a logical consequence of the conjunctive context (6), the formula take(E, X, Y) can be translated into formula (7): db_take(E, X, Y). Formulas student(X), take(E, X, Y1), unknown(Y, cmpt710) and unknown(Y1, cmpt720) are translated similarly. Assuming cmpt710 and cmpt720 are courses, the input F_ling can be rewritten into F_db shown below: ∃X, E, Y, Y1 . db_student(X) ∧ (db_take(E, X, Y) ∧ db_course(Y, cmpt710) ∨ db_take(E, X, Y1) ∧ db_course(Y1, cmpt720)). So we can claim that F_db and F_ling are equivalent in the theory Γ under the assumption that cmpt710 and cmpt720 are courses.
RLDT. We shall constrain the expressive power of the LDT to suit tractability and efficiency requirements. We assume that the input is a logical formula whose predicates are input predicates. We assume that input predicates are not only lexical predicates, but also unresolved predicates used for, e.g., compound nominals (Alshawi, 1992), or for unknown words, as was demonstrated in the example above, or synonymous predicates that allow us to represent two or more different words with only one symbol. The output will be a logical formula consisting of output predicates. We do not suppose that the output formula contains pure database predicates. However, we allow further translation of the output formula into database formulae using only existential conditional equivalences. The process can be implemented very efficiently, and does not affect selectional restrictions of the input language. We assume that each atomic formula with input predicates can be translated into an atomic formula with output predicates. An RLDT therefore also contains a dictionary of atomic formulas that specifies which input atomic formulas can be translated into which output atomic formulas. Existential equivalences in the RLDT's logic will not be allowed. We also assume that F in the universal conditional equivalences is a conjunction of atomic formulas rather than an arbitrary formula. We demand that an RLDT be nonrecursive. Informally, RLDT nonrecursiveness means that for any set of facts A, if there is a Prolog-like derivation of an atomic formula F in the theory Γ ∪ A, then there is a Prolog-like derivation of F without recursive calls.
3 The Normalization Process Our basic idea is to preprocess the semantic information of the RLDT to create patterns of possible conjunctive contexts for each lexical predicate. The result of the preprocessing is a normalized RLDT: the collection of the lexical predicates, their meanings in terms of the database, and the patterns of the conjunctive contexts. First we introduce the term (Nontrivial) Normal Conditional Equivalence with respect to an RLDT T ((N)NCE(T)). Definition: Let T be an RLDT and Γ be the logical part of T. The quadruple (A, C, F_input, F_output) is NCE(T) iff C is a conjunction of input atomic formulas of T, A is a conjunction of assumptions of T, and the formulas ∀(A ∧ C → (F_input ≡ F_output)) and ∀(A ∧ F_output → F_input) are logical consequences of the theory Γ (we shall refer to the last condition as soundness of the NCE(T)). We shall call the quadruple (A, C, F_input, F_output) nontrivial NCE(T) (NNCE(T)) iff the formula C ∧ A does not imply the truth of F_output in the theory Γ. Informally it means that F_input can be rewritten to F_output if its conjunctive context implies A and does not imply the negation of C. (A, C) thus can be viewed as a pattern of conjunctive contexts that justifies the translation of F_input to F_output. We allow RLDTs to form theory hierarchies, where parent theories can use the results of their children's normalization process as their own logical part. Given an RLDT T, for each pair consisting of a ground lexical atomic formula F_input and a ground database atomic formula F_output from the dictionary of T, we find the set S of conditions (A, C) such that (A, C, F_input, F_output) is NCE(T). We shall call the set of all such NCE(T)s a normalized RLDT. If F_input and F_output contain constants that do not occur in the logic of the RLDT, the generalization rule of FOL can be used to derive more general results by replacing the constants by unique variables.
If T does not contain negative horn clauses of the form P → ¬Q, then the following completeness property can be proven: if (A1, C1, F_input, F_output) is NNCE(T) and S is the resulting set for the pair F_input, F_output, then there are conditions (A, C) in S such that A ∧ C is weaker than or equivalent to A1 ∧ C1. The normalization process itself is based on SLD-resolution (Lloyd, 1987), which we have chosen because it is fast, sound and complete but still provides enough reasoning power. Using the example from the previous section, the normalization algorithm, when given the pairs (student(a), db_student(a)), (unknown(a, b), db_course(a, b)) and (take(e, a, b), db_take(e, a, b)), will produce the results {(true, true)}, {(acourse(b), true)} and {(acourse(X), student(a) ∧ unknown(b, X))} respectively.
4 The Construction of Selectional Restrictions The normalized RLDT is used to construct selectional restrictions. We assign the tags "thing" or "attribute" to argument positions of the lexical predicates according to what kind of restriction the predicate imposes on the referent at its argument position. If the predicate is a noun or the referent refers to an event, we assign the tag "thing". If the predicate explicitly specifies that the referent has some attribute - e.g. the predicate big(X) specifies the size of the thing referenced by X, and the predicate take(_, X, _) specifies that the person referenced by X takes something - then we tag the argument position with "attribute". The normalized RLDT allows us to compute which "things" can be combined with which "attributes". That is, we can determine which words can be modified or complemented by which other words. We assume that the normalized RLDT has certain properties. Every NCE(T) that describes a translation of an "attribute" must also define a "thing" that constrains the same referent, e.g. the NCE(T) (true, person(X) ∧ drives(E, X, Y), big(Y), db_big_car(Y)) for the translation of the predicate big(Y) does not fulfil the requirement, but the NCE(T) (true, car(Y), big(Y), db_big_car(Y)) does. We also assume that if a certain "thing" does not occur in any of the NCE(T)s that translate an "attribute", then the "thing" cannot be combined with the "attribute". Using the example above and the assignments student(X): X is a "thing"; unknown(X, S): X is a "thing"; take(E, X, Y): E is a "thing", X and Y are "attributes", we can infer that student(X) can be combined with the attribute take(_, X, _) but cannot have the attribute take(_, _, X). To simplify results, we divide "attributes" into equivalence classes, where two "attributes" are equivalent if both attributes are associated with the same set of "things" that the attributes can be combined with. We then assign a set of representatives from these classes to "things". To be able to produce more precise results, we distinguish between two "attributes" that describe the same argument position of the same predicate according to the "thing" in the other "attribute" position of the predicate, when needed. Consider for example the preposition "on" as used in the phrases "on the table" or "on Monday". We handle the first argument position of a predicate on(X, Y) associated with the condition table(Y) as a different "attribute" as compared to the condition monday(Y).
Mapping Scrambled Korean Sentences into English Using Synchronous TAGs Hyun S. Park Computer Laboratory University of Cambridge Cambridge, CB2 3QG, U.K. Hyun.Park@cl.cam.ac.uk
Abstract Synchronous Tree Adjoining Grammars can be used for Machine Translation. However, translating a free order language such as Korean to English is complicated. I present a mechanism to translate scrambled Korean sentences into English by combining the concepts of Multi-Component TAGs (MC-TAGs) and Synchronous TAGs (STAGs).
1 Motivation Tree Adjoining Grammars (TAGs) were first developed by Joshi, Levy, and Takahashi (Joshi et al., 1975). There are other variants of TAGs such as STAGs (Shieber and Schabes, 1990), and MC-TAGs (Weir, 1988). STAGs in particular can be used for machine translation and were applied to Korean-English machine translation in a military message domain (Palmer et al., 1995). Park (Park, 1995) suggested a way of handling Korean scrambling using MC-TAGs together with a priority concept. However, as scrambled argument structures in Korean were represented as sets using MC-TAGs, a mechanism to combine MC-TAGs and STAGs was necessary to translate Korean scrambled sentences into English.
2 Korean-English Machine Translation Using STAGs STAGs are a variant of TAGs introduced to characterize correspondences between tree adjoining languages. They can be used to relate TAGs for two different languages for machine translation (Abeillé et al., 1990). The translation process consists of three steps. The source sentence is parsed according to the source grammar. Each elementary tree in the derivation is considered with the features given from the derivation through unification. Second, the source derivation tree is transferred to a target derivation. This step maps each elementary tree in the source derivation tree to a tree in the target derivation tree by looking in the transfer lexicon. And finally, the target sentence is generated from the target derivation tree obtained in the previous step. The transfer lexicon consists of pairs of trees, one from the source language and the other from the target language. Within the pair of trees, nodes may be linked. Whenever adjunction or substitution is performed on a linked node in a source tree, the corresponding operation applies to the linked node in the target tree. [Figure 1: The K-E Transfer Lexicon.] Canonical ordering of the arguments of transitive verbs in Korean is SOV. Whereas the case marker in English is implicit in the word, case markers are explicit in Korean. This is reflected in the transfer lexicon of Figure 1. So, the pair α in Figure 1 shows that Korean has an explicit subject case marker i, and the pair β shows that Korean has an explicit object case marker lul. Also, the pair γ shows the links between the SOV structure of Korean and the SVO structure of English. (1) K: Tom-i Jerry-lul ccossnunta. Tom-NOM Jerry-ACC chase E: Tom chases Jerry. To translate sentence (1), we start with the pair γ in Figure 1, and we substitute the pair α on the link from the Korean node SP to the English node NP. Then, pair β is substituted into the NP-OP pairs in γ, thus correctly transferring sentence (1).
However, Becker and Rambow show that TAGs that obey the co-occurrence constraint cannot handle the full range of scrambled sentences (Becket and Rainbow, 1990). As a result, non-local MC-TAG-DL (Multi-Component TAG with Dom- inance Link) was proposed as a way of handling scrambling 1. Later, by adding a priority concept to MC-TAG-DL, Park (Park, 1995) suggested a way of handling scrambling in Korean. 3.1 aAT~ & flAT~ structures I "IF ...] *" Tom, No: " ,{ I -'C,,-,, '] [1 ,o I For handling scrambling, the multi-adjunction concept in MC-TAGs can be used for combining a scrambled argument and its landing site. For exam- ple, a subject (e.g., Tom) would have two Korean structures as above. For notational convenience, call the two structures, aAT~s~, and ~AT~Gs~, re- spectively. In general, aAT~G represents a canonical NP structure and flAT~G represents a scrambled NP structure. ~.A~s~, shows a pair of structures for representing the scrambled subject argument. Call the left structure of ~AT~GsT~, flAT~s~, and the right structure, ~AT~g~,. ~A~g~s~, represents a scrambled subject, and ~.AT~G~, is used for repre- senting the place where the subject would have been in the canonical sentence. Similarly, flAT~Go~, de- notes a pair of structures for representing a scram- bled object argument. The basic idea is that whenever an argument is not in a scrambled position, it should be substituted into an available empty slot using the aAT~ struc- ture. The fiAT~G structure will be used only when the argument is in a scrambled position so that the aAT~G structure cannot be used. 3.2 An Example K: Jerry-lul Tom-i ccossnunLa. 2 Jerry-ACC Tom-NOM ehase-DECL E: Tom chases Jerry From the elementary trees in Figure 2, both sen- tences, (1) and (2) can be derived. For example, Figures 2(a), 2(b), and 2(d) can be used for sentence (1), to derive Figure 3(a). However, for sentence (2) where the order is OSV (the object argument is nAn additional constraint system called dominance links was added, thus giving rise to MC-TAG-DL. m u ° ; j o~, 0 j ' I i (a) (b) (c)~AT~OoT~ (d) ~i~ure 2: Elementary, Trees scrambled), Figures 2(a), 2(c), and 2(d) are used to derive Figure 3(b) (fl,4T~G~, is adjoined onto 5, and ~,4T~G~ is substituted into OPl ~ node.). As the trace feature is locally set within each flAT~ struc- ture, two OP nodes in Figure 3(b) are co-referenced with the same variable, < 1 >, indicating where the object should have been in the canonical sentence. S A SP Vp A A NP I OP VP N NO ~1 V I I I I (a) Canonical !l " I \ J ," ---. (b) Scrambled Fi~tre 3: Derived Trees Each elementary tree is given a priority. A higher priority is given to aAT~G structure over flAT~G. Generally, when a structure given a higher prior- ity over others can be successfully used for the final derivation of a sentence, the remaining structures will not be tried at all. Only when the highest pri- ority structure fails will the next available structure be tried 2. 4 Using MC-TAGs in STAGs For mapping Korean to English, the simple object (NP) structure of English (e.g., the right structure of /3 pair in Figure 1) can be mapped to two structures, i.e., aA~o~, and ~AT~go~,, thus generating two possible lexical pairs. 
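As a rough illustration of the priority idea, the Python sketch below decides, from surface word order alone, whether an argument would be analysed with its canonical (alpha) structure or with the scrambled (beta) multi-component structure. The decision rule is a simplification invented for the example; it ignores the actual TAG operations, the verb-final condition and long-distance scrambling.

CANONICAL = ["NOM", "ACC", "V"]          # canonical Korean clause order (SOV)

def choose_structures(observed_order):
    # an argument is taken to be scrambled (and so needs the beta structure)
    # if it has moved leftwards across an argument that canonically precedes it
    choices = {}
    args = [c for c in observed_order if c != "V"]
    for i, case in enumerate(args):
        moved_left = any(CANONICAL.index(case) > CANONICAL.index(other)
                         for other in args[i + 1:])
        choices[case] = "beta" if moved_left else "alpha"
    return choices

print(choose_structures(["NOM", "ACC", "V"]))   # sentence (1): both arguments alpha
print(choose_structures(["ACC", "NOM", "V"]))   # sentence (2): scrambled object -> beta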
~As a way of implementing a verb-final condition in Korean,/KA'/'~s~, structure is dominated by fl.AT~s~,, and each S-type verb elementary tree will nave an A/'.A constraint on the root node, which guarantees that j3~4T~ type structure cannot be adjoined onto the par- tially derived tree unless its predicate structure (its S- type verb elementary tree) is already part of the partial derived tree up to that point. An example including long-distance scrambling is shown in (Park, 1995). 318 For translating sentence (1), the aA~Go~,-NP pair is used for Jerry (similar to the/~ pair in Figure 1). However, in sentence (2), the/~AT~Go~,-NP pair should be used instead for translating the scrambled argument Jerry (i.e., Figure 4(a)). Thus, it is nec- essary that a Korean flA:RG structure (MC-TAG) be mapped to an English NP structure (TAG) to transfer a scrambled argument in Korean. I assume that there is one head structure for each MC-TAG structure, and that the/~A~G ~ (place holder struc- ture) is the head structure for each/~AT~G struc- ture. The root node of the head structure is al- ways mapped to the root node of the target (English) structure. Usually, the nodes in the source language should be linked to each relevant node in the target lan- guage, and vice versa (in STAGs). However, in the case that it is a multi-component structure (e.g., /~AT~), an adjunction node need not necessarily be linked to any node. If it is not linked to any node of the target language, the structure can be freely adjoined onto any available node of the par- tially derived tree of the source language, which is approximately what scrambling is about. However, substitution nodes will always be linked (the differ- ence between a substitution node and an adjunction node is that an adjunction node does not introduce a new structure to the partially derived tree whereas a substitution node always does). t~"- )'.,'." l" ..... }" (a)K - E Lexicon .,::"",,~ /oP..~.-.. ,~m ., .... - "kr - -.... ~N ' ~p t " '11 " ' " -i i : ~:1 : ~) I .,~ I:! ~ ~ 'i ": . k 2 r / V . " " k ~ ] "..../ I .JL...,, ~..1 Y'am (b)K - E DerivedTrees After Applying (a) Figure 4: K-E Transfer Lexicon and Derived Tree In Figure 4(a), the root node NP of an English TAG is mapped to the OP node of/~A~G~, of a Korean TAG which is a head structure. All the other nodes are mapped to each relevant node except S~. As it is not linked, /~AT~, can be adjoined onto any available node in the partially derived Korean tree. Actually, the restriction on whether flAT, GoLf, can be adjoined onto a certain node does not come from the formalism of Syn- chronous TAGs, but purely from the grammar of Korean TAGs. Figure 4(b) shows the final derived trees for both Korean and English after applying 4(a) to the partially derived trees. 5 Conclusion and Future Direction Using MC-TAGs allows the scrambled argument structure to be represented as a single (set) struc- ture. This makes possible the mapping of Korean scrambled m'gument structures into English argu- ment structures. The application of similar mech- anisms for other languages and for mapping quasi logical forms to logical forms (Alshawi et al., 1992) using STAGs is also being investigated. References Anne Abeilld, Yves Schabes, and Aravind K. Joshi. 1990. Using Lexicalized TAGs for Machine Trans- lation. In Proceedings of the International Con- ference on Computational Linguistics (COLING '90), Helsinki, Finland. H. Alshawi, D. Carter, J. Eijck, B. Gamback, R. Moore, D. Moran, F. Pereira, S. Pulman, M. Rayner, and A. 
Smith. 1992. The Core Language Engine. MIT Press. Tilman Becker and Owen Rambow. 1990. Long-Distance Scrambling in German. Technical report, University of Pennsylvania. Aravind K. Joshi, L. Levy, and M. Takahashi. 1975. Tree Adjunct Grammars. Journal of Computer and System Sciences. Martha Palmer, Hyun S. Park, and Dania Egedi. 1995. The Application of Korean-English Machine Translation to a Military Message Domain. In Fifth Annual IEEE Dual-Use Technologies and Applications Conference. Hyun S. Park. 1995. Handling of Scrambling in Korean Using MC-TAGs. In Second Conference of the Pacific Association for Computational Linguistics. Stuart Shieber and Yves Schabes. 1990. Synchronous Tree Adjoining Grammars. In Proceedings of the 13th International Conference on Computational Linguistics (COLING'90), Helsinki, Finland. David J. Weir. 1988. Characterizing Mildly Context-Sensitive Grammar Formalisms. Ph.D. thesis, University of Pennsylvania. 319 | 1995 | 49 |
Discourse Processing of Dialogues with Multiple Threads Carolyn Penstein Ros~ t, Barbara Di Eugenio t, Lori S. Levin t, Carol Van Ess-Dykema t t Computational Linguistics Program Carnegie Mellon University Pittsburgh, PA, 15213 {cprose, dieugeni}@icl, cmu. edu Isl©cs. cmu. edu * Department of Defense Mail stop: R525 9800 Savage Road Ft. George G. Meade, MD 20755-6000 cj vanes©afterlife, ncsc. mil Abstract In this paper we will present our ongoing work on a plan-based discourse processor developed in the context of the Enthusiast Spanish to English translation system as part of the JANUS multi-lingual speech-to- speech translation system. We will demon- strate that theories of discourse which pos- tulate a strict tree structure of discourse on either the intentional or attentional level are not totally adequate for handling spontaneous dialogues. We will present our extension to this approach along with its implementation in our plan-based dis- course processor. We will demonstrate that the implementation of our approach out- performs an implementation based on the strict tree structure approach. 1 Introduction In this paper we will present our ongoing work on a plan-based discourse processor developed in the con- text of the Enthusiast Spanish to English translation system (Suhm et al. 1994) as part of the JANUS multi-lingual speech-to-speech translation system. The focus of the work reported here has been to draw upon techniques developed recently in the compu- tational discourse processing community (Lambert 1994; Lambert 1993; Hinkelman 1990), developing a discourse processor flexible enough to cover a large corpus of spontaneous dialogues in which two speak- ers attempt to schedule a meeting. There are two main contributions of the work we will discuss in this paper. From a theoretical stand- point, we will demonstrate that theories which pos- tulate a strict tree structure of discourse (henceforth, Tree Structure Theory, or TST) on either the inten- tional level or the attentional level (Grosz and Sidner 1986) are not totally adequate for covering sponta- neous dialogues, particularly negotiation dialogues which are composed of multiple threads. These are negotiation dialogues in which multiple propo- sitions are negotiated in parallel. We will discuss our proposea extension to TST which handles these structures in a perspicuous manner. From a prac- tical standpoint, our second contribution will be a description of our implemented discourse processor which makes use of this extension of TST, taking as input the imperfect result of parsing these sponta- neous dialogues. We will also present a comparison of the perfor- mance of two versions of our discourse processor, one based on strict TST, and one with our extended version of TST, demonstrating that our extension of TST yields an improvement in performance on spontaneous scheduling dialogues. A strength of our discourse processor is that because it was designed to take a language- independent meaning representation (interlingua) as its input, it runs without modification on either En- glish or Spanish input. Development of our dis- course processor was based on a corpus of 20 spon- taneous Spanish scheduling dialogues containing a total of 630 sentences. Although development and initial testing of the discourse processor was done with Spanish dialogues, the theoretical work on the model as well as the evaluation presented in this pa- per was done with spontaneous English dialogues. 
In section 2, we will argue that our proposed ex- tension to Standard TST is necessary for making correct predictions about patterns of referring ex- pressions found in dialogues where multiple alter- natives are argued in parallel. In section 3 we will present our implementation of Extended TST. Fi- nally, in section 4 we will present an evaluation of the performance of our discourse processor with Extended TST compared to its performance using Standard TST. 2 Discourse Structure Our discourse model is based on an analysis of nat- urally occurring scheduling dialogues. Figures 1 and 2 contain examples which are adapted from natu- rally occurring scheduling dialogues. These exam- ples contain the sorts of phenomena we have found in our corpus but have been been simplified for the 31 (1) (2) (3) $2: (4) (5) SI: (6) (7) (8) (9) $2: (lO) (11) (12) (13) (14) (15) (16) (17) (18) Figure S 1: We need to set up a schedule for the meeting. How does your schedule look for next week? Well, Monday and Tuesday both mornings are good. Wednesday afternoon is good also. It looks like it will have to be Thursday then. Or Friday would also possibly work. Do you have time between twelve and two on Thursday? Or do you think sometime Friday afternoon you could meet? No. Thursday I have a class. And Friday is really tight for me. How is the next week? If all else fails there is always video conferencing. S 1: Monday, Tuesday, and Wednesday I am out of town. But Thursday and Friday are both good. How about Thursday at twelve? $2: Sounds good. See you then. 1: Example of Deliberating Over A Meeting Time purpose of making our argument easy to follow. No- tice that in both of these examples, the speakers negotiate over multiple alternatives in parallel. We challenge an assumption underlying the best known theories of discourse structure (Grosz and Sidner 1986; Scha and Polanyi 1988; Polanyi 1988; Mann and Thompson 1986), namely that discourse has a recursive, tree-like structure. Webber (1991) points out that Attentional State i is modeled equiv- alently as a stack, as in Grosz and Sidner's approach, or by constraining the current discourse segment to attach on the rightmost frontier of the discourse structure, as in Polanyi and Scha's approach. This is because attaching a leaf node corresponds to push- ing a new element on the stack; adjoining a node Di to a node Dj corresponds to popping all the stack elements through the one corresponding to Dj and pushing Di on the stack. Grosz and Sider (1986), and more recently Lochbaum (1994), do not for- mally constrain their intentional structure to a strict tree structure, but they effectively impose this lim- itation in cases where an anaphoric link must be made between an expression inside of the current discourse segment and an entity evoked in a different 1Attentional State is the representation which is used for computing which discourse entities are most salient. segment. If the expression can only refer to an entity on the stack, then the discourse segment purpose 2 of the current discourse segment must be attached to the rightmost frontier of the intentional structure. Otherwise the entity which the expression refers to would have already been popped from the stack by the time the reference would need to be resolved. We develop our theory of discourse structure in the spirit of (Grosz and Sidner 1986) which has played an influential role in the analysis of discourse entity saliency and in the development of dialogue processing systems. 
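For concreteness, the stack view of Attentional State sketched above can be written out in a few lines of Python. The class below follows one literal reading of the push and pop operations just described, and the segment labels used in the trace are ours; it also shows why, on this view, a referent evoked in a popped focus space becomes unavailable.

class FocusStack:
    def __init__(self):
        self.spaces = []                       # last element = top of the stack

    def attach_leaf(self, segment):
        # attaching a leaf node pushes a new focus space
        self.spaces.append(segment)

    def adjoin(self, new_segment, target):
        # adjoining a node to target pops all spaces through target's space,
        # then pushes the new node's space
        while self.spaces:
            if self.spaces.pop() == target:
                break
        self.spaces.append(new_segment)

    def accessible(self, segment):
        # only entities evoked in spaces still on the stack can be referred to
        return segment in self.spaces

# Trace corresponding to the left-hand (simple stack) analysis of Figure 2:
stack = FocusStack()
stack.attach_leaf("DS0: when can you meet next week")
stack.attach_leaf("DS1: Tuesday suggestion")
stack.attach_leaf("DS2: Wednesday suggestion")            # pushed like an interruption
stack.adjoin("DS3: response to Tuesday", "DS1: Tuesday suggestion")
print(stack.accessible("DS2: Wednesday suggestion"))      # False: Wednesday is gone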
Before we make our argument, we will argue for our approach to discourse segmen- tation. In a recent extension to Grosz and Sidner's original theory, described in (Lochbaum 1994), each discourse segment purpose corresponds to a partial or full shared plan 3 (Grosz and Kraus 1993). These discourse segment purposes are expressed in terms of the two intention operators described in (Grosz and Kraus 1993), namely Int. To which represents an agent's intention to perform some action and 2A discourse segment purpose denotes the goal which the speaker(s) attempt to accomplish in engaging in the associated segment of talk. 3A Shared Plan is a plan which a group of two or more participants intend to accomplish together. 32 Sl: S2: SI: DS 0 1. When can you meet next week? SI: DS 1 2. Tuesday afternoon looks good. S2: .... DS2 3. I could do it Wednesday morning too. DS 3 4. Tuesday I have a class from 12:00-1:30. Sl: . DS 4 5. But the other day sounds good. DSA 1. When can you meet next week? r--- DSB ! ' 2. Tuesday afternoon looks good. ! i DS C ....! ...... 3. I could do it Wednesday morning too. DS D ~--- , 4. Tuesday I have aclass from 12:00-1:30. .- .... DSE i 5. But the other day sounds good. Simple Stack based Structure Proposed Structure Figure 2: Sample Analysis Int. That which represents an agent's intention that some proposition hold. Potential intentions are used to account for an agent's process of weighing differ- ent means for accomplishing an action he is com- mitted to performing (Bratman, Israel, & Pollack 1988). These potential intentions, Pot.Int. To and Pot.Int. That, are not discourse segment purposes in Lochbaum's theory since they cannot form the ba- sis for a shared plan having not been decided upon yet and being associated with only one agent. It is not until they have been decided upon that they be- come Int. To's and Int. That's which can then become discourse segment purposes. We argue that poten- tial intentions must be able to be discourse segment purposes. Potential intentions are expressed within portions of dialogues where speakers negotiate over how to accomplish a task which they are committed to com- pleting together. For example, deliberation over how to accomplish a shared plan can be repre- sented as an expression of multiple Pot.Int. To's and Pot.Int. That's, each corresponding to different alter- natives. As we understand Lochbaum's theory, for each factor distinguishing these alternatives, the po- tential intentions are all discussed inside of a single discourse segment whose purpose is to explore the options so that the decision can be made. The stipulation that Int. To's and Int. That's can be discourse segment purposes but Pot.Int. To's and Pot.Int. That's cannot has a major impact on the analysis of scheduling dialogues such as the one in Figure 1 since the majority of the exchanges in scheduling dialogues are devoted to deliberating over which date and at which time to schedule a meet- ing. This would seem to leave all of the delibera- tion over meeting times within a single monolithic discourse segment, leaving the vast majority of the dialogue with no segmentation. As a result, we are left with the question of how to account for shifts in focus which seem to occur within the deliberation segment as evidenced by the types of pronominal ref- erences which occur within it. 
For example, in the dialogue presented in Figure 1, how would it be pos- sible to account for the differences in interpretation of "Monday" and "Tuesday" in (3) with "Monday" and "Tuesday" in (14)? It cannot simply be a matter of immediate focus since the week is never mentioned in (13). And there are no semantic clues in the sen- tences themselves to let the hearer know which week is intended. Either there is some sort of structure in this segment more fine grained than would be ob- tained if Pot.Int. To's and Pot.Int. That's cannot be discourse segment purposes, or another mechanism must be proposed to account for the shift in focus which occurs within the single segment. We argue that rather than propose an additional mechanism, it is more perspicuous to lift the restriction that Pot.Int. To's and Pot.Int. That's cannot be discourse segment purposes. In our approach a separate dis- course segment is allocated for every potential plan discussed in the dialogue, one corresponding to each parallel potential intention expressed. Assuming that potential intentions form the ba- sis for discourse segment purposes just as intentions 33 do, we present two alternative analyses for an ex- ample dialogue in Figure 2. The one on the left is the one which would be obtained if Attentional State were modeled as a stack. It has two shortcom- ings. The first is that the suggestion for meeting on Wednesday in DS 2 is treated like an interruption. Its focus space is pushed onto the stack and then popped off when the focus space for the response to the suggestion for Tuesday in DS 3 is pushed 4. Clearly, this suggestion is not an interruption how- ever. Furthermore, since the focus space for DS 2 is popped off when the focus space for DS 4 is pushed on, 'Wednesday is nowhere on the focus stack when "the other day", from sentence 5, must be resolved. The only time expression on the focus stack at that point would be "next week". But clearly this ex- pression refers to Wednesday. So the other problem is that it makes it impossible to resolve anaphoric referring expressions adequately in the case where there are multiple threads, as in the case of parallel suggestions negotiated at once. We approach this problem by modeling Atten- tional State as a graph structured stack rather than as a simple stack. A graph structured stack is a stack which can have multiple top elements at any point. Because it is possible to maintain more than one top element, it is possible to separate multiple threads in discourse by allowing the stack to branch out, keeping one branch for each thread, with the one most recently referred to more strongly in fo- cus than the others. The analysis on the right hand side of Figure 2 shows the two branches in different patterns. In this case, it is possible to resolve the reference for "the other day" since it would still be on the stack when the reference would need to be resolved. Implications of this model of Attentional State are explored more fully in (Rosd 1995). 3 Discourse Processing We evaluated the effectiveness of our theory of dis- course structure in the context of our implemented discourse processor which is part of the Enthusiast Speech translation system. Traditionally machine translation systems have processed sentences in iso- lation. Recently, however, beginning with work at ATR, there has been an interest in making use of dis- course information in machine translation. In (Iida and Arita 1990; Kogura et al. 
1990), researchers at ATR advocate an approach to machine transla- tion called illocutionary act based translation, argu- ing that equivalent sentence forms do not necessar- ily carry the same illocutionary force between lan- guages. Our implementation is described more fully in (Rosd 1994). See Figure 4 for the discourse rep- 4Alternatively, DS 2 could not be treated like an in- terruption, in which case DS 1 would be popped before DS 2 would be pushed. The result would be the same. DS 2 would be popped before DS 3 would be pushed. ((when ((frame *simple-time) (day-of-week wednesday) (time-of-day morning))) (a-speech-act (*multiple* *suggest *accept)) (who ((frame *i))) (frame *free) (sentence-type *state))) Sentence: I could do it Wednesday morning too. Figure 3: Sample Interlingua Representation with Possible Speech Acts Noted resentation our discourse processor obtains for the example dialogue in Figure 2. Note that although a complete tripartite structure (Lambert 1993) is com- puted, only the discourse level is displayed here. Development of our discourse processor was based on a corpus of 20 spontaneous Spanish scheduling di- alogues containing a total of 630 sentences. These dialogues were transcribed and then parsed with the GLR* skipping parser (Lavie and Tomita 1993). The resulting interlingua structures (See Figure 3 for an example) were then processed by a set of matching rules which assigned a set of possible speech acts based on the interlingua representation returned by the parser similar to those described in (Hinkelman 1990). Notice that the list of possible speech acts resulting from the pattern matching process are in- serted in the a-speech-act slot ('a' for ambiguous). It is the structure resulting from this pattern match- ing process which forms the input to the discourse processor. Our goals for the discourse processor in- clude recognizing speech acts and resolving ellipsis and anaphora. In this paper we focus on the task of selecting the correct speech act. Our discourse processor is an extension of Lam- bert's implementation (Lambert 1994; Lambert 1993; Lambert and Carberry 1991). We have chosen to pattern our discourse processor after Lambert's recent work because of its relatively broad coverage in comparison with other computational discourse models and because of the way it represents rela- tionships between sentences, making it possible to recognize actions expressed over multiple sentences. We have left out aspects of Lambert's model which are too knowledge intensive to get the kind of cov- erage we need. We have also extended the set of structures recognized on the discourse level in order to identify speech acts such as Suggest, Accept, and Reject which are common in negotiation discourse. There are a total of thirteen possible speech acts which we identify with our discourse processor. See Figure 5 for a complete list. 34 Request- Suggt Suggestlon(S2,S 1,...) Request- Suggestion- Form(S1,S2,...) Argument-Segment(S2,S 1,...) Suggest- Suggest- Response(S 1,$2,...) Form(S2,S1,...) Form(S2,S1,...) / Ask_ief(S 1,$2,...) InfoT(S2,S1,...) Infon~(S2,S 1,...) Ref-Request(S1,S2,...) Tell(S2,S1,...) Tell(S2,S1,...) I I / Surface- Surface- Surface- Query- State(S2,S 1,...) State(s2,s 1,...) Ref(S 1,$2,...) Respon]e(S 1 ,$2,...) l ReJect(S 1,$2,...) I Accelt(S 1'$2'"') Reject- Accept- Form/S1,S2,...) Fo7(S1,$2,...) / / Inform(S1,S2,...) Inform(S1,S2,...) J I Tell(S 1 ,S2,...) Tell(Si 1 ,$2,...) Surface- Surface- State(S 1 ,S2,...) State(S 1 ,S2,...) (1) When can... 
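As an illustration of the matching step just described, the Python fragment below recasts the interlingua frame of Figure 3 as a dictionary and applies two pattern-matching rules that propose candidate speech acts. The two rules are invented for the example and do not reproduce the system's actual rule set.

# Interlingua for "I could do it Wednesday morning too" (cf. Figure 3)
interlingua = {
    "frame": "*free",
    "sentence-type": "*state",
    "who": {"frame": "*i"},
    "when": {"frame": "*simple-time",
             "day-of-week": "wednesday",
             "time-of-day": "morning"},
}

def matching_rules(ilt):
    candidates = []
    # a statement that the speaker is free at some time could be a new
    # suggestion or an acceptance of an earlier one
    if ilt.get("frame") == "*free" and "when" in ilt:
        candidates += ["*suggest", "*accept"]
    # a statement that the speaker is busy could be a rejection or a
    # plain statement of a schedule constraint
    if ilt.get("frame") == "*busy":
        candidates += ["*reject", "*state-constraint"]
    return candidates

# the ambiguous result is stored in the a-speech-act slot; the discourse
# processor later selects one act by attaching the sentence to the plan tree
interlingua["a-speech-act"] = ["*multiple*"] + matching_rules(interlingua)
print(interlingua["a-speech-act"])        # ['*multiple*', '*suggest', '*accept']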
(2) Tuesday... (3) I could... (4) Tuesday... Figure 4: Sample Discourse Structure (5) But the other... It is commonly impossible to tell out of context which speech act might be performed by some ut- terances since without the disambiguating context they could perform multiple speech acts. For exam- ple, "I'm free Tuesday." could be either a Suggest or an Accept. "Tuesday I have a class." could be a State-Constraint or a Reject. And "So we can meet Tuesday at 5:00." could be a Suggest or a Confirm- Appointment. That is why it is important to con- struct a discourse model which makes it possible to make use of contextual information for the purpose of disambiguating. Some speech acts have weaker forms associated with them in our model. Weaker and stronger forms very roughly correspond to direct and indirect speech acts. Because every suggestion, rejection, ac- ceptance, or appointment confirmation is also giv- ing information about the schedule of the speaker, State-Constraint is considered to be a weaker form of Suggest, Reject, Accept, and Confirm-Appointment. Also, since every acceptance expressed as "yes" is also an affirmative answer, Affirm is considered to be a weaker form of Accept. Likewise Negate is con- sidered a weaker form of Reject. This will come into play in the next section when we discuss our evalu- ation. When the discourse processor computes a chain of inference for the current input sentence, it attaches it to the current plan tree. Where it attaches de- termines which speech act is assigned to the input sentence. For example, notice than in Figure 4, be- cause sentences 4 and 5 attach as responses, they are assigned speech acts which are responses (i.e. either Accept or Reject). Since sentence 4 chains up to an instantiation of the Response operator from an in- stantiation of the Reject operator, it is assigned the speech act Reject. Similarly, sentence 5 chains up to an instantiation of the Response operator from an instantiation of the Accept operator, sentence 5 is assigned the speech act Accept. After the discourse 35 Speech Act Opening Closing Suggest Reject Accept State-Constraint Confirm-Appointment Negate Affirm Request-Response Request-Suggestion Request-Clarification Request-Confirmation Example Hi, Cindy. See you then. Are you free on the morning of the eighth? Tuesday I have a class. Thursday I'm free the whole day. This week looks pretty busy for me. So Wednesday at 3:00 then? no. yes. What do you think? What looks good for you? What did you say about Wednesday? You said Monday was free? Figure 5: Speech Acts covered by the system processor attaches the current sentence to the plan tree thereby selecting the correct speech act in con- text, it inserts the correct speech act in the speech- act slot in the interlingua structure. Some speech acts are not recognized by attaching them to the previous plan tree. These are speech acts such as Suggest which are not responses to previous speech acts. These are recognized in cases where the plan inferencer chooses not to attach the current inference chain to the previous plan tree. When the chain of inference for the current sen- tence is attached to the plan tree, not only is the speech act selected, but the meaning representation for the current sentence is augmented from context. Currently we have only a limited version of this pro- cess implemented, namely one which augments the time expressions between previous time expressions and current time expressions. 
For example, consider the case where Tuesday, April eleventh has been sug- gested, and then the response only makes reference to Tuesday. When the response is attached to the suggestion, the rest of the time expression can be filled in. The decision of which chain of inference to select and where to attach the chosen chain, if anywhere, is made by the focusing heuristic which is a version of the one described in (Lambert 1993) which has been modified to reflect our theory of discourse structure. In Lambert's model, the focus stack is represented implicitly in the rightmost frontier of the plan tree called the active path. In order to have a focus stack which can branch out like a graph structured stack in this framework, we have extended Lambert's plan operator formalism to include annotations on the ac- tions in the body of decomposition plan operators which indicate whether that action should appear 0 or 1 times, 0 or more times, 1 or more times, or ex- actly 1 time. When an attachment to the active path is attempted, a regular expression evaluator checks to see that it is acceptable to make that at- tachment according to the annotations in the plan operator of which this new action would become a child. If an action on the active path is a repeat- ing action, rather than only the rightmost instance being included on the active path, all adjacent in- stances of this repeating action would be included. For example, in Figure 4, after sentence 3, not only is the second, rightmost suggestion in focus, along with its corresponding inference chain, but both sug- gestions are in focus, with the rightmost one being slightly more accessible than the previous one. So when the first response is processed, it can attach to the first suggestion. And when the second response is processed, it can be attached to the second sug- gestion. Both suggestions remain in focus as long as the node which immediately dominates the parallel suggestions is on the rightmost frontier of the plan tree. Our version of Lambert's focusing heuristic is described in more detail in (Ros~ 1994). 4 Evaluation The evaluation was conducted on a corpus of 8 pre- viously unseen spontaneous English dialogues con- taining a total of 223 sentences. Because spoken language is imperfect to begin with, and because the parsing process is imperfect as well, the input to the discourse processor was far from ideal. We are en- couraged by the promising results presented in figure 6, indicating both that it is possible to successfully process a good measure of spontaneous dialogues in a restricted domain with current technology, 5 and that our extension of TST yields an improvement in performance. The performance of the discourse processor was evaluated primarily on its ability to assign the cor- rect speech act to each sentence. We are not claim- ing that speech act recognition is the best way to evaluate the validity of a theory of discourse, but because speech act recognition is the main aspect of the discourse processor which we have implemented, and because recognizing the discourse structure is part of the process of identifying the correct speech act, we believe it was the best way to evaluate the difference between the two different focusing mech- anisms in our implementation at this time. Prior to the evaluatic.n, the dialogues were analyzed by hand sit should be noted that we do not claim to have solved the problem of discourse processing of spon- taneous dialogues. 
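Returning to the focusing mechanism described above, the following sketch shows why the extended active path keeps parallel suggestions available for attachment: a plan operator whose body marks the Suggestion action as repeatable leaves every adjacent instance in focus, with the rightmost one preferred. The operator, the segment labels and the compatibility test are illustrative assumptions, not the system's actual definitions.

# annotations on actions: exactly 1, "?" (0 or 1), "+" (1 or more), "*" (0 or more);
# shown only to indicate the idea, the annotation is not interpreted below
operator_body = [("Suggestion", "+"), ("Response", "*")]

active_suggestions = []        # all adjacent instances of the repeating action

def add_suggestion(segment):
    # a new parallel suggestion does not pop the earlier one; both stay in focus
    active_suggestions.append(segment)

def attach_response(response, compatible):
    # try the in-focus suggestions from most to least accessible (right to left)
    for suggestion in reversed(active_suggestions):
        if compatible(response, suggestion):
            return (response, "attached to", suggestion)
    return (response, "unattached")

add_suggestion("DS B: Tuesday afternoon")
add_suggestion("DS C: Wednesday morning")
print(attach_response("DS D: Tuesday I have a class",
                      lambda r, s: "Tuesday" in s))      # attaches to the Tuesday suggestion
print(attach_response("DS E: the other day sounds good",
                      lambda r, s: "Wednesday" in s))    # attaches to the Wednesday suggestion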
Our approach is coursely grained and leaves much room for future development in every respect. 36 Version Good Acceptable Incorrect Extended TST Standard TST 171 total (77%) 144 based plan-inference on 161 total (72%) 116 based on plan inference 27 total (12%) 22 based on plan inference 33 total (15%) 25 based on plan inference 25 total (11%) 20 based on plan inference 28 total (13%) 23 based on plan inference Figure 6: Performance Evaluation Results and sentences were assigned their correct speech act for comparison with those eventually selected by the discourse processor. Because the speech acts for the test dialogues were coded by one of the authors and we do not have reliability statistics for this encoding, we would draw the attention of the readers more to the difference in performance between the two focus- ing mechanisms rather than to the absolute perfor- mance in either case. For each sentence, if the correct speech act, or ei- ther of two equally preferred best speech acts were recognized, it was counted as correct. If a weaker form of a correct speech act was recognized, it was counted as acceptable. See the previous section for more discussion about weaker forms of speech acts. Note that if a stronger form is recognized when only the weaker one is correct, it is counted as wrong. And all other cases were counted as wrong as well, for example recognizing a suggestion as an accep- tance. In each category, the number of speech acts de- termined based on plan inference is noted. In some cases, the discourse processor is not able to assign a speech act based on plan inference. In these cases, it randomly picks a speech act from the list of possible speech acts returned from the matching rules. The number of sentences which the discourse processor was able to assign a speech act based on plan infer- ence increases from 164 (74%) with Standard TST to 186 (83%) with Extended TST. As Figure 6 indi- cates, in many of these cases, the discourse processor guesses correctly. It should be noted that although the correct speech act can be identified without plan inference in many cases, it is far better to recognize the speech act by first recognizing the role the sen- tence plays in the dialogue with the discourse pro- cessor since this makes it possible for further pro- cessing to take place, such as ellipsis and anaphora resolution. 6 You will notice that Figure 6 indicates that the 6Ellipsis and anaphora resolution are areas for future development. biggest difference in terms of speech act recognition between the two mechanisms is that Extended TST got more correct where Standard TST got more ac- ceptable. This is largely because of cases like the one in Figure 4. Sentence 5 is an acceptance to the sug- gestion made in sentence 3. With Standard TST, the inference chain for sentence 3 would no longer be on the active path when sentence 5 is processed. There- fore, the inference chain for sentence 5 cannot attach to the inference chain for sentence 3. This makes it impossible for the discourse processor to recognize sentence 5 as an acceptance. It will try to attach it to the active path. Since it is a statement informing the listener of the speaker's schedule, a possible speech act is State-Constraint. And any State-Constraint can attach to the active path as a confirmation be- cause the constraints on confirmation attachments are very weak. Since State-Constraint is weaker than Accept, it is counted as acceptable. 
While this is ac- ceptable for the purposes of speech act recognition, and while it is better than failing completely, it is not the correct discourse structure. If the reply, sen- tence 5 in this example, contains an abbreviated or anaphoric expression referring to the date and time in question, and if the chain of inference attaches to the wrong place on the plan tree as in this case, the normal procedure for augmenting the shortened re- ferring expression from context could not take place correctly as the attachment is made. In a separate evaluation with the same set of di- alogues, performance in terms of attaching the cur- rent chain of inference to the correct place in the plan tree for the purpose of augmenting temporal expressions from context was evaluated. The results were consistent with what would have been expected given the results on speech act recognition. Stan- dard TST achieved 64.3% accuracy while Extended TST achieved 70.4%. While the results are less than perfect, they indi- cate that Extended TST outperforms Standard TST on spontaneous scheduling dialogues. In summary, Figure 6 makes clear, with the extended version of TST, the number of speech acts identified correctly 37 increases from 161 (72%) to 171 (77%), and the num- ber of sentences which the discourse processor was able to assign a speech act based on plan inference increases from 164 (74%) to 186 (83%). 5 Conclusions and Future Directions In this paper we have demonstrated one way in which TST is not adequate for describing the struc- ture of discourses with multiple threads in a per- spicuous manner. While this study only explores the structure of negotiation dialogues, its results have implications for other types of discourse as well. This study indicates that it is not a struc- tural property of discourse that Attentional State is constrained to exhibit stack like behavior. We in- tend to extend this research by exploring more fully the implications of our extension to TST in terms of discourse focus more generally. It is clear that it will need to be limited by a model of resource bounds of attentional capacity (Walker 1993) in order to avoid overgenerating. We have also described our extension to TST in terms of a practical application of it in our imple- mented discourse processor. We demonstrated that our extension to TST yields an increase in perfor- mance in our implementation. 6 Acknowledgements This work was made possible in part by funding from the U.S. Department of Defense. References M. E. Bratman, D. J. Israel and M. E. Pollack. 1988. Plans and resource-bounded practical reasoning. Computational Intelligence, 3, pp 117-136. B. J. Grosz and S. Kraus. 1993. Collaborative plans for group activities. In Proceedings of IJCAI-93, pp 367-373, Chambery, Savoie, France. B. Grosz and C. Sidner. 1986. Attention, Intentions, and the Structure of Discourse. Computational Linguistics 12, 175-204. E. A. Hinkelman. 1990. Linguistic and Pragmatic Constraints on Utterance Interpretation. PhD dissertation, University of Rochester, Department of Computer Science. Technical Report 288. Hitoshi Iida and Hidekazu Arita. 1990. Natural Language Dialog Understanding on a Four-Layer Plan Recognition Model. Transactions of IPSJ 31:6, pp 810-821. K. Kogura, M. Kume, and H. Iida. 1990. II- locutionary Act Based Translation of Dialogues. The Third International Conference on Theoreti- cal and Methodological Issues in Machine Trans- lation of Natural Language. L. Lambert and S. Carberry. 1994. 
A Process Model for Recognizing Communicative Acts and Model- ing Negotiation Subdialogues. under review for journal publication. L. Lambert. Recognizing Complex Discourse Acts: A Tripartite Plan-Based Model of Dialogue. PhD dissertation. Tech. Rep. 93-19, Department of Computer and Information Sciences, University of Delaware. 1993. L. Lambert and S. Carberry. A Tripartite, Plan Recognition Model for Structuring Discourse. Discourse Structure in NL Understanding and Generation. AAAI Fall Symposium. Nov 1991. A. Lavie and M. Tomita. 1993. GLR* - An Effi- cient Noise Skipping Parsing Algorithm for Con- text Free Grammars. in the Proceedings of the Third International Workshop on Parsing Tech- nologies. Tilburg, The Netherlands. K. E. Lochbaum. 1994. Using Collaborative Plans to Model the Intentional Structure of Discourse. PhD dissertation, Harvard University. Technical Report TR-25-94.. W. C. Mann and S. A. Thompson. 1986. Rela- tional Propositions in Discourse. Technical Re- port RR-83-115. Information Sciences Institute, Marina del Rey, CA, November. L. Polanyi. 1988. A Formal Model of the Structure of Discourse. Journal of Pragmatics 12, pp 601- 638. C. P. Ros& 1994. Plan-Based Discourse Pro- cessor for Negotiation Dialogues. unpublished manuscript C. P. Ros~. 1995. The Structure of Multiple Headed Negotiations. unpublished manuscript. R. Scha and L. Polanyi. 1988. An Augmented Con- text Free Grammar for Discourse. Proceedings of the 12th International Conference on Computa- tional Linguistics, Budapest. B. Suhm, L. Levin, N. Coccaro, J. Carbonell, K. Horiguchi, R. Isotani, A. Lavie, L. Mayfield, C. P. Ros~, C. Van Ess-Dykema, A. Waibel. 1994. Speech-language integration in multi- lingual speech translation system, in Working Notes of the Workshop on Integration of Natural Language and Speech Processing, Paul McKevitt (chair). AAAI-94, Seattle. M. A. Walker. Information Redundancy and Re- source Bounds in Dialogue. PhD Dissertation, Computer and Information Science, University of Pennsylvania. B. L. Webber. 1991. Structure and Ostension in the Interpretation of Discourse Deixis. Language and Cognitive Prvcesses, 6(2), pp 107-135. 38 | 1995 | 5 |
Identifying Word Translations in Non-Parallel Texts Reinhard Rapp ISSCO, Universit6 de Gen~ve 54 route des Acacias Gen~ve, Switzerland [email protected] Abstract Common algorithms for sentence and word-alignment allow the automatic iden- tification of word translations from paxalhl texts. This study suggests that the identi- fication of word translations should also be possible with non-paxMlel and even unre- lated texts. The method proposed is based on the assumption that there is a corre- lation between the patterns of word co- occurrences in texts of different languages. 1 Introduction In a number of recent studies it has been shown that word translations can be automatically derived from the statistical distribution of words in bilingual pax- allel texts (e. g. Catizone, Russell & Warwick, 1989; Brown et al., 1990; Dagan, Church & Gale, 1993; Kay & Rbscheisen, 1993). Most of the proposed algorithms first conduct an alignment of sentences, i. e. those palxs of sentences axe located that are translations of each other. In a second step a word alignment is performed by analyzing the correspon- dences of words in each pair of sentences. The results achieved with these algorithms have been found useful for the compilation of dictionaries, for checking the consistency of terminological usage in translations, and for assisting the terminological work of translators and interpreters. However, despite serious efforts in the compilation of corpora (Church & Mercer, 1993; Armstrong & Thompson, 1995) the availability of a large enough paxallel corpus in a specific field and for a given pair of languages will always be the exception, not the rule. Since the acquisition of non-paxallel texts is usually much easier, it would be desirable to have a program that can determine the translations of words from comparable or even unrelated texts. 2 Approach It is assumed that there is a correlation between the co-occurrences of words which are translations of each other. If - for example - in a text of one language two words A and B co-occur more often than expected from chance, then in a text of an- other language those words which axe translations of A and B should also co-occur more frequently than expected. This assumption is reasonable for parallel texts. However, in this paper it is further assumed that the co-occurrence patterns in original texts axe not fundamentally different from those in translated texts. Starting from an English vocabulary of six words and the corresponding German translations, table la and b show an English and a German co-occurrence mat~x. In these matrices the entries belonging to those pairs of words that in texts co-occur more fre- quently than expected have been marked with a dot. In general, word order in the lines and columns of a co-occurrence matrix is independent of each other, but for the purpose of this paper can always be as- sumed to be equal without loss of generality. If now the word order of the English matrix is per- muted until the resulting pattern of dots is most sim- ilar to that of the German matrix (see table lc), then this increases the likelihood that the English and German words axe in corresponding order. Word n in the English matrix is then the translation of word n in the German matrix. 3 Simulation A simulation experiment was conducted in order to see whether the above assumptions concerning the similarity of co-occurrence patterns actually hold. 
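Before describing the experiment, the matching idea of Section 2 can be made concrete with a small brute-force Python sketch. The four-word vocabularies and the binary co-occurrence matrices below are invented for illustration and are far smaller than anything usable in practice.

from itertools import permutations

en_words = ["blue", "sky", "teacher", "school"]
de_order = ["blau", "Lehrer", "Schule", "Himmel"]

# 1 marks word pairs that co-occur more often than expected from chance
en = [[0, 1, 0, 0],        # blue - sky
      [1, 0, 0, 0],
      [0, 0, 0, 1],        # teacher - school
      [0, 0, 1, 0]]
de = [[0, 0, 0, 1],        # blau - Himmel
      [0, 0, 1, 0],        # Lehrer - Schule
      [0, 1, 0, 0],
      [1, 0, 0, 0]]

def mismatch(m_en, m_de, perm):
    # number of differing cells after permuting the English word order
    n = len(m_en)
    return sum(m_en[perm[i]][perm[j]] != m_de[i][j]
               for i in range(n) for j in range(n))

best = min(permutations(range(len(en))), key=lambda p: mismatch(en, de, p))
print([(de_order[i], en_words[best[i]]) for i in range(len(best))])
# one pattern-preserving order: blau/blue, Lehrer/teacher, Schule/school, Himmel/sky

With 100-word vocabularies an exhaustive search over all permutations is of course impossible, which is why Section 5 instead proposes searching for many local minima of the similarity function.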
In this experiment, for an equivalent English and German vocabulary two co-occurrence matrices were computed and then compared. As the English vo- cabulary a list of 100 words was used, which h~l been suggested by Kent & Rosanoff (1910) for asso- ciation experiments. The German vocabulary con- sisted of one by one translations of these words as chosen by Russell (1970). The word co-occurrences were computed on the basis of an English corpus of 33 and a German corpus of 46 million words. The English corpus consists of 320 Table 1: When the word orders of the English and the German matrix correspond, the dot patterns of the two matrices are identical. (a) II1 n213141s161 blue 1 • • green 2 • • plant 3 • school 4 • sky 5 • teacher 6 • (b) (c) 11112131415181 blau 1 • • grfin 2 • • Himmel 3 • Lehrer 4 • Pflanze 5 • Schule 6 s 1 2 5 6 3 4 blue 1 * • green 2 • • 5 • 6 • 3 • 4 • sky teacher plant school the Brown Corpus, texts from the Wall Street Your- hal, Grolier's Electronic Encyclopedia and scientific abstracts from different fields. The German cor- pus is a compilation of mainly newspaper texts from Frankfurter Rundschau, Die Zei~ and Mannl~eimer Morgen. To the knowledge of the author, the English and German corpora contain no parallel passages. For each pair of words in the English vocabulary its frequency of common occurrence in the English corpus was counted. The common occurrence of two words was defined as both words being separated by at most 11 other words. The co-occurrence fre- quencies obtained in this way were used to build up the English matrix. Equivalently, the German co-occurrence matrix was created by counting the co-occurrences of German word pairs in the German corpus. As a starting point, word order in the two matrices was chosen such that word n in the German matrix was the translation of word n in the English matrix. Co-occurrence studies like that conducted by Wettler & Rapp (1993) have shown that for many purposes it is desirable to reduce the influence of word frequency on the co-occurrence counts. For the prediction of word associations they achieved best results when modifying each entry in the co- occurrence matrix using the following formula: ('f(i~J))' (1) A,j -- f(i). f(j) Hereby f(i&j) is the frequency of common occur- rence of the two words i and j, and f(i) is the corpus frequency of word i. However, for comparison, the simulations described below were also conducted us- ing the original co-occurrence matrices (formula 2) and a measure similar to mutual information (for- mula 3). 1 A,,j = f(i&j) (2) f(i&j) (3) ai,i f(i). f(j) Regardless of the formula applied, the English and the German matrix where both normalized. 2 Start- ing from the normalized English and German matri- ces, the aim was to determine how far the similarity of the two matrices depends on the correspondence of word order. As a measure for matrix similarity the sum of the absolute differences of the values at corresponding matrix positions was used. N N s = ~ ~ [E, a - G,,jl (4) i=1 ./=1 This similarity measure leads to a value of zero for identical matrices, and to a value of 20 000 in the case that a non-zero entry in one of the 100 * 100 matrices always corresponds to a zero-value in the other. 4 Results The simulation was conducted by randomly permut- ing the word order of the German matrix and then computing the similarity s to the English matrix. For each permutation it was determined how many words c had been shifted to positions different from those in the original German matrix. 
The simulation was continued until for each value of c a set of 1000 similarity values was available. 8 Figure 1 shows for the three formulas how the average similarity J be- tween the English and the German matrix depends on the number of non-corresponding word positions c. Each of the curves increases monotonically, with formula 1 having the steepest, i. e. best discriminat- ing characteristic. The dotted curves in figure 1 are the minimum and maximum values in each set of 1000 similarity values for formula 1. X The logarithm has been removed from the mutual information measure since it is not defined for zero co- occurrences. =Normalization was conducted in such a way that the suxn of all matrix entries adds up to the number of fields in the matrix. Sc ---- 1 is not possible and was not taken into account. 321 mooo 20 ..................................................... -,. O) 18 " :"--': <.... 16 j .. 14 ' • -,~ 12 E "~/ ..... 10 -I C '0 10 2"0 3"0 40 5"0 6"0 7"0 8"0 90 100 Figure 1: Dependency between the mean similarity i of the English and the German matrix and the num- ber of non-corresponding word positions c for 3 for- mulas. The dotted lines are the minimum and max- imum values of each sample of 1000 for formula 1. 5 Discussion and prospects It could be shown that even for unrelated Eng- lish and German texts the patterns of word co- occurrences strongly correlate. The monotonically increasing chaxacter of the curves in figure 1 indi- cates that in principle it should be possible to find word correspondences in two matrices of ditferent languages by randomly permuting one of the ma- trices until the similarity function s reaches a mini- mum and thus indicates maximum similarity. How- ever, the minimum-curve in figure 1 suggests that there are some deep minima of the similarity func- tion even in cases when many word correspondences axe incorrect. An algorithm currently under con- sttuction therefore searches for many local minima, and tries to find out what word correspondences axe the most reliable ones. In order to limit the seaxch space, translations that axe known beforehand can be used as anchor points. Future work will deal with the following as yet unresolved problems: • Computational limitations require the vocabu- laxies to be limited to subsets of all word types in large corpora. With criteria like the corpus frequency of a word, its specificity for a given domain, and the salience of its co-occurrence patterns, it should be possible to make a selec- tion of corresponding vocabularies in the two languages. If morphological tools and disv~m- biguators axe available, preliminaxy lemmatiz~ tion of the corpora would be desirable. • Ambiguities in word translations can be taken into account by working with continuous prob- abilities to judge whether a word translation is correct instead of making a binary decision. Thereby, different sizes of the two matrices could be allowed for. It can be expected that with such a method the qual- ity of the results depends on the thematic compara- bility of the corpora, but not on their degree of paz- allelism. As a further step, even with non parallel corpora it should be possible to locate comparable passages of text. Acknowledgements I thank Susan Armstrong and Manfred Wettler for their support of this project. Thanks also to Graham Russell and three anonymous reviewers for valuable comments on the manuscript. References Armstrong, Susan; Thompson, Henry (1995). A presentation of MLCC: Multilingual Corpora for Cooperation. 
Linguistic Database Workshop, Groningen. Brown, Peter; Cocke, John; Della Pietra, Stephen A.; Della Pietra, Vincent J.; Jelinek, Fredrick; Lstferty, John D.; Mercer, Robert L.; Rossin, Paul S. (1990). A statistical approach to machine trans- lation. Computational Linguistics, 16(2), 79-85. Catizone, Roberta; Russell, Graham; Waxwick, Su- san (1989). Deriving translation data from bilin- gual texts. In: U. Zernik (ed.): Proceedings of the First International Lezical Acquisition Workshop, Detroit. Church, Kenneth W.; Mercer, Robert L. (1993). Introduction to the special issue on Computa- tional Linguistics using large corpora. Computa- tional Linguistics, 19(1), 1-24. Dagan, Ido; Church, Kenneth W.; Gale, William A. (1993). Robust bilingual word alignment for ms- chine aided translation. Proceedings of the Work- shop on Very Large Corpora: Academic and In- dustrial Perspectives. Columbus, Ohio, 1-8. Kay, Maxtin; l~Sscheisen, Maxtin (1993). Text- Translation Alignment. Computational Linguis- tics, 19(1), 121-142. Kent, G.H.; R~sanoff, A.J. (1910). A study of asso- ciation in insanity. American Journal of Insanity, 67, 37-96, 317-390. Russell, Wallace A. (1970). The complete German language norms for responses to 100 words from the Kent-Rosanoff word association test. In: L. Postman, G. Keppel (eds.): Norms of Word As- sociation. New York: Academic Press, 53-94. Wettler, Manfred; Rapp, Reinhaxd (1993). Com- putation of word associations based on the co- occurrences of words in large corpora. In: Pro- ceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives, Columbus, Ohio, 84-93. 322 | 1995 | 50 |
Towards a Cognitively Plausible Model for Quantification Walid S. Saba AT&T Bell Laboratories 480 Red Hill Rd., Middletown, NJ 07748 USA and Carelton University, School of Computer Science Ottawa, Ontario, KIS-5B6 CANADA [email protected] Abstract The purpose of this paper is to suggest that quantifiers in natural languages do not have a fixed truth functional meaning as has long been held in logical semantics. Instead we suggest that quantifiers can best be modeled as complex inference procedures that are highly dynamic and sensitive to the linguistic context, as well as time and memory constraints 1. 1 Introduction Virtually all computational models of quantification are based one some variation of the theory of generalized quantifiers (Barwise and Perry, 1981), and Montague's (1974) (henceforth, PTQ). Using the tools of intensional logic and possible- worlds semantics, PTQ models were able to cope with certain context-sensitive aspects of natural language by devising interpretation relative to a context, where the context was taken to be an "index" denoting a possible- world and a point in time. In this framework, the intension (meaning) of an expression is taken to be a function from contexts to extensions (denotations). In what later became known as "indexical semantics", Kaplan (1979) suggested adding other coordinates defining a speaker, a listener, a location, etc. As such, an utterance such as "I called you yesterday" expressed a different content whenever the speaker, the listener, or the time of the utterance changed. While model-theoretic semantics were able to cope with certain context-sensitive aspects of natural language, the intensions (meanings) of quantJfiers, however, as well as other functional words, such as sentential connectives, are taken to be constant. That is, such words have the same meaning regardless of the context (Forbes, 1989). In such a framework, all natural language quantifiers have their meaning grounded in terms of two logical operators: V (for all), and q (there exists). Consequently, all natural language quantifiers ! The support and guidance of Dr. Jean-Pierre Corriveau of Carleton University is greatly appreciated. are, indirectly, modeled by two logical connectives: negation and either conjunction or disjunction. In such an oversimplified model, quantifier ambiguity has often been translated to scoping ambiguity, and elaborate models were developed to remedy the problem, by semanticists (Cooper, 1983; Le Pore et al, 1983; Partee, 1984) as well as computational linguists (Harper, 1992; Alshawi, 1990; Pereira, 1990; Moran, 1988). The problem can be illustrated by the following examples: (la) Every student in CS404 received a grade. (lb) Every student in CS404 received a course outline. The syntactic structures of (la) and (lb) are identical, and thus according to Montague's PTQ would have the same translation. Hence, the translation of (lb) would incorrectly state that students in CS404 received different course outlines. Instead, the desired reading is one in which "a" has a wider scope than "every" stating that there is a single course outline for the course CS404, an outline that all students received. Clearly, such resolution depends on general knowledge of the domain: typically students in the same class receive the same course outline, but different grades. Due to the compositionality requirement, PTQ models can not cope with such inferences. 
Consequently, a number of syntactically motivated rules that impose an ad hoc semantic ordering between functional words are typically suggested. See, for example, (Moran, 1988).2 What we suggest, instead, is that quantifiers in natural language be treated as ambiguous words whose meaning is dependent on the linguistic context, as well as time and memory constraints.

2 Disambiguation of Quantifiers

Disambiguation of quantifiers, in our opinion, falls under the general problem of "lexical disambiguation", which is essentially an inferencing problem (Corriveau, 1995).
2 In recent years a number of suggestions have been made, such as discourse representation theory (DRT) (Kamp, 1981), and the use of what Cooper (1995) calls the "background situation". However, in both approaches the available context is still "syntactic" in nature, and no suggestion is made on how relevant background knowledge can be made available for use in a model-theoretic model.
Briefly, the disambiguation of "a" in (1a) and (1b) is determined in an interactive manner by considering all possible inferences between the underlying concepts. What we suggest is that the inferencing involved in the disambiguation of "a" in (1a) proceeds as follows:
1. A path from grade and student, s, in addition to disambiguating grade, determines that grade, g, is a feature of student.
2. Having established this relationship between students and grades, we assume that the fact that this relationship is many-to-many is known.
3. "a grade" now refers to "a student grade", and thus there is "a grade" for "every student".
What is important to note here is that, by discovering that grade is a feature of student, we essentially determined that "grade" is a (Skolem) function of "student", which is the effect of having "a" fall under the scope of "every". However, in contrast to syntactic approaches that rely on devising ad hoc rules, such a relation is discovered here by performing inferences using the properties that hold between the underlying concepts, resulting in a truly context-sensitive account of scope ambiguities.
The inferencing involved in the disambiguation of "a" in (1b) proceeds as follows:
1. A path from course and outline disambiguates outline, and determines outline to be a feature of course.
2. The relationship between course and outline is determined to be a one-to-one relationship.
3. A path from course to CS404 determines that CS404 is a course.
4. Since there is one course, namely CS404, "a course outline" refers to "the" course outline.

3 Time and Memory Constraints

In addition to the linguistic context, we claim that the meaning of quantifiers is also dependent on time and memory constraints. For example, consider
(2a) Cubans prefer rum over vodka.
(2b) Students in CS404 work in groups.
Our intuitive reading of (2a) suggests that we have an implicit "most", while in (2b) we have an implicit "all". We argue that such inferences are dependent on time constraints and constraints on working memory. For example, since the set of students in CS404 is a much smaller set than the set of "Cubans", it is conceivable that we are able to perform an exhaustive search over the set of all students in CS404 to verify the proposition in (2b) within some time and memory constraints. In (2a), however, we are most likely performing a "generalization" based on few examples that are currently activated in short-term memory (STM).
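The contrast between (2a) and (2b) can be illustrated with a minimal sketch (ours, not the paper's; the function, its parameters, and the 0.9 threshold are assumptions made only for illustration): when the extension of C is small enough to enumerate within the available resources, "every C P" is verified exhaustively, and otherwise a decision is generalized from the instances currently active in short-term memory.

# Illustrative sketch only: interpreting "every C P" under time and
# memory bounds.  All names and the 0.9 threshold are hypothetical.
def verify_every(C, P, stm, budget):
    if len(C) <= budget:                     # exhaustive check is feasible
        return all(P(x) for x in C)          # strict "all" reading, as in (2b)
    evidence = [x for x in stm if x in C]    # instances active in short-term memory
    if not evidence:
        return None                          # no basis for a decision
    positives = sum(1 for x in evidence if P(x))
    return positives / len(evidence) >= 0.9  # "most"-like generalization, as in (2a)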
Our suggestion of the role of time and memory constraints is based on our view of properties and their negation. We suggest that there are three ways to conceive of properties and their negation, as shown in Figure 1 below.
Figure 1: Three models of negation (panels (a), (b) and (c)).
In (a), we take the view that if we have no information regarding P(x), then we cannot decide on ¬P(x). In (b), we take the view that if P can not be confirmed of some entity x, then P(x) is assumed to be false.3
3 This is the Closed World Assumption.
In (c), however, we take the view that if there is no evidence to negate P(x), then assume P(x). Note that model (c) essentially allows one to "generalize", given no evidence to the contrary - or, given overwhelming positive evidence. Of course, formally speaking, we are interested in defining the exact circumstances under which models (a) through (c) might be appropriate. We believe that the three models are used, depending on the context, time, and memory constraints.
In model (c), we believe the truth (or falsity) of a certain property P(x) is a function of the following:
np(P,x) the number of positive instances satisfying P(x)
nn(P,x) the number of negative instances satisfying P(x)
cf(P,x) the degree to which P is "generally" believed of x.
It is assumed here that cf is a value v ∈ {⊥} ∪ [0,1]. That is, a value that is either undefined, or a real value between 0 and 1. We also suggest that this value is constantly modified (re-enforced) through a feedback mechanism, as more examples are experienced.4
4 This is similar to the dynamic reasoning process suggested by Wang (1995).

4 Role of Cognitive Constraints

The basic problem is one of interpreting statements of the form every C P (the set-theoretic counterpart of the wff ∀x(C(x) → P(x))), where C has an indeterminate cardinality. Verifying every C P is depicted graphically in Figure 2. It is assumed that the property P is generally attributed to members of the concept C with certainty cf(C,P), where cf(C,P) = 0 represents the fact that P is not generally assumed of objects in C. On the other hand, a value of cf near 1 represents a strong bias towards believing P of C at face value. In the former case, the processing will depend little, if at all, on our general belief, but more on the actual instances. In the latter case, and especially when faced with time and memory constraints, more weight might be given to prior stereotyped knowledge that we might have accumulated. More precisely:
1. An attempt at an exhaustive verification of all the elements in the set C is first made (this is the default meaning of "every").
2. If time and memory capacity allow the processing of all the elements in C, then the result is "true" if np = |C| (that is, if every C P), and "false" otherwise.
3. If time and/or memory constraints do not allow an exhaustive verification, then we will attempt making a decision based on the evidence at hand, where the evidence is based on cf, nn, np (a suggested function is given below).
4. In 3, cf is computed from C elements that are currently active in short-term memory (if any); otherwise cf is the current value associated with C in the KB.
5. The result is used to update our certainty factor, cf, based on the current evidence.5
5 The nature of this feedback mechanism is quite involved, and will not be discussed here.
Figure 2: Quantification with time and memory constraints.
In the case of 3, the final output is determined as a function F that could be defined as follows:

(13) F(C,P)(nn, np, ε, cf, ω) = (np > ε·nn) ∧ (cf(C,P) ≥ ω)

where ε and ω are quantifier-specific parameters. In the case of "every", the function in (13) states that, in the absence of time and memory resources to process every C P exhaustively, the result of the process is "true" if there is overwhelming positive evidence (a high value for ε), and if there is some prior stereotyped belief supporting this inference (i.e., if cf > ω > 0). This essentially amounts to processing every C P as most C P (example (2a)). If "most" was the quantifier we started with, then the function in (13) and the above procedure can be applied, although smaller values for ε and ω will be assigned.
At this point it should be noted that the above function is a generalization of the theory of generalized quantifiers, where quantifiers can be interpreted using this function as shown in the table below.

quantifier | np       | nn
every      | np = |C| | nn = 0
some       | np > 0   | nn < |C|
no         | np = 0   | nn = |C|

We are currently in the process of formalizing our model, and hope to define a context-sensitive model for quantification that is also dependent on time and memory constraints. In addition to the "cognitive plausibility" requirement, we require that the model preserve formal properties that are generally attributed to quantifiers in natural language.

References

Alshawi, H. (1990). Resolving Quasi Logical Forms, Computational Linguistics, 16(3), pp. 133-144.
Barwise, J. and Cooper, R. (1981). Generalized Quantifiers and Natural Language, Linguistics and Philosophy, 4, pp. 159-219.
Cooper, R. (1995). The Role of Situations in Generalized Quantifiers, In S. Lappin (Ed.), Handbook of Contemporary Semantic Theory, Blackwell.
Cooper, R. (1983). Quantification and Syntactic Theory, D. Reidel, Dordrecht, Netherlands.
Corriveau, J.-P. (1995). Time-Constrained Memory, to appear, Lawrence Erlbaum Associates, NJ.
Forbes, G. (1989). Indexicals, In D. Gabbay et al. (Eds.), Handbook of Phil. Logic: IV, D. Reidel.
Harper, M. P. (1992). Ambiguous Noun Phrases in Logical Form, Comp. Linguistics, 18(4), pp. 419-465.
Kamp, H. (1981). A Theory of Truth and Semantic Representation, In Groenendijk et al. (Eds.), Formal Methods in the Study of Language, Mathematisch Centrum, Amsterdam.
Kaplan, D. (1979). On the Logic of Demonstratives, Journal of Philosophical Logic, 8, pp. 81-98.
Le Pore, E. and Garson, J. (1983). Pronouns and Quantifier-Scope in English, J. of Phil. Logic, 12.
Montague, R. (1974). Formal Philosophy: Selected Papers of Richard Montague. R. Thomason (ed.). Yale University Press.
Moran, D. B. (1988). Quantifier Scoping in the SRI Core Language, In Proceedings of 26th Annual Meeting of the ACL, pp. 33-40.
Partee, B. (1984). Quantification, Pronouns, and VP-Anaphora, In J. Groenendijk et al. (Eds.), Truth, Interpretation and Information, Dordrecht: Foris.
Pereira, F. C. N. and Pollack, M. E. (1991). Incremental Interpretation, Artificial Intelligence, 50.
Wang, P. (1994). From Inheritance Relation to Non-Axiomatic Logic, International Journal of Approximate Reasoning, (accepted June 1994 - to appear).
Zeevat, H. (1989). A Compositional Approach to Discourse Representation Theory, Linguistics and Philosophy, 12, pp. 95-131.
Aspect and Discourse Structure: Is a Neutral Viewpoint Required?* Frank Schilder Centre for Cognitive Science 2 Buccleuch Place Edinburgh EH8 9LW, Scotland, U.K. Internet: schilder@cogsc±, ed. ac.uk Abstract We apply Smith's theory of aspect (1991) to German - a language without any as- pectual markers. In particular, we try to shed more light on the effects aspect can have on discourse structure and show how English and German behave dif- ferently in this respect. We furthermore describe how Smith's notion of a neutral viewpoint can be helpful for the anal- ysis of discourse in German. It turned out that proposals claiming that the Ger- man Preterite covers the progressive as well as the simple aspect can not suffi- ciently explain the data presented in this paper (B~iuerle, 1988). Finally we give a sltuatlon-theoretic approach to for- malize Smith's intuitions following Glas- bey (1994) incorporating Allen's interval- calculus (Allen, 1984). 1 Viewpoint and Situation Aspect Smith (1991) presents two terms which are assigned to two distinct phenomena in language: viewpoint and situation aspect. This two-level theory gives an explanation for the difference between aspectual information understood as a view on a situation and temporal features of a situation. The former can be gained after applying a certain viewpoint chosen by the speaker and the latter one is stored in the lexical entry of a lexeme.l " The author gratefully acknowledges the helpful comments of Sheila Glasbey, Lex Holt and the three anonymous reviewers of this paper. This research was supported by a PhD-scholarship HSPII/AUFE awarded by the German Academic Exchange Service (DAAD). 1Besides the situation aspect described by the seman- tic entry of the verb many other sentential constituents (e.g. object or subject NPs) may have an influence on it (Krif~, 1992). Situation Aspects Smith introduces three so- called "conceptual features" of situation aspects which have binary values [-4-], namely stative, dura- tire and telic. Five different situation aspects have emerged which are distinguished using these features and certain temporal schemata? Examples: - Sam owned three peach orchards. (State) - Lily swam in the pond. (Activity) - Mrs Ramsey wrote a letter. (Accomplishment) - Lily knocked at the door. (Semelfactive) - Mr Ramsey reached the lighthouse. (Achievement) Viewpoints Smith postulates three different viewpoints. Schematically she uses an idealised time line where the initial and finishing points of a situ- ation are indicated by I and F respectively. The duration of the situation can be drawn in two differ- ent ways: as an unstructured (--) and a structured (...) phase which has internal stages. The view- point is understood in this representation as a focus on parts or on the whole situation (///) (figure 1). a) i..IIIIIIIIIIIIIIII..F b) I F IIIIIIIIIIIIIIII c) I. /// Figure 1: The a) imperfective b) perfective and c) neutral viewpoint Two viewpoints correspond mainly to the well- known opposition perfective/imperfective. However, additionally Smith assumes a so-called neutral view- point which contains the initial point and at least one internal stage. Aspectually vague sentences which provide either an open or a closed reading back up Smith's consid- ZSee Smith (1991) for a detailed discussion. 326 erations (Smith, 1991:120). 3 However, she restricts her analysis to single sen- tences and neglects the effects viewpoints can have in a discourse. We will therefore focus on this issue in the next section. 
2 Discourse Structure We investigate here which viewpoint is appropriate for the German Preterite. B~uerle (1988:131), for instance, claims that this tense in German is am- biguous w.r.t, the perfective/imperfective view on a situation and gives the following evidence for it: (1) a. Der Angeklagte fuhr nach Hause. Dort trank er ein Glas Trollinger. The defendant drove home. There he drank a glass of Trollinger. b. Der Angeklagte fuhr nach Hause. Am Lustnauer Tor hatte er einen schweren Unfall und musste ins Krankenhaus eingeliefert werden. The defendant was driving home. At the Lustnauer tower he had a serious accident and had to be admitted to the hospital. In (la) the VP fuhr nach Hause refers to a com- pleted event and therefore contains an end point. In (lb) this end point is denied by the second sentence. Note that the English translation of (lb) is therefore only correct if an imperfective view is used. This data shows that the use of the Preterite in German does not commit the speaker to saying any- thing about the end point. Every inference regarding the ending of a situation is due to the context or our world knowledge. It may be concluded from (1) that we cannot as- sume a perfective viewpoint, because this view in- cludes the end point of a situation. The follow- ing discourse will furthermore show that also the imperfective view is not applicable to the German Preterite. It is commonly supposed that the imperfective viewpoint which refers to the middle of a situation omitting the initial as well as the final point can be used for describing a background within a discourse (cf. Smith, 1991:130): (2) The defendant had an accident. He was driving home (at this time). 3FoUowing Smith (1991) we applied two tests to Ger- man data regarding the temporal properties of the end point of a situation which are discussed in Schilder (1995). A direct German translation, however, expresses two subsequent events. At first the defendant had an accident and then he drove home: (3) Der Angeklagte hatte einen Unfall. Er fuhr nach Hause (??zu der Zeit). Adding the PP zu der Zeit ('at this time') the sentence functions as a background for the event described by the first sentence, but this discourse sounds awkward and the continuation with a state in (4) is clearly preferred. 4 (4) Er war auf dem Weg nach Hause. Discourse (3) shows that for the German Preterite the initial point is focussed by the viewpoint. This observation proves therefore that this tense is not ambiguous w.r.t the progressive and the simple as- pect as B~uerle (1988) claims. To sum up, these two discourses can be seen to show that the German aspect system for the Preterite offers only a neutral view on every situ- ation. Moreover, this data disproves B/~uerle's explana- tion of (1), clarifies Smith's definition of a viewpoint and motivates the need for a neutral viewpoint in German. It is obviously a shortcoming of Smith's descrip- tion to define the viewpoint merely as a focus on parts or on the whole situation. It emerged from the discourse examples that a crucial function of the viewpoint is the commitment the speaker gives as to whether the end point has been reached or not. In English, the perfective view sets the end point 5 and no cancellation is allowed afterwards. A neu- tral view on a situation gives only a confirmation of the initial point. It leaves open whether the end has been reached or not. 
Only the temporal knowledge derived from the situation aspect can provide further information which, however, may be overridden by the context.

3 A Situation-theoretic Formalisation

We follow Glasbey (1994:15) in her criticism of Smith's formalisation within Discourse Representation Theory (DRT) (Kamp & Reyle, 1993).6
4 Note that the PP at this time is not required for the English discourse to be fully understood.
5 Provided that the situation aspect provides an inherent end point, which is not the case for states.
6 A new account presented by Asher (1993) to describe types of eventualities is currently being investigated. Note that the standard definition of DRT does not provide any description of types or other abstract entities.
Unlike DRT, STDRT (Cooper, 1992) has the notion of an event type which can be used for the information given by the situation aspect. Note that this event type does not have to be instantiated with a situation of this type; it will therefore not be introduced like a discourse referent in a discourse representation structure.7
7 Figure 2 shows a simplified representation of the accomplishment event type. No account will be given of the treatment of PPs like nach Hause for the time being.
Figure 2: The complete event type ψ (a situation-theoretic DRS with the conditions fahren(X,Y,T), named(X,'Der Angeklagte'), named(Y,'nach Hause'), classified as an ACCOMPLISHMENT).
The first sentence of (1) refers to a situation sn, where sn is of a type φ. Type φ can be seen as the part of an episode of the complete event type ψ which is focussed by the neutral viewpoint. We therefore have to define the initial point and the first stage.

α ◁_initial β iff:
∀e, e′ [[e : α ∧ e′ : β] → e ◁ e′ ∧ [∀e″ [e″ ◁ e′ → t {BEFORE, MEETS} t″]]]

α ◁_first_stage β iff:
∀e, e′, e″ [[e : α ∧ e′ : β ∧ e″ : γ ∧ γ ◁_initial β] → e ◁ e′ ∧ t″ {MEETS} t]

t, t′ and t″ are the occurrence times of e, e′ and e″ respectively, ◁ is the PART-OF relation between situations, and BEFORE and MEETS are Allen's interval relations as defined in Allen (1984).

4 Conclusion

We showed that Smith's notion of a neutral viewpoint is crucial for German. In particular, we investigated the effects this viewpoint has on a discourse level and compared it with English. It may be concluded from this analysis that discourse structure differs depending on the language. A discourse grammar developed for English cannot easily be applied to German. This cross-linguistic account gives prominence to the underlying concepts instead of focussing only on the surface structure which is unalterably bound to the peculiarity of a single object language.
In our analysis for German, we therefore highlighted the following two properties which can be stipulated regarding the neutral viewpoint:
• The end point of a situation is beyond the focus of this viewpoint. Default information given by the situation aspect may be overridden by the context.8
• The neutral viewpoint contains the initial point of the situation. Backgrounding - a typical function of the imperfective view where the initial point is not included - is therefore not applicable for this viewpoint.
Furthermore, the proposed formalisation provides an account which can handle the discussed phenomena within an implementation; this is ongoing work.

References

James Allen. 1984. Towards a general theory of action and time. Artificial Intelligence, 23:123-154.
Nicholas Asher. 1993. Reference to Abstract Objects in Discourse. Kluwer, Dordrecht.
Rainer Bäuerle. 1988. Ereignisse und Repräsentationen.
LILOG-REPORT 43, IBM Deutschland, Stuttgart. Robin Cooper. 1992. Discourse representation in situation theory. Reading material for the 4 th Eu- ropean Summer School in Logic, Language and In- formation, University of Essex, Colchester, England, August. Sheila Glasbey. 1994. Progressives, events and states. In Paul Dekker and Martin Stokhof, ed- itors, Pro(:. of the 9 th Amsterdam Colloquium. ILLC/Department of Philosophy, University of Am- sterdam. Hans Kamp and Uwe Reyle. 1993. From Dis- course to Logic: Introduction to Modeltheoretic Se- mantics of Natural Language. Kluwer, Dordrecht. Manfred Krifka. 1992. Thematic Relations as Links between Nominal Reference and Temporal Constitution. In Ivan A. Sag and Anna Szabolcsi, editors, Lexical Matters. CSLI. Frank Schilder. 1995. A neutral view on German. To appear in Proc. of the 5 th International Toulouse Workshop on Time, Space and Movement, Toulouse. IRIT. Carlota S. Smith. 1991. The Parameter of Aspect. Kluwer, Dordrecht. 8 A formalisation of this intuition by a non-monotonic reasoning mechanism is described in Schilder (1995). 328 | 1995 | 52 |
Conciseness through Aggregation in Text Generation James Shaw Dept. of Computer Science Columbia University New York, NY 10027, USA shaw~cs, columbia, edu Abstract Aggregating different pieces of similar in- formation is necessary to generate concise and easy to understand reports in techni- cal domains. This paper presents a general algorithm that combines similar messages in order to generate one or more coherent sentences for them. The process is not as trivial as might be expected. Problems en- countered are briefly described. 1 Motivation Aggregation is any syntactic process that allows the expression of concise and tightly constructed text such as coordination or subordination. By using the parallelism of syntactic structure to express similar information, writers can convey the same amount of information in a shorter space. Coordination has been the object of considerable research (for an overview, see (van Oirsouw87)). In contrast to lin- guistic approaches, which are generally analytic, the treatment of coordination in this paper is from a synthetic point of view -- text generation. It raises issues such as deciding when and how to coordinate. An algorithm for generating coordinated sentences is implemented in PLANDoc (Kukich et al.93; McKe- own et ah94), an automated documentation system. PLANDoc generates natural language reports based on the interaction between telephone planning engineers and LEIS-PLAN 1, a knowledge based sys- tem. Input to PLANDoc is a series of messages, or semantic functional descriptions (FD, Fig. 1). Each FD is an atomic decision about telephone equipment installation chosen by a planning engineer. The do- main of discourse is currently limited to 31 mes- sage types, but user interactions include many vari- ations and combinations of these messages. Instead of generating four separate messages as in Fig. 2, PLANDoc combines them and generates the follow- ing two sentences: "This refinement activated DLC for CSAs 3122 and 3130 in the first quarter of 1994 1LEIS is a registered trademark of Bell Communica- tions Research, Piscataway, NJ. and ALL-DLC for CSA 3134 in 1994 Q3. It also activated DSS-DLC for CSA 3208 in 1994 Q3." 2 System Architecture Fig. 3 is an overview of PLANDoc's architecture. Input to the message generator comes from LEIS- PLAN tracking files which record user's actions dur- ing a planning session. The ontologizer adds hier- archical structure to messages to facilitate further processing. The content planner organizes the over- all narrative and determines the linear order of the messages. This includes combining atomic messages into aggregated messages, choosing cue words, and determining paraphrases that maintain focus and ensure coherence. Finally the FUF/SURGE pack- age (Elhadad91; Robin94) lexicalizes the messages and maps case roles into syntactic roles, builds the constituent structure of the sentence, ensures agree- ment, and generates the surface sentences. 3 Combining Strategy Because PLANDoc can produce many paraphrases for a single message, aggregation during the syntac- tic phase of generation would be difficult; semanti- cally similar messages would already have different surface forms. As a result, aggregation in PLANDoc is carried out at the content planning level using se- mantic FDs. Three main criteria were used to design the combining strategy: 1. domain independence: the algorithm should be applicable in other domains. 2. 
generating the most concise text: it should avoid repetition of phrases to generate shortest text. ((cat message) (admin ((PLANDoc-message-name RDA) (runid r-regl))) (class refinement) (action activation) (equipment-type all-dlc) (csa-site 3134) (date ((year 1994) (quarter 3)))) Figure h Output of the Message Generator 329 This refinement activated ALL-DLC for CSA 3134 in 1994 Q3. This refinement activated DLC for CSA 3130 in 1994 Q1. This refinement activated DSS-DLC for CSA 3208 in 1994 Q3. This refinement activated DLC for CSA 3122 in 1994 Q1. Equipment: El= ALL-DLC, E2= DLC, E3= DSS-DLC Site: SI= CSA 3122, $2= CSA 3130, $3= CSA 3134, $4= CSA 3208 Date: DI= 1994 Q1, D2= 1994 Q3 Figure 2: Unaggregated Text Output (El $3 D2) (E2 $2 D1) (E3 S4 D2) (E2 S1D1) LEIS- [ Message PLAN , Generator (C) (C) Ontologizer(FUF) ~ Contentplanner(Lisp) , Lexica/izer(FUF) Figure 3: PLANDoc System Architecture Surface Generator (SURGE) PLANDoc Narrative (text) 3. avoidance of overly-complex sentences: it should not generate sentences that are too com- plex or ambiguous for readers. The first aggregation step is to identify semantically related messages. This is done by grouping messages with the same action attribute. Then the system at- tempts to generate concise and unambiguous text for each action group separately. This reduces the problem size from tens of messages into much smaller sizes. Though this heuristic disallows the combina- tion of messages with different actions, the messages in each action group already contain enough infor- mation to produce quite complex sentences. The system combines the maximum number of re- lated messages to meet the second design criterion- generating the most concise text. But such combi- nation is blocked when a sentence becomes too com- plex. A bottom-up 4-step algorithm was developed: 1. Sorting: putting similar messages right next to each other. 2. Merging Same Attribute: combining adja- cent messages that only have one distinct at- tribute. 3. Identity Deletion: deletion of identical com- ponents across messages. 4. Sentence Breaking: determining sentence breaks. 3.1 Step h Sorting The system first ranks the attributes to determine which are most similar across messages with the same action. For each potential distinct attribute, the system calculates its rank using the formula m - d, where m is the number of messages and d is the number of distinct attributes for that par- ticular attribute. The rank is an indicator of how similar an attribute is across the messages. Com- bining messages according to the highest ranking attribute ensures that minimum text will be gen- erated for these messages. Based on the ranking, the system reorders the messages by sorting, which (E2 S1D1) (El S3 D2) (E2 S1D1) (E2 $2 D1) (E2 S1D1) (E2 S2 D1) (El $3 D2) --> (E2 $2 D1) --> (El $3 D2) (E3 $4 D2) (E3 $4 D2) (E3 $4 D2) by Site by Equipment by Date Figure 4: Step 1. Sorting puts the messages that have the same attribute right next to each other. In Fig. 2, equipment has rank 1 because it has 3 distinct equipment values - ALL- DLC, DLC, and DSS-DLC; date has rank 2 because it has two distinct date values - 1994 Q1 and 1994 Q3; site has rank 0. Attribute class and action (Fig. 1) are ignored because they are always the same at this stage. When two attributes have the same rank, the system breaks the tie based on a priority hierar- chy determined by the domain experts. 
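The ranking just described can be made concrete with a small sketch (ours, not PLANDoc's actual code; the message encoding, function names and the priority list are assumptions for illustration):

# Sketch of the Step-1 attribute ranking: rank(attr) = m - d, where m is
# the number of messages in the action group and d is the number of
# distinct values the attribute takes across them.  Ties are broken with
# a domain-supplied priority list (lower index = higher priority).
def rank_attributes(messages, priority):
    m = len(messages)
    ranks = {attr: m - len({msg[attr] for msg in messages})
             for attr in messages[0]}
    return sorted(ranks, key=lambda a: (ranks[a], priority.index(a)))

# The four messages of Fig. 2:
msgs = [
    {"equipment": "ALL-DLC", "site": "3134", "date": "1994Q3"},
    {"equipment": "DLC",     "site": "3130", "date": "1994Q1"},
    {"equipment": "DSS-DLC", "site": "3208", "date": "1994Q3"},
    {"equipment": "DLC",     "site": "3122", "date": "1994Q1"},
]
rank_attributes(msgs, ["site", "equipment", "date"])
# -> ["site", "equipment", "date"]: site has rank 0, equipment rank 1 and
#    date rank 2, matching the ranks discussed above.

The returned list gives the order in which the sort keys are then applied to the message list.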
Because the final sorting operation dominates the order of the resulting messages, PLANDoc sorts the message list from the lowest rank attribute to the highest. In this case, the ordering for sorting is site, equipment, and then date. The resulting message list after sorting each attribute is shown in Fig. 4. 3.2 Step 2: Merging Same Attribute The list of sorted messages is traversed. When- ever there is only one distinct attribute between two adjacent messages, they are merged into one message with a conjoined attribute, which is a list of the distinct attributes from both messages. What about messages with two or more distinct at- tributes? Merging two messages with two or more distinct attributes will result in a syntactically valid sentence but with an undesirable meaning: "*This refinement activated ALL-DLC and DSS-DLC for CSAs 3122 and 3130 in the third quarter of 1993." By tracking which attribute is compound, a third message can be merged into the aggregate message if it also has the same distinct attribute. Continue from Step 1, (E2 S1 D1) and (E2 $2 D1) are merged because they have only one distinct attribute, site. A new FD, (E2 (S1 $2) D1), is assembled to replace 330 those two messages. Note that although (El $3 D2) and (E3 $4 D2) have the date in common, they are not combined because they have more than one dis- tinct attribute, site and equipment. Step 2 is applied to the message list recursively to generate possible crossing conjunction, as in the following output which merges four messages: "This refinement activated ALL-DLC and DSS-DLC for CSAs 3122 and 3130 in the third quarter of 1993." Though on the outset this phenomenon seems un- likely, it does happen in our domain. 3.3 Step 3: Identity Deletion After merging at step 2, the message list left in an action group either has only one message, or it has more than one message with at least two distinct attributes between them. Instead of generating two separate sentences for (E2 (S1 $2) D1) and (El $3 D2), the system realizes that both the subject and verb are the same, thus it uses deletion on identity to generate "This refinement activated DLC for CSAs 3122 and 3130 in 1994 Q1 and [this refinement ac- tivated] ALL-DLC for CSA 3134 in 1994 Q3." For identical attributes across two messages (as shown in the bracketed phrase), a "deletion" feature is in- serted into the semantic FD, so that SURGE will suppress the output. 3.4 Step 4: Sentence Break Applying deletion on identity blindly to the whole message list might make the generated text incom- prehensible because readers might have to recover too much implicit information from the sentence. As a result, the combining algorithm must have a way to determine when to break the messages into separate sentences that are easy to understand and unambiguous. How much information to pack into a sentence does not depend on grammaticality, but on coher- ence, comprehensibility, and aesthetics which are hard to formalize. PLANDoc uses a heuristic that always joins the first and second messages, and con- tinues to do so for third and more if the distinct attributes between the messages are the same. This heuristics results in parallel syntactic structure and the underlying semantics can be easily recovered. Once the distinct attributes are different from the combined messages, the system starts a new sen- tence. Using the same example, (E2 (S1 $2) D1) and (El $3 D2) have three distinct attributes. They are combined because they are the first two messages. 
Comparing the third message (E3 $4 D2) to (El $3 D2), they have different equipment and site, but not date, so a sentence break will take place between them. Aggregating all three messages together will results in questionable output. Because of the par- allel structure created between the first 2 messages, readers are expecting a different date when reading the third clause. The second occurrence of "1994 Q3" in the same sentence does not agree with read- ers' expectation thus potentially confusing. 4 Future Directions In this paper, I have described a general algorithm which not only reduces the amount of the text pro- duced, but also increases the fluency of the text. While other systems do generate conjunctions, they deal~vith restricted cases such as conjunction of sub- jects and predicates(Dalianis~zHovy93). There are other interesting problems in aggregations. Gener- ating marker words to indicate relationships in con- joined structures, such as "respectively", is another short term goal. Extending the current aggregation algorithm to be more general is currently being in- vestigated, such as combining related messages with different actions. 5 Acknowledgements The author thanks Prof. Kathleen McKeown, and Dr. Karen Kukich at Bellcore for their advice and support. This research was conducted while sup- ported by Bellcore project ~CU01403301A1, and under the auspices of the Columbia University CAT in High Performance Computing and Communica- tions in Healthcare, a New York State Center for Advanced Technology supported by the New York State Science and Technology Foundation. References Dalianis, Hercules, and Hovy, Edward. 1993. Ag- gregation in Natural Language Generation. In Proceedings of the Fourth European Workshop on Natural Language Generation, Pisa, Italy. Elhadad, Michael. 1991. FUF: The universal unifier - user manual, version 5.0. Tech Report CUCS- 038-91, Columbia Univ. Robin, Jacques. 1994. Revision-Based Generation of Natural Language Summaries Providing Histor- ical Background: Corpus-based analysis, design, implementation and evaluation. Ph.D. thesis, Computer Science Department, Columbia Univ. Kukieh, K., McKeown, K., Morgan, N., Phillips, J., Robin, J., Shaw, J., and Lim, :I. 1993. User-Needs Analysis and Design Methodology for an Auto- mated Documentation Generator. In Proceedings of the Fourth Bellcore/BCC Symposium on User- Centered Design, Piseataway, NJ. McKeown, Kathleen, Kukich, Karen, and Shaw, James. 1994. Practical Issues in Automatic Doc- umentation Generation. In Proceedings of the ~,th Conference on Applied Natural Language Process- ing, Stuttgart, p.7-14. van Oirsouw, Robert. 1987. The Syntax of Coordi- nation Beckenham: Croom Helm. 331 | 1995 | 53 |
Quantifying lexical influence: Giving direction to context V Krip~sundar kripa~cs, buffalo, edu CEDAR & Dept. of Computer Science SUNY at Buffalo Buffalo NY 14260, USA Abstract The relevance of context in disambiguat- ing natural language input has been widely acknowledged in the literature. However, most attempts at formalising the intuitive notion of context tend to treat the word and its context symmetrically. We demonstrate here that traditional measures such as mu- tual information score are likely to overlook a significant fraction of all co-occurrence phenomena in natural language. We also propose metrics for measuring directed lex- ical influence and compare performances. Keywords: contextual post-processing, defining context, lexical influence, direc- tionality of context 1 Introduction It is widely accepted that context plays a significant role in shaping all aspects of language. Indeed, com- prehension would be utterly impossible without the extensive application of contextual information. Ev- idence from psycholinguistic and cognitive psycho- logical studies also demonstrates that contextual in- formation affects the activation levels of lexical can- didates during the process of perception (Weinreich, 1980; McClelland, 1987). Garvin (1972) describes the role of context as follows: [The meaning of] a particular text [is] not the system-derived meaning as a whole, but that part of it which is included in the con- textually and situationally derived mean- ing proper to the text in question. (p. 69- 70) In effect, this means that the context of a word serves to restrict its sense. The problem addressed in this research is that of improving the performance of a natural-language recogniser (such as a recognition system for hand- written or spoken language). The recogniser out- put typically consists of an ordered set of candidate words (word-choices) for each word position in the input stream. Since natural language abounds in contextual information, it is reasonable to utilise this in improving the performance of the recogniser (by disambiguating among the word-choices). The word-choices (together with their confidence values) constitute a confusion set. The recogniser may further associate a confidence-value with each of its word choices to communicate finer resolution in its output. The language module must update these confidence values to reflect contextual knowledge. 2 Linguistic post-processing The language module can, in principle, perform several types of "post-processing" on the word- candidate lists that the recogniser outputs for the different word-positions. The most promising possi- bilities are: • re-ranking the confusion set (and assigning new confidence-values to its entries), and, • deleting low-confidence entries from the confu- sion set (after applying contextual knowledge) Several researchers in NLP have acknowledged the relevance of context in disambiguating natural lan- guage input ((Evett et al., 1991); (Zernik, 1991); (Hindle & Rooth, 1993); (Rosenfeld, 1994)). In fact, the recent revival of interest in statistical language processing is partly because of its (comparative) suc- cess in modelling context. However, a theoretically sound definition of context is needed to ensure that such re-ranking and deleting of word-choices helps and not hinders (Gale & Church, 1990). Researchers in information theory have come up with many inter-related formalisations of the ideas of context and contextual influence, such as mutual in- formation and joint entropy. 
However, to our knowl- edge, all attempts at arriving at a theoretical basis for formalising the intuitive notion of context have treated the word and its context symmetrically. Many researchers ((Smadja, 1991); (Srihari & Bal- tus, 1993)) have suggested that the information- theoretic notion of mutual information score (MIS) directly captures the idea of context. However, MIS 332 is deficient in its ability to detect one-sided correla- tions (cf. Table 1), and our research indicates that asymmetric influence measures are required to prop- erly handle them (Krip£sundar, 1994). For example, it seems quite unlikely that any symmetric information measure can accurately cap- ture the co-occurrence relationship between the two words 'Paleolithic' and 'age' in the phrase 'Pale- olithic age'. The suggestion that 'age' exerts as much influence on 'Paleolithic' as vice versa seems ridicu- lous, to say the least. What is needed here is a di- rected (ie, one-sided)influence measure (DIM), some- thing that serves as a measure of influence of one word on another, rather than as a simple, symmet- ric, "co-existence probability" of two words. Table 1 illustrates how a DIM can be effective in detecting lexical and lexico-semantic associations. 3 Comparing measures of lexical influence We used a section of the Wall Street Journal (WSJ) corpus containing 102K sentences (over two million words) as the training corpus for the partial results described here. The lexicon used was a simple 30K- word superset of the vocabulary of the training cor- pus. The results shown here serve to strengthen our hypothesis that non-standard information measures are needed for the proper utilisation of linguistic context. Table 1 shows some pairs of words that exhibit differing degrees of influence on each other. It also demonstrates very effectively that one-sided information measures are much better than sym- metric measures at utilising context properly. The arrow between each pair of words in the table in- dicates the direction of influence (or flow of infor- mation). The preponderance of word-pairs that ex- hibit only one direction of significant influence (eg, 'according'---~'to') shows that no symmetric score could have captured the correlations in all of these phrases. Our formulation of directed influence is still evolv- ing. The word-pairs in Table 1 have been selected randomly from the test-set with the criterion that they scored "significantly" (ie, > 0.9) on at least one of the three measures D1, D2 and D3. The four measures (including MIS) are defined as follows: • ," P(w,w2) MIS(wlw2) = log[e(,$,)e(w2) j Dl(wl/w2) = P(w~) = #~2 D2(wl/w2) = ste~l ( w/w1~ ~ nl r" k~Cmax] "" D3(wl/w2) = ote,,O¢ ~--v-x_~--z~ ,, r~l ~''\ #Cmax] . . . . In these definitions, #wlw2 denotes the frequency of co-occurrence of the words wl and w2,1 while 1Note that the exact word order of wl and w2 is ir- relevant here. #Wl, and #w~ represent (respectively) the frequen- cies of their (unconditional) occurrence. #Cmax a~--! max(@wlw2) is defined to be the Wlt~2 maximum co-occurrence frequency in the corpus, and appears to be a better normalisation factor than the size of the corpus itself. The definition of MIS implicitly incorporates the size of the corpus, since it has two P0 terms in the denominator, and only one in the numerator. The DIM's, on the other hand, have balanced fractions. Therefore, we have not included a log-term in the definitions of D1, D2, and D3 above. 
D1 is a straightforward estimation of the condi- tional probability of co-occurrence. It forms a base- line for performance evaluations, but is prone to sparse data problems (Dunning, 1993). The step() functions in D2 and D3 represent two attempts at minimising such errors. These functions are piecewise-linear mappings of the normalised co- occurrence frequency, and are used as scaling factors. Their effect is apparent in Table 1, especially in the bottom third of the table, where the low frequency of the primer pushes D3 down to insignificant levels. The metrics D2 and D3 can and should be nor- mMised, perhaps to the 0-1 range, in order to fa- cilitate integration with other metrics such as the recogniser's confidence value. Similarly, the lack of normalisation of MIS hampers direct comparison of scores with the three DIM's. 4 Discussion Of the several different types of word-level associ- ations, lexical and lexico-semantic associations are among the most significant local associations. Lexi- cal (or associative) context is characterised by rigid word order, and usually implies that the primer and the primed together act as one lexical unit. Lexico- semantic associations are exemplified by phrasal verbs (eg, 'fix up'), and are characterised by morphological complexity in the verb part and spatial flexibility in the phrase as a whole. It is noteworthy that all the three DIM's capture the notions of lexical (ie, fixed) and lexico-semantic associations in one formula (albeit to differing de- grees of success). Thus we have 'staff' and 're- porter' influencing each other almost equally, while the asymmetric influence on 'in' from its right con- text ('addition') is also detected by the DIM's. It is our contention that symmetric measures constrain the re-ranking/proposing process signifi- cantly, since they are essentially blind to a signif- icant fraction (perhaps more than ha/f) of all co- occurrence phenomena in natural language. 5 Summary and Future Work The preliminary results described in this work es- tablish clearly that non-standard metrics of lexical 333 Word-pa~r WL WR new *-- yor-b-~ according --* to staff *- reporter staff --* reporter new ~ york on -* the vice --* president at *-- least compared --* with -~6927,2697,2338"~ 5.5510.8663.4633.463 (1084, 54580, 1083) II 3"62910"99912.99612.996 II (1613, 1205, 1157) II 7.111 10.96012.87912.879 II (1613, 1205, 1157) II 7"11101"71712"15012-150 II (6927, 2697, 2338) II 5.551 10.3371 1.3481 1.348 II (13025, 116356, 3483) [I 1-554 I 0-267 I 1-3341 1.334 II (1017, 2678, 784) II 6"38410"7701 1.5401 1.285 II (11158, 795, 665) II 5.03910.8361 1.6711 1.247 II 585, 11362, 551) Table 1: Asymmetry in co-occurrence relationships: Word-pairs with "significant" influence in either direction have been selected randomly from the test-set. Note that very few of these pairs exhibit comparable influence on each other. The arrows indicate the direction of lexical influence (or information flow). A DIM score of 1 or more implies a significant association, whereas an MIS below 4 is considered a chance association. influence bear much promise. 
In fact, what we re- ally need is a generalised information score, a measure that takes into account several factors, such as: • directionality in correlation • multiple words participating in a lexical rela- tionship • different (morphological) forms of words, and, • spatial flexibility in the components of a collo- cation The generalised information score would capture all the variations that are introduced by the above fac- tors, and allow for the variants so as to reflect a "normalised" measure of contextual influence. We have also been working with experimental measures which attach higher significance to the collocation frequency, (measures which, in essence, "trust" the recogniser more often). Our future work will involve bringing these various factors together into one integrated formalism. References Max Coltheart, editor. 1987. Attention and Perfor- mance XII: The Psychology of Reading. Lawrence Erlbaum. Ted Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computa- tional Linguistics, 19:1:61-74. LJ Evett, CJ Wells, FG Keenan, T Rose, and Pd Whitrow. 1991. Using linguistic information to aid handwriting recognition. Proceedings of the International Workshop on Frontiers in Hand- writing Recognition, pages 303-311. WilliamA Gale and Kenneth W Church. 1990. Poor estimates of context are worse than none. In Pro- ceedings of the DARPA Speech and Natural Lan- guage Workshop, pages 283-287. Paul L Garvin. 1972. On Machine Translation. Mouton. Donald ttindle and Mats Rooth. 1993. Structural ambiguity and lexical relations. Computational Linguistics, 19:1:103-120. V Kriphsundar. 1994. Drawing on Linguistic Con- text to Resolve Ambiguities oR How to imrove re- congition in noisy domains. Ph.D. thesis, Com- puter Science, SUNY@Buffalo. (proposal). James L McClelland. 1987. The case for interaction- ism in language processing. In (Coltheart, 1987). Lawrence Erlbaum. Ronald Rosenfeld. 1994. A hybrid approach to adaptive statistical language modeling. Proceed- ings of the ARPA workshop on human language technology, pages 76-81. Frank Smadja. 1991. Macrocoding the lexicon with co-occurrence knowledge, in (Zernik, 1991), pages 165-190. RShi .ni K Srihari and Charlotte M Baltus. 1993. Use of language models in on-line recognition of hand- written sentences. Proceedings of the Third Inter- national Workshop on Frontiers in Handwriting Recognition (IWFIIR III). SN Srihari, JJ IIull, and R Chaudhari. 1983. In- tegrating diverse knowledge sources in text recog- nition. ACM Transactions on Office Information Systems, 1:1:68-87. RM Warren. 1970. Perceptual restoration of missing speech sounds. Science, 167:392-393. Uriel Weinreich. 1980. On Semantics. University of Pennsylvania Press. Uri Zernik, editor. 1991. Lezical Acquisition: Ex- ploiting On-line Resources to Build a Lexicon. Lawrence Erlbaum. 334 | 1995 | 54 |
Acquisition of a Lexicon from Semantic Representations of Sentences* Cynthia A. Thompson Department of Computer Sciences University of Texas 2.124 Taylor Hall Austin, TX 78712 [email protected] Abstract A system, WOLFIE, that acquires a map- ping of words to their semantic representa- tion is presented and a preliminary evalua- tion is performed. Tree least general gener- alizations (TLGGs) of the representations of input sentences are performed to assist in determining the representations of indi- vidual words in the sentences. The best guess for a meaning of a word is the TLGG which overlaps with the highest percentage of sentence representations in which that word appears. Some promising experimen- tal results on a non-artificial data set are presented. 1 Introduction Computer language learning is an area of much po- tential and recent research. One goal is to learn to map surface sentences to a deeper semantic mean- ing. In the long term, we would like to communi- cate with computers as easily as we do with peo- ple. Learning word meanings is an important step in this direction. Some other approaches to the lexi- cal acquisition problem depend on knowledge of syn- tax to assist in lexical learning (Berwick and Pilato, 1987). Also, most of these have not demonstrated the ability to tie in to the rest of a language learning system (Hastings and Lytinen, 1994; Kazman, 1990; Siskind, 1994). Finally, unnatural data is sometimes needed (Siskind, 1994). We present a lexicM acquisition system that learns a mapping of words to their semantic representa- tion, and which overcomes the above problems. Our system, WOLFIE (WOrd Learning From Interpreted Examples), learns this mapping from training ex- amples consisting of sentences paired with their se- mantic representation. The representation used here is based on Conceptual Dependency (CD) (Schank, 1975). The results of our system can be used to *This research was supported by the National Science Foundation under grant IRI-9310819 assist a larger language acquisition system; in par- ticular, we use the results as part of the input to CHILL (Zelle and Mooney, 1993). CHILL learns to parse sentences into case-role representations by an- Myzing a sample of sentence/case-role pairings. By extending the representation of each word to a CD representation, the problem faced by CHILL is made more difficult. Our hypothesis is that the output from WOLFIE can ease the difficulty. In the long run, a system such as WOLFIE could be used to help learn to process natural language queries and translate them into a database query language. Also, WOLFIE could possibly assist in translation from one natural language to another. 2 Problem Definition and Algorithm 2.1 The Lexical Learning Problem Given: A set of sentences, S paired with represen- tations, R. Find: A pairing of a subset of the words, W in S with representations of those words. Some sentences can have multiple representations because of ambiguity, both at the word and sentence level. The representations for a word are formed from subsets of the representations of input sen- tences in which that word occurred. This assumes that a representation for some or all of the words in a sentence is contained in the representation for that sentence. This may not be true with all forms of sentence representation, but is a reasonable as- sumption. Tree least general generalizations (TLGGs) plus statistics are used together to solve the problem. 
We make no assumption that each word has a single meaning (i.e., homonymy is allowed), or that each meaning is associated with one word only (i.e., syn- onymy is allowed). Also, some words in S may not have a meaning associated with them. 2.2 Background: Tree Least General Generalizations The input to a TLGG is two trees, and the outputs returned are common subtrees of the two input trees. 335 Our trees have labels on their arcs; thus a tree with root p, one child c, and an arc label to that child 1 is denoted [p,l:c]. TLGGs are related to the LGGs of (Plotkin, 1970). Summarizing that work, the LGG of two clauses is the least general clause that subsumes both clauses. For example, given the trees [ate, agt : [person, sex: male, age : adult], pat : [food, type : cheese] ] and [hit, inst : [inst ,type :ball], pat : [person, sex : male, age : child] ] the TLGGs are [person,sex:male] and [male]. Notice that the result is not unique, since the al- gorithm searches all subtrees to find commonalities. 2.3 Algorithm Description Our approach to the lexical learning problem uses TLGGs to assist in finding the most likely mean- ing representation for a word. First, a table, T is built from the training input. Each word, W in S is entered into T, along with the representa- tions, R of the sentences W appeared in. We call this the representation set, WR. If a word occurs twice in the same sentence, the representation of that sentence is entered twice into Wn. Next, for each word, several TLGGs of pairs from WR are per- formed and entered into T. These TLGGs are the possible meaning representations for a word. For example, [person, sex :male, age : adult] is a pos- sible meaning representation for man. More than one of these TLGGs could be the correct meaning, if the word has multiple meanings in R. Also, the word may have no associated meaning representation in R. "The" plays such a role in our data set. Next, the main loop is entered, and greedy hill climbing on the best TLGG for a word is performed. A TLGG is a good candidate for a word meaning if it is part of the representation of a large percentage of sentences in which the word appears. The best word- TLGG pair in T, denoted (w, t) is the one with the highest percentage of this overlap. At each iteration, the first step is to find and add to the output this best (w,t) pair. Note that t can also be part of the representation of a large percentage of sentences in which another word appears, since we can have synonyms in our input. Second, one copy of each sentence representation that has t somewhere in it is removed from w's entry in T. The reason for this is that the meaning of w for those sentences has been learned, and we can gain no more information from those sentences. If t occurs n times in one of these sentence representations, the sentence representation is removed n times, since we add one copy of the representation to wR for each occurrence of w in a sentence. Finally, for each word E T, if word and w appear in one or more sentences together, the sentence rep- resentations in word's entry that correspond to such sentences are modified by eliminating the portion of the sentence representation that matches t, thus shortening that sentence representation for the next iteration. This prevents us from mistakenly choos- ing the same meaning for two different words in the same sentence. 
This elimination might not always succeed since w can have multiple meanings, and it might be used in a different way than that indicated by t in the sentence with both w and word in it. But if it does succeed the TLGG list for wordis modified or recomputed as needed, so as to still accurately re- flect the (now modified) sentence representations for word. Loop iteration continues until all W E T have no associated representations. 2.4 Example Let us illustrate the workings of WOLFIE with an example. Consider the following input: 1. The boy hit the window. [prop el, agt: [person, sex :m ale, age :child], pat: [obj ,type: window]] 2. The hammer hit the window. [propel,inst: [obj ,type :hammer], pat:[obj,type:window]] 3. The hammer moved. [ptrans,pat: [obj ,type :hammer]] 4. The boy ate the pasta with the cheese. [ingest, agt: [p erson,sex:m ale, age :child], pat: [food, type: past a, accomp: [food ,type :cheese]]] 5. The boy ate the pasta with the fork. [ingest,agt:[person,sex:male,age:child], pat: [food ,type :pasta] ,inst: [inst ,type :fork]] A portion of the initial T follows. The TLGGs for boy are [ingest, agt:[person, sex:male, age:child], pat:[food, type:pasta]l, [person, sex:male, age:child], [male], [child], [food, type:pasta], [food], and [pasta]. The TLGGs for pasta are the same as for boy. The TLGGs for hammer are [obj, type:hammer] and [hammer]. In the first iteration, all the above words have a TLGG which covers 100% of the sen- tence representations. For clarity, let us choose [person, sex : male, age : child] as the meaning for boy. Since each sentence representation for boy has this TLGG in it, we remove all of them, and boy's en- try will be empty. Next, since boy and pasta appear in some sentences together, we modify the sentence representations for pasta. They are now as follows: [ingest,pat:[food,type:pasta,accomp:[food,type: cheese]]] and [ingest,pat:[food,type:pasta],inst:[inst, type:fork]]. We also have to modify the TLGGs, resulting in the list: [ingest,pat:[food,type:pasta]], [food,type:pasta], [food], and [pasta]. Since all of these have 100% coverage in this example set, any of them could be chosen as the meaning representation for pasta. Again, for clarity, we choose the correct one, and the final meaning representations for these examples would be: (boy, [person, sex : male, 336 age:child] ), (pasta, [food,type :pasta] ), (hammer, [obj,type :hammer] ), (ate, [ingest] ), (fork, [inst,type:fork]), (cheese, [food, type : cheese] ), and (window, [obj, type : window]). As noted above, in this example, there are some alternatives for the meanings for pasta, and also for window and cheese. In a larger exam- ple, some of these ambiguities would be eliminated, but those remaining are an area for future research. 3 Experimental Evaluation Our hypothesis is that useful meaning representa- tions can be learned by WOLFIE. One way to test this is by examining the results by hand. Another way to test this is to use the results to assist a larger learning system. The corpus used is based on that of (McClelland and Kawamoto, 1986). That corpus is a set of 1475 sentence/case-structure pairs, produced from a set of 19 sentence templates. We modified only the case- structure portion of these pairs. There is still the basic case-structure representation, but instead of a single word for each filler, there is a semantic repre- sentation, as in the previous section. The system is implemented in prolog. 
We chose a random set of training examples, starting with 50 examples, and incrementing by 100 for each of three trials. To measure the success of the sys- tem, the percentage of correct word meanings ob- tained was measured. This climbed to 94% correct after 450 examples, then went down to around 83% thereafter, with training going up to 650 examples. In one case, in going from 350 to 450 training ex- amples, the number of word-meaning pairs learned went down by ten while the accuracy went up by 31%. This happened, in part, because the incor- rect pair (broke, [inst]) was hypothesized early in the loop with 350 examples, causing many of the instruments to have an incomplete representation, such as (hatchet, [hatchet] ), instead of the cor- rect (hatchet, [inst,type:hatchet] ). This er- ror was not made in cases where a higher percent of the correct word meanings were learned. It is an area for future research to discover why this error is being made in some cases but not in others. We have only preliminary results on the task of using WOLFIE to assist CHILL. Those results in- dicate that CHILL, without WOLFIE's help cannot learn to parse sentences into the deeper semantic representation, but that with 450 examples, assisted by WOLFIE, it can learn parse up to 55% correct on a testing set. 4 Future Work This research is still in its early stages. Many ex- tensions and further tests would be useful. More ex- tensive testing with CHILL is needed, including using larger training sets to improve the results. We would also like to get results on a larger, real world data set. Currently, there is no interaction between lex- ical and syntactic/parsing acquisition, which could be an area for exploration. For example, just learn- ing (ate, [ingest] ) does not tell us about the case roles of ate (i.e., agent and optional patient), but this information would help CHILL with its learning process. Many acquisition processes are more incre- mental than our system. This is also an area of cur- rent research. In the longer term, there are problems such as adding the ability to: acquire one definition for multiple morphological forms of a word; work with an already existing lexicon, to revise mistakes and add new entries; map a multi-word phrase to one meaning; and many more. Finally, we have not tested the system on noisy input. 5 Conclusion In conclusion, we have described a new system for lexical acquisition. We use a novel approach to learn semantic representations for words. Though in its early stages, this approach shows promise for many future applications, including assisting another sys- tem in learning to understand entire sentences. References Berwick, Robert C., and Pilato, S. (1987). Learning syntax by automata induction. Machine Learning, 2(1):9-38. Hastings, Peter, and Lytinen, Steven (1994). The ups and downs of lexical acquisition. In Proceedings of the Twelfth National Conference on Artificial Intelligence, 754-759. Kazman, Rick (1990). Babel: A psychologically plausi- ble cross-linguistic model of lexical and syntactic ac- quisition. In Proceedings of the Eighth International Workshop on Machine Learning, 75-79. Evanston, IL. McClelland, James L., and Kawamoto, A. H. (1986). Mechanisms of sentence processing: Assigning roles to constituents of sentences. In Rumelhart, D. E., and McClelland, J. L., editors, Parallel Distributed Processing, Vol. II, 318-362. Cambridge, MA: MIT Press. Plotkin, Gordon D. (1970). A note on inductive gener- alization. 
In Meltzer, B., and Michie, D., editors, Machine Intelligence (Vol. 5). New York: Elsevier North-Holland. Plotkin, Gordon D. (1970). A note on inductive generalization. In Meltzer, B., and Michie, D., editors, Machine Intelligence (Vol. 5). New York: Elsevier North-Holland. Schank, Roger C. (1975). Conceptual Information Processing. Oxford: North-Holland. Siskind, Jeffrey M. (1994). Lexical acquisition in the presence of noise and homonymy. In Proceedings of the Twelfth National Conference on Artificial Intelligence, 760-766. Zelle, John M., and Mooney, Raymond J. (1993). Learning semantic grammars with constructive inductive logic programming. In Proceedings of the Eleventh National Conference on Artificial Intelligence, 817-822. Washington, D.C.
A Minimalist Head-Corner Parser

Mettina Veenstra, vakgroep Alfa-informatica, University of Groningen, Postbus 716, NL-9700 AS Groningen, [email protected]

Abstract

In the Minimalist Program (Chomsky, 1992) it is assumed that there are different types of projections (lexical and functional) and therefore different types of heads. This paper explains why functional heads are not treated as head-corners by the minimalist head-corner parser described here.

1 Introduction

In the Minimalist Program (Chomsky, 1992) 'surface' word order is determined in a very indirect way. Word order is no longer a property of phrase structure, because phrase structure is universal. Furthermore movements are universal. This implies in principle that when we parse comparable sentences in different languages, we always build the same tree. Word order differences are distinguished by the choice of the moment of Spell Out (SO). SO is the point in the derivation where instructions are given to an interface level called PF (Phonetic Form). Thus SO yields what was formerly called surface structure. SO determines in which position in the tree a certain constituent becomes visible and consequently it determines the relative order of the constituents of a sentence. This is illustrated in the simplified tree in figure 1. Note that each cluster of co-indexed positions (i.e. a chain) in the figure has only one visible constituent. This is the position in which the constituent is represented at the moment of SO. This moment is not universal. The verb chain of our English example gives instructions to the interface level PF when the verb is adjoined to AgrS (head of the agreement phrase of the subject). The verb chain of a comparable sentence in Dutch 'spells out' when the verb is in V. Thus in Dutch subordinate clauses the movement of the verb to AgrO (head of the agreement phrase of the object) and subsequently AgrS happens 'covertly'. The motivation for covert movement can be found in (Chomsky, 1992, pages 38-40).

[Figure 1: A simplified tree for a transitive subordinate clause in English]

In the following sections we will show that the structure building operations of the Minimalist Program are bidirectional operations. Because head-corner parsing is a bidirectional strategy, this type of parser seems more favorable for minimalist parsing than the usual left-to-right parsing algorithms.

2 GT and Move-α

The central operations of the Minimalist Program are Generalized Transformation (GT) and Move-α. GT is a structure-building operation that builds trees in a bottom-up way as is illustrated in figure 2.
It is assumed that movement is always left- ward (Kayne, 1994) and that in the universal trees of the Minimalist Program heads and specifiers, which are the only positions to move to, are always to the left of the projection line. These two assumptions in combination with the fact that GT and Move-a are bottom-up operations, effect that the moved phrase marker has to be contained in the tree that was built so far 1 The tree in figure 1 illustrates different kinds of movement. In the Minimalist Program movement occurs to check features. Elements move from the lexical domain (VP) to the functional domain (e.g. AgrOP, AgrSP) to compare their features with the features that are present in the functional domain. 3 Head-corner parsing The main idea behind head-driven parsing (Kay, 1989) is that the lexical entries functioning as heads contain valuable information for the parsing process. For example, if a verb is intransitive it will not re- quire a complement, if it is transitive it will require a complement. Therefore the head is parsed before its sisters in a head-driven parser. A head-corner parser (Kay, 1989; Bouma and van Noord, 1993) is a spe- cial type of head-driven parser. Its main character- istic is that it does not work from left to right but in- stead works bidirectionally. That is, first a poten- tial head of a phrase is located and next the sisters of the head are parsed. The head can be in any po- sition in the string and its sisters can either be to the right or to the left. A head-corner parser starts the parsing process with a prediction step. This step is completed when iSee (Veenstra, 1994) for further details. a lexical head is found that is the head-corner of the goal (i.e. the type of constituent that is parsed). The head-corner relation is the reflexive and transitive closure of the head relation. A is the head of B if there is a rule with B as left hand side (LHS) and A as the head daughter on the right hand side (RHS). When a (lexical) head-corner is found an X rule is selected in which the (lexical) head is on the RHS. The sisters of the head are parsed recursively. The LHS of the rule contains the mother of the head. If this mother is a head-corner of the goal, and the mother and the goal are not equal the whole process is repeated by selecting a rule with the new head- corner (i.e. the mother of the first head-corner) on its RHS. In section 2 it is assumed that movement is invari- ably leftward and that GT and Move-a are bottom- up mechanisms. GT builds the VP before other pro- jections. Constituents of VP are moved to higher projections by Move-a, which is a special kind of GT. Suppose that the parser should consider AgrS as the head-corner of AgrSP, which accords with X- Theory. Then the head (AgrS) that should be filled with an adjoined verb by movement from AgrO (in a transitive sentence) or V (in an intransitive sen- tence) is created before AgrO and V. To avoid mov- ing constituents from a part of the tree that has not been built yet, the head-corner table for the min- imalist head-corner parser is not constructed com- pletely according to X-Theory (see (1)). (1) hc(AgrS,AgrSP), hc(V,VP). hc(AgrOP, AgrS). hc(V,V). hc(AgrO,AgrOP), hc(N,NP). hc(VP, AgrO). hc(N,~). For example, instead of AgrO, VP is the head- corner of AgrO. This solution is compatible with the Minimalist Program in the sense that in this way the tree is built up in an absolute bottom-up way (i.e. 
starting from V) so that a position that should be filled by movement is always created after the position from which the moved element comes. The head-corner table in (1) illustrates that func- tional heads like AgrO and AgrS are not processed as heads. Lexical proj_.ections like VP and NP are treated according to X-Theory. If we follow (1) in combination with the tree in figure 1 we establish the fact that the parser searches its way down to the verb as soon as possible. The top-down prediction step moves from thegoal AgrSP to AgrS to AgrOP to AgrO to VP to V and finally to the lexical head- corner V where the bottom-up process starts as the Minimalist Program requires. The head-corner parsing algorithm and the 339 structure-building operations of the Minimalist Pro- gram (GT and Move-a) have much in common. In both cases a tree is built up in a bottom-up way by starting with a head (lexical head-corner in the pars- ing algorithm, target in the structure building op- erations) and creating the sister of the head recur- sively, etc. 2 By treating only lexical heads as head- corners we achieved that our parsing algorithm com- pletely represents GT. Only for Move-a we need an extra predicate to accomplish a movement if there is a possible movement to the node that has just been created. 4 Parsing vs. Generation In section 3 we chose not to consider functional heads as head-corners. This choice was made because it allows GT and Move-a to start constructing a VP before the projections to which constituents from VP are moved are constructed. Another motivation to start with VP is that V contains information that is useful for the remainder of the structure building process. For example, if the verb is intransitive we know that V does not require a complement sister, and we know that we do not need an AgrOP on top of VP. The fact that V contains lexical information and functional heads like AgrO and AgrS do not, could be used as a justification for the fact that the latter are not head-corners. The main idea of head- driven parsing is, as was stated before, that heads contain relevant information for the parsing process, and that they therefore should be parsed before their sisters. Functional heads obtain their contents via movement of elements from positions lower in the tree. This special status makes them less useful for the parsing process. The Minimalist Program is a generation-oriented framework. Because we are dealing with parsing (as opposed to generation) in this paper there are cer- tain discrepancies between the parser and the frame- work it is based on. In the minimalist framework, lexical information belonging to a chain is available from the moment that the first position of the chain is created, because that is the moment when the lex- icon is consulted. When parsing a sentence the lexi- con is not by definition consulted at the beginning of the chain. Figure 1 shows a tree that contains traces and visible constituents. The position containing a visible constituent is the SO position of that chain. The parser consults the lexicon at the moment in which the SO position of a chain is reached. Conse- Sin the minimalist head-corner parser that is de- scribed here a head always has only one sister because minimalist trees are at most binary branching. quently, when a trace is created before SO, the fea- tures belonging to that trace are unknown. 
The features of the traces of a certain chain are known as soon as the SO position is reached, because all positions in a chain are linked. It can be concluded that the absolute bottom-up approach for the building of trees is more useful for generation than for parsing. In generation, lexical information can be used as soon as a position that is the beginning of a chain is created. In parsing we will have to wait until the SO position is reached.

In spite of this, we chose not to consider functional heads as heads in order to accomplish an absolute bottom-up process. The reason for this is that, as was mentioned before, otherwise we would be reasoning backwards with relation to movement. This could be inefficient and it is too far removed from the ideas of the minimalist framework.

5 Future Plans

The parser described here can judge the grammaticality of simple declarative transitive and intransitive sentences, and of subordinate clauses. We will extend the parser in such a way that it will cover more advanced linguistic phenomena like anaphors and wh-questions. Furthermore, other types of parsers will be built to determine if this 'lexical' head-corner parser is indeed more efficient.

6 Acknowledgements

I would like to thank Gosse Bouma, John Nerbonne, Gertjan van Noord and Jan-Wouter Zwart for their helpful comments on earlier versions of this paper.

References

Gosse Bouma and Gertjan van Noord. 1993. Head-driven parsing for lexicalist grammars: Experimental results. In 6th Meeting of the European Chapter of the Association for Computational Linguistics, Utrecht.

Noam Chomsky. 1992. A minimalist program for linguistic theory. MIT Occasional Papers in Linguistics.

Martin Kay. 1989. Head driven parsing. In Proceedings of Workshop on Parsing Technologies, Pittsburg.

Richard S. Kayne. 1994. The antisymmetry of syntax. MIT Press, Cambridge.

Mettina J.A. Veenstra. 1994. Towards a formalization of generalized transformation. In H. de Hoop, A. de Boer, and Henriette de Swart, editors, Language and Cognition, Groningen.
Robust Parsing Based on Discourse Information: Completing partial parses of ill-formed sentences on the basis of discourse information Tetsuya Nasukawa IBM Research, Tokyo Research Laboratory 1623-14, Shimotsurmna, Yamato-shi, Kanagawa-ken 242, Japan nasukawaOtrl, vnet. ibm. com Abstract In a consistent text, many words and phrases are repeatedly used in more than one sentence. When an identical phrase (a set of consecutive words) is repeated in different sentences, the constituent words of those sentences tend to be associated in identical modification patterns with identi- cal parts of speech and identical modifiee- modifier relationships. Thus, when a syntactic parser cannot parse a sentence as a unified structure, parts of speech and modifiee-modifier relationships among morphologically identical words in com- plete parses of other sentences within the same text provide useful information for obtaining partial parses of the sentence. In this paper, we describe a method for completing partial parses by maintaining consistency among morphologically identi- cal words within the same text as regards their part of speech and their modifiee- modifier relationship. The experimental results obtained by using this method with technical documents offer good prospects for improving the accuracy of sentence analysis in a broad-coverage natural lan- guage processing system such as a machine translation system. 1 Introduction In order to develop a practical natural language pro- cessing (NLP) system, it is essential to deal with ill-formed sentences that cannot be parsed correctly according to the grammar rules in the system. In this paper, an "ill-formed sentence" means one that cannot be parsed as a unified structure. A syntac- tic parser with general grammar rules is often un- able to analyze not only sentences with grammati- cal errors and ellipses, but also long sentences, ow- ing to their complexity. Thus, ill-formed sentences include not only ungrammatical sentences, but also some grammatical sentences that cannot be parsed as unified structures owing to the presence of un- known words or to a lack of completeness in the syntactic parser. In texts from a restricted domain, such as computer manuals, most sentences are gram- matically correct. However, even a well-established syntactic parser usually fails to generate a unified parsed structure for about 10 to 20 percent of all the sentences in such texts, and the failure to generate a unified parsed structure in syntactic analysis leads to a failure in the output of a NLP system. Thus, it is indispensable to establish a correct analysis for such a sentence. To handle such sentences, most previous ap- proaches apply various heuristic rules (Jensen et al., 1992; Douglas and Dale, 1992; Richardson and Braden-Harder, 1988), including • Relaxing constraints in the condition part of a grammatical rule, such as number and gender constraints • Joining partial parses by using meta rules. Either way, the output reflects the general plausibil- ity of an analysis that can be obtained from infor- mation in the sentence; however, the interpretation of a sentence depends on its discourse, and incon- sistency with recovered parses that contain different analyses of the same phrase in other sentences in the discourse often results in odd outputs of the natural language processing system. 
Starting from the viewpoint that an interpretation of a sentence must be consistent in its discourse, we worked on completing incomplete parses by using information extracted from complete parses in the discourse. The results were encouraging. Since most words in a sentence are repeatedly used in other sen- tences in the discourse, the complete parses of well- formed sentences usually provided some useful infor- mation for completing incomplete parses in the same discourse. Thus, rather than trying to enhance a syntactic parser's grammar rules in order to support ill-formed sentences, which seems to be an endless task after the parser has obtained enough coverage to parse general grammatical sentences, we treat the 39 syntactic parser as a black box and complete incom- plete parses, in the form of partially parsed chunks that a bottom-up parser outputs for ill-formed sen- tences, by using information extracted from the dis- course. In the next section, the effectiveness of using in- formation extracted from the discourse to complete syntactic analysis of ill-formed sentences. After that, we propose an algorithm for completing incomplete parses by using discourse information, and give the results of an experiment on completing incomplete parses in technical documents. 2 Discourse information for completing incomplete parses In this section, we use the word "discourse" to denote a set of sentences that forms a text con- cerning related topics. Gale (Gale et al., 1992) and Nasukawa (Nasukawa, 1993) reported that polyse- mous words within the same discourse have the same word sense with a high probability (98% accord- ing to (Gale et al., 1992),) and the results of our analysis indicate that most content words are fre- quently repeated in the discourse, as is shown in Table 1; moreover, collocation (modifier-modifiee re- lationship) patterns are also repeated frequently in the same discourse, as is shown in Figure 1. This figure reflects the analysis of structurally ambiguous phrases in a computer manual consisting of 791 con- secutive sentences for discourse sizes ranging from 10 to 791 sentences. For each structurally ambigu- ous phrase, more than one candidate collocation pat- tern was formed by associating the structurally am- biguous phrase with its candidate modifiees 1 and a collocation pattern identical with or similar to each of these candidate collocation patterns was searched for in the discourse. An identical collocation pattern is one in which both modifiee and modifier sides con- sist of words that are morphologically identical with those in the sentence being analyzed, and that stand in an identical relationship. A similar collocation pattern is one in which either the modifiee or modi- tier side has a word that is morphologically identical with the corresponding word in the sentence being analyzed, while the other has a synonym. Again, the relationship of the two sides is identical with that in the sentence being analyzed. Except in the case where all 791 sentences were referred to as a discourse, the results indicate the averages obtained by referring to each of several sample areas as a dis- course. For example, to obtain data for the case in which the size of a discourse was 20 sentences, we examined 32 areas each consisting of 20 sentences, 1 For example, in the sentence You can use the folder on the desktop, the ambiguous phrase, on the desktop, forms two candi- date collocation patterns: "use -(on)- desktop" and '%lder -(on)- desktop." 
such as the 1st sentence to the 20th, the 51st to the 70th, and the 701st to the 720th. Thus, Figure 1 indicates that a collocation pattern either identical with or similar to at least one of the candidate collo- cation patterns of a structurally ambiguous phrase was found within the discourse in more than 70% of cases, provided the discourse contained more than 300 consecutive sentences. On the assumption that this feature of words in a discourse provides a clue to improving the accuracy of sentence analysis, we conducted an experiment on sentences for which a syntactic parser generated more than one parse tree, owing to the presence of words that can be assigned to more than one part of speech, or to the presence of complicated coor- dinate structures, or for various other reasons. If the constituent words tend to be associated in iden- tical modification patterns with an identical part of speech and identical modifiee-modifier relation- ship when an identical phrase (a set of consecutive words) is repeated in different sentences within the discourse, the candidate parse that shares the most collocation patterns with other sentences in the dis- course should be selected as the correct analysis. Out of 736 consecutive sentences in a computer man- ual, the ESG parser (McCord, 1991) generated mul- tiple parses for 150 sentences. In this experiment, we divided the original 736 sentences into two texts, one a discourse of 400 sentences and the other a discourse of 336 sentences. Of the 150 sentences with multiple parses, 24 were incorrectly analyzed in all candidate parses or had identical candidate parses; we there- fore focused on the other 126 sentences. In each candidate parse of these sentences, we assigned a score for each collocation that was repeated in other sentences in the discourse (in the form of either an identical collocation or a similar collocation), and added up the collocation scores to assign a prefer- ence value to the candidate parse. Out of the 126 sentences, different preference values were assigned to candidate parses in 54 sentences, and the highest value was assigned to a correct parse in 48 (88.9%) of the 54 sentences. Thus, there is a strong tendency for identical collocations to be actually repeated in the discourse, and when an identical phrase (a set of consecutive words) is repeated in different sen- tences, their constituent words tend to be associated in identical modification patterns. Figure 2 shows the output of the PEG parser (Jensen, 1992) for the following sentence: (2.1) As you can see, you can choose from many topics to find out what information is available about the AS/400 system. This is the 53rd sentence in Chapter 6 of a computer manual (IBM, 1992), mid every word of it is repeat- edly used in other sentences in the same chapter, as shown in Table 2. For example, the 39th sentence in the same chapter contains "As you can see," as 40 Table 1: Frequency of morphologically identical words in computer manuaJs Part Freq. 
of morph, identical words Proportion of all content words of Two or more Five or more Total number of Proportion speech times (%) times (%) appearances (words) (%) Noun 90.7 76.2 99047 59.8 Verb 94.9 83.6 35622 21.5 Adjective 88.9 71.0 16941 10.2 Adverb 68.8 4993 3.0 Pronoun 85.9 98.0 94.8 8911 5.4 Total [ 91.6 78.0 165514 I -- Rate of repetition (%) 100.00 -- 80.00 -- 60.00- 40.00 - 20.00- 0.00- J 0 200 400 600 Size of discourse 800 (Number of sentences) Figure 1: Rate of finding identical or similar collocation patterns in relation to the size of the discourse shown in Figure 3. The sentences that contain some words in common with sentence (2.1) provide infor- mation that is very useful for deriving a correct parse of the sentence. Table 2 also shows that the parts of speech (POS) for most words in sentence (2.1) can be derived from words repeated in other sen- tences in the same chapter. In this table, the up- percase letters below the top sentence indicate the parts of speech that can be assigned to the words above. Underneath the candidate part of speech, re- peated phases in other sentences are presented along with the part of speech of each word in those sen- tences; thus, the first word of sentence (2.1), "As," can be a conjunction, an adverb, or a preposition, but complete parses of the 39th and 175th sentences indicate that in this discourse the word is used as a conjunction when it is used in the phrase "As you ca~ see." Furthermore, information on the dependencies among most words in sentence (2.1) can be extracted from phrases repeated in other sentences in the same chapter, as shown in Figure 4. ~ 2Thick arrows indicate dependencies extracted fl'om the discourse information. 3 Implementation 3.1 Algorithm As we showed in the previous section, information that is very useful for obtaining correct parses of ill- formed sentences is provided by complete parses of other sentences in the same discourse in cases where a parser cannot construct a parse tree by using its grammar rules. In this section, we describe an al- gorithm for completing incomplete parses by using this information. The first step of the procedure is to extract fi'om an input text discourse information that the system can refer to in the next step in order to complete in- complete parses. The procedure for extracting dis- course information is as follows: 1. Each sentence in the whole text given as a dis- course is processed by a syntactic parser. Then, except for sentences with incomplete parses and multiple parses, the results of each parse are stored as discourse information. To be pre- cise, the position and the part of speech of each instance of every lemma are stored along with the lemma's modifiee-modifier relation- ships with other content words extracted from 41 ((XXXX (COMMENT(CONJ (NP (AUXP (VERB* (PUNC ",") (VP (NP (AUXP (VERB* (PP (VP* (INFCL (NP (VERB* (AJP ? (PUNC ". 
") ) "as") (PRON* "you" ("you" (SG PL)))) (VERB* "can" ("can" PS))) "see" ("see" PS))) (PRON* "you" ("you" (SG PL)))) (VERB* "can" ("can" PS))) "choose" ("choose" PS)) (PP (PREP* "from")) (QUANP (ADJ* "many" ("many" BS))) (NOUN* "topics" ("topic" PL)))) (INFT0 (PREP* "to") ) (VERB* "find" ("find" PS)) (COMPCL (COMPL "") (VERB* "out" ("out" PS)) (NP (PRON* "vhat" ("what" (SG PL)))))) (NOUN* "information" ("information" SG))) "is" ("be" PS)) (ADJ* "available" ("available" BS)) (PP (PP (PREP* "about") ) (DETP (ADJ* "the" ("the" BS))) (NP (NOUN* "AS/400" ("AS/400" (SG PL)))) (NOUN* "system" ("system" SG))))) 0) Figure 2: Example of an incomplete parse obtained by the PEG parser As you can see, the help display provides additional information about the menu options ava/lable, as well as a list of related topics. ((DECL (SUBCL (NP (VERB* (CONJ "as") (NP (PRON* "you" ("you" (SG PL)))) (AUXP (VERB* "can" ("can" PS))) (VERB* "see" ("see" PS)) (PUNC ,,,,,)) (DETP (ADJ* "the" ("the" BS))) (NP (NOUN* "help" ("help" SG))) (NOUN* "display" ("display" SG))) "provides" ("provide" PS)) Figure 3: Thirty-ninth sentence of Chapter 6 and a part of its parse the parse data. Table 3 shows an example of such information. In this table, CFRAMEuuuuuu indicates an instance of cursor in the discourse; information on the position and on the whole sentence can be extracted from each occurrence of CFRAME. In accumulating discourse informa- tion, a score of 1.0 is awarded for each definite modifiee-modifier relationship. A lower score, 0.1, is awarded for each ambiguous modifiee- modifier relationship, since such relationships are less reliable. 2. When all the sentences have been parsed, the discourse information is used to select the most preferable candidate for sentences with multi- ple possible parses, and the data of the selected parse are added to the discourse information. After all the sentences except the ill-formed sen- tences that caused incomplete parses have provided data for use as discourse information, the parse com- pletion procedure begins. The initial data used in the completion procedure are a set of partial parses generated by a bottom-up parser as an incomplete parse tree. For example, the PEG parser generated three partial parses for sen- tence (2.1), consisting of "As you can see," "you can choose from many topics," and "to find out what information is available about the AS/400 system," as shown in Figure 2. Since partial parses are gen- erated by means of grammar rules in a parser, we decided to restructure each partial parse and unify them according to the discourse information, rather than construct the whole parse tree from discourse information. The completion procedure consists of two steps: Step 1: Inspecting each partial parse and restructuring it on the basis of the discourse information For each word in a partial parse, the part of speech and the rood,flee-modifier relationships with other words are inspected. If they are different from those 42 Table 2: Selecting POS candidates on the basis of discourse information As you can see, you can choose from many topics to find out Candidates CJ PN N N PN N V PP AJ N PP N PP for the POS AV V V V N V N of each word PP PN AV PP V As you can see, appears in sentences 39, 175. Phrases repeated within the discourse CJ PN V V you can choose appears in sentences 179. PN V V many appears in sentences 49. AJ I topics find out what appears in sentences 39, 140 , 145 , 160, 161 167 169... N to find [ appears in sentences 236. 
PP V 1 appears in sentences 32. V PP (PN) POS CJ PN V V PN V V PP AJ N PP V PP what information is available about the AS/400 system. Candidates AJ N V AJ AJ DET N N for the POS AV AV of each word PN PP Phrases what information is available about the appears in sentences 49. repeated AJ N V AJ PP DET within the the AS/400 system. discourse appears in sentences 6, 109, 115. DET N N POS PN N V AJ PP DET N N AJ N=noun PN= ~ronoun V=verb A J----adjective AV=adverb CJ=conjunction PP=preposition DET=determiner ".°, Figure 4: Constructing a dependency structure by combining dependencies existing within phrases that occur in other sentences of the same chapter in the discourse information, the partial parse is re- structured according to the discourse information. For example, Figure 5 shows an incomplete parse of the following sentence, which is the 43rd sentence in a technical text that consists of 175 sentences. 3 (3.1) Fig. 3 is an isometric view of the magazine taken from the operator's side with one car- tridge shown in an unprocessed position and two cartridges shown in a processed position. In the second partial parse, the word "side" is an- alyzed as a verb. The same word appears fifteen times in the discourse information extracted from well-formed sentences, and is analyzed as a noun ev- ery time it appears in complete parses; furthermore, there are no data on the noun "operator" modify- ing the verb "take" through the preposition "from," while there is information on the noun "operator's" modifying the noun "side," as in sentence (3.2), and on the noun "side" modifying the verb "take," as in sentence (3.3). (3.2) In the operation of the invention, an oper- ator loads cartridges into the magazine from 3This structure resulting from an incomplete parse does not indicate that the grammar of the parser lacks a rule for handling a possessive case indicated by an apos- trophe and an s. When the parser fails to generate a unified parse, it outputs partial parses in such a manner that fewer partial parses cover every word in the input sentence. 43 Table 3: Discourse information on modifiees and modifiers of a noun "cursor" Modifiers POS Relation Word (CFRAMEs preference value) Noun of display (CFRAME106873 0.1) in protected area (CFRAME106872 1) to left (CFRAME106407 0.1) right(CFRAME106338 0.1) DIRECT position (CFRAME106405 1) Adjective up line (CFRAME106295 0.1) DIRECT your (CFRAMEI06690 CFRAMEI06550 2) POS Relation Verb with up SUBJ OBJ RECIPIENT Modifiees Word (CFRAMEs preference value) play (CFRAME106928 0.1) be (CFRAMEI06927 0.1) move (CFRAME106688 1) stop (CFRAME106572 1) reach (CFRAME106346 1) move (CFRAME106248 1) move (CFRAME106402 CFKAME106335 CFRAME106292 3) confuse (CFRAME106548 1) move (CFRAME106304 1) isometric view (n) I ~"~f.':~,~ magazine (n) l taken Ivll ~:: ~o.:~o':q operator (n) ] . ......... and (conj) ] q one cartridge (n) J ~[" shown (v) l ~':!n'~q unprocessed position (n) ] two cartridges (n) I J shown (v) l ~,':!ni-- [ processed position (n) ] Figure 5: Example of an incomplete parse by the ESG parser the operator's side as seen in Figs. 3 and 12. (151st sentence) (3.3) Fig. 4 is an isometric view of the magazine taken from the machine side with one cartridge shown in the unprocessed position and two car- tridges shown in the processed position. (44th sentence) Therefore, these two partial parses are restructured by changing the part of speech of the word "side" to noun, and the modifiee of the noun "operator" to otric view (n)J ~.~ :~'f.':~.~ magazine (n)l i from ! 
" ~ operator (n)] ..! ...... with [ and (conj) ] I one cartridge (n)] ho.n,v,J -4:.u -Z-oce,sed, pos,,onCn)] two cartridges (n) J # ~,~ shown (v)J ~:!n:}--[ processed position (n) ] Figure 6: Example of a completed parse the noun "side," while at the same time changing the modifiee of the noun "side" to the verb "take." As a result, a unifed parse is obtained, as shown in Figure 6. Step 2: Joining partial parses on the basis of the discourse information If the partial parses are not unified into a single structure in the previous step, they are joined to- gether on the basis of the discourse information until a unified parse is obtained. 44 Partial parses are joined as follows: First, the possibility of joining the first two partiM parses is examined, then, either the unification of the first two parses or the second parse is examined to determine whether it can be joined to the third parse, then the examination moves to the next parse, and so on. Two partial parses are joined if the root (head node) of either parse tree can modify a node in the other parse without crossing the modification of other nodes. To examine the possibility of modification, dis- course information is applied at three different lev- els. First, for a candidate modifier and modifiee, an identical pattern containing the modifier word and the modifiee word in the same part of speech and in the same relationship is searched for in the discourse information. Next, if there is no identi- cal pattern, a modification pattern with a synonym (Collins, 1984) of the node on one side is searched for in the discourse information. Then, if this also fails, a modification pattern containing a word that has the same part of speech as the word on one side of the node is searched for. Since the discourse information consists of mod- ification patterns extracted from complete parses, it reflects the grammar rules of the parser, and a matching pattern with a part of speech rather than an actual word on one side can be regarded as a relaxation rule, in the sense that syntactic and se- mantic constraints are less restrictive than the cor- responding grammar rule in the parser. These matching conditions at different levels are applied in such a manner that partial parses are joined through the most preferable nodes. 3.2 Results We have implemented this method on an English-to- Japanese machine translation system called Shalt2 (Takeda et al., 1992), and conducted experiments to evaluate the effectiveness of this method. Ta- ble 4 gives the result of our experiments on two technical documents of different kinds, one a patent document (text 1), and the other a computer man- ual (text 2). Since text 1 contained longer and more complex sentences thml text 2, our ESG parser failed to generate unified parses more often in text 1; on the other hand, the frequency of morpholog- ically identical words and collocation patterns was higher in text 1, and our method was more effec- tive in text 1. In both texts, the discourse infor- mation provided enough information to unify par- tial parses of an incomplete parse in more than half of the cases. However, the resulting unified parses were not always correct. Since sentences with in- complete parses are usually quite long and contain complicated structures, it is hard to obtain a per- fect analysis for those sentences. 
Thus, in order to evaluate the improvement in the output translation rather than the improvement in the rate of success in syntactic analysis, in which only perfect analy- ses are counted, we compared output translations generated with and without the application of our method. When our method was not applied, partial parses of an incomplete parse were joined by means of some heuristic rules such as the one that joins a partial parse with "NP" ill its root node to a partial parse with "VP" in its root node, and the root node of the second partial parse was joined to the last node of the first partial parse by default. When the discourse information did not provide enough infor- mation to unify partial parses with the application of our method, the heuristic rules were applied. In such cases the default rule of joining the root node of the second partial parse to the last node of the first partial parse was mostly applied, since the least re- strictive matching patterns in our method were sim- ilar to the heuristic rules. Thus, the system gen- erated a unified parse for each sentence regardless of the discourse information, and we compared the output translations generated with and without the application of our method. The results are shown in Table 4. The translations were compared by check- ing how well the output Japanese sentence conveyed the meaning of the input English sentence. Since most unified parses contained various errors, such as incorrect modification patterns and incorrect parts of speech assigned to some words, fewer errors gen- erally resulted in better translations, but incorrect parts of speech resulted in worse translations. 4 Conclusion We have proposed a method for completing partial parses of ill-formed sentences on the basis of informa- tion extracted from complete parses of well-formed sentences in the discourse. Our approach to han- dling ill-formed sentences is fundamentally different from previous ones in that it reanalyzes the part of speech and modifiee-modifier relationships of each word in an ill-formed sentence by using information extracted from analyses of other sentences in the same text, thus, attempting to generate the analy- sis most appropriate to the discourse. The results of our experiments show the effectiveness of this method; moreover, implementation of this method on a machine translation system improved the accu- racy of its translation. Since this method has a sim- ple framework that does not require any extra knowl- edge resources or inference mechanisms, it is robust and suitable for a practical natural language pro- cessing system. Furthermore, in terms of the turn- around time (TAT) of the whole translation pro- cedure, the improvement in the parses achieved by using this method along with other disambiguation methods involving discourse information, as shown in another paper (Nasukawa, 1995), shortened the TAT in the late stages of the translation procedure, 45 Table 4: Results of completing incomplete parses on the basis of discourse information Text i Text 2 Number of sentences in discourse 175 354 Incomplete parses 32 31 Unified into a single parse 18 (56.3%) 17 (54.8%) Improvement in translation Better Even 10 7 Worse 1 3 Partially joined or restructured '" Improvement Better in Even translation Worse 12 (37.5%) 8 (25.8%) 4 2 7 3 1 3 Not changed 2 (6.3%) 6 (19.4%) and compensated for the extra TAT required as a result of using the discourse information, provided the size of the discourse was kept to between 100 and 300 sentences. 
In this paper, the term "discourse" is used as a set of words in a text together with the usage of each of those words in that text - namely, a part of speech and modifiee-modifier relationships with other words. The basic idea of our method is to improve the accuracy of sentence analysis simply by maintaining consistency in the usage of morphologically identical words within the same text. Thus, the effectiveness of this method is highly dependent on the source text, since it presupposes that morphologically identical words are likely to be repeated in the same text. However, the results have been encouraging at least with technical documents such as computer manuals, where words with the same lemma are frequently repeated in a small area of text. Moreover, our method improves the translation accuracy, especially for frequently repeated phrases, which are usually considered to be important, and leads to an improvement in the overall accuracy of the natural language processing system.

Acknowledgements

I would like to thank Michael McDonald for invaluable help in proofreading this paper. I would also like to thank Taijiro Tsutsumi, Masayuki Morohashi, Koichi Takeda, Hiroshi Maruyama, Hiroshi Nomiyama, Hideo Watanabe, Shiho Ogino, and the anonymous reviewers for their comments and suggestions.

References

Douglas, S. and Dale, R. 1992. Towards Robust PATR. In Proceedings of COLING-92.

Gale, W.A., Church, K.W., and Yarowsky, D. 1992. One Sense per Discourse. In Proceedings of the 4th DARPA Speech and Natural Language Workshop.

Jensen, K., Heidorn, G.E., Miller, L.A. and Ravin, Y. 1983. Parse Fitting and Prose Fixing: Getting a Hold on Ill-Formedness. Computational Linguistics, Vol. 9, Nos. 3-4.

Jensen, K. 1992. PEG: The PLNLP English Grammar. Natural Language Processing: The PLNLP Approach, K. Jensen, G. Heidorn, and S. Richardson, eds., Boston, Mass.: Kluwer Academic Publishers.

McCord, M. 1991. The Slot Grammar System. IBM Research Report, RC17313.

Nasukawa, T. 1993. Discourse Constraint in Computer Manuals. In Proceedings of TMI-93.

Nasukawa, T. 1995. Shallow and Robust Context Processing for a Practical MT System. To appear in Proceedings of IJCAI-95 Workshop on "Context in Natural Language Processing."

Richardson, S.D. and Braden-Harder, L.C. 1988. The Experience of Developing a Large-Scale Natural Language Text Processing System: CRITIQUE. In Proceedings of ANLP-88.

Takeda, K., Uramoto, N., Nasukawa, T., and Tsutsumi, T. 1992. Shalt2 - A Symmetric Machine Translation System with Conceptual Transfer. In Proceedings of COLING-92.

IBM 1992. IBM Application System/400 New User's Guide Version 2. IBM Corp.

COLLINS 1984. The New Collins Thesaurus. Collins Publishers, Glasgow.
Corpus Statistics Meet the Noun Compound: Some Empirical Results Mark Lauer Microsoft Institute 65 Epping Road, North Ryde NSW 2113 Australia t-markl©microsoft, com Abstract A variety of statistical methods for noun compound anMysis are implemented and compared. The results support two main conclusions. First, the use of conceptual association not only enables a broad cove- rage, but also improves the accuracy. Se- cond, an analysis model based on depen- dency grammar is substantially more accu- rate than one based on deepest constitu- ents, even though the latter is more preva- lent in the literature. 1 Background 1.1 Compound Nouns If parsing is taken to be the first step in taming the natural language understanding task, then broad co- verage NLP remains a jungle inhabited by wild be- asts. For instance, parsing noun compounds appears to require detailed world knowledge that is unavaila- ble outside a limited domain (Sparek Jones, 1983). Yet, far from being an obscure, endangered species, the noun compound is flourishing in modern lan- guage. It has already made five appearances in this paragraph and at least one diachronic study shows a veritable population explosion (Leonard, 1984). While substantial work on noun compounds exists in both linguistics (e.g. Levi, 1978; Ryder, 1994) and computational linguistics (Finin, 1980; McDonald, 1982; Isabelle, 1984), techniques suitable for broad coverage parsing remain unavailable. This paper ex- plores the application of corpus statistics (Charniak, 1993) to noun compound parsing (other computa- tional problems are addressed in Arens el al, 1987; Vanderwende, 1993 and Sproat, 1994). The task is illustrated in example 1: Example 1 (a) [womanN [aidN workerN]] (b) [[hydrogenN ionN] exchangeN] The parses assigned to these two compounds dif- fer, even though the sequence of parts of speech are identical. The problem is analogous to the prepo- sitional phrase attachment task explored in Hindle and Rooth (1993). The approach they propose in- volves computing lexical associations from a corpus and using these to select the correct parse. A similar architecture may be applied to noun compounds. In the experiments below the accuracy of such a system is measured. Comparisons are made across five dimensions: • Each of two analysis models are applied: adja- cency and dependency. • Each of a range of training schemes are em- ployed. • Results are computed with and without tuning factors suggested in the literature. • Each of two parameterisations are used: asso- ciations between words and associations bet- ween concepts. • Results are collected with and without machine tagging of the corpus. 1.2 Training Schemes While Hindle and Rooth (1993) use a partial par- ser to acquire training data, such machinery appears unnecessary for noun compounds. Brent (1993) has proposed the use of simple word patterns for the ac- quisition of verb subcategorisation information. An analogous approach to compounds is used in Lauer (1994) and constitutes one scheme evaluated below. While such patterns produce false training examp- les, the resulting noise often only introduces minor distortions. A more liberal alternative is the use of a co- occurrence window. Yarowsky (1992) uses a fixed 100 word window to collect information used for sense disambiguation. Similarly, Smadja (1993) uses a six content word window to extract significant col- locations. A range of windowed training schemes are employed below. 
Importantly, the use of a window provides a natural means of trading off the amount of data against its quality. When data sparseness un- dermines the system accuracy, a wider window may 47 admit a sufficient volume of extra accurate data to outweigh the additional noise. 1.3 Noun Compound Analysis There are at least four existing corpus-based al- gorithms proposed for syntactically analysing noun compounds. Only two of these have been subjected to evaluation, and in each case, no comparison to any of the other three was performed. In fact all au- thors appear unaware of the other three proposals. I will therefore briefly describe these algorithms. Three of the algorithms use what I will call the ADJACENCY MODEL, an analysis procedure that goes back to Marcus (1980, p253). Therein, the proce- dure is stated in terms of calls to an oracle which can determine if a noun compound is acceptable. It is reproduced here for reference: Given three nouns nl, n2 and nz: • If either [nl n2] or In2 n~] is not semantically acceptable then build the alternative structure; • otherwise, if [n2 n3] is semantically preferable to [nl n2] then build In2 nz]; • otherwise, build [nl n2]. Only more recently has it been suggested that cor- pus statistics might provide the oracle, and this idea is the basis of the three algorithms which use the adjacency model. The simplest of these is repor- ted in Pustejovsky et al (1993). Given a three word compound, a search is conducted elsewhere in the corpus for each of the two possible subcomponents. Whichever is found is then chosen as the more closely bracketed pair. For example, when backup compiler disk is encountered, the analysis will be: Example 2 (a) [backupN [compilerN diskN]] when compiler disk appears elsewhere (b) [[backupN compilerN] diskN] when backup compiler appears elsewhere Since this is proposed merely as a rough heuristic, it is not stated what the outcome is to be if neither or both subcomponents appear. Nor is there any evaluation of the algorithm. The proposal of Liberman and Sproat (1992) is more sophisticated and allows for the frequency of the words in the compound. Their proposal invol- ves comparing the mutual information between the two pairs of adjacent words and bracketing together whichever pair exhibits the highest. Again, there is no evaluation of the method other than a demon- stration that four examples work correctly. The third proposal based on the adjacency model appears in Resnik (1993) and is rather more complex again. The SELECTIONAL ASSOCIATION between a predicate and a word is defined based on the con- tribution of the word to the conditional entropy of the predicate. The association between each pair of words in the compound is then computed by ta- king the maximum selectional association from all possible ways of regarding the pair as predicate and argument. Whilst this association metric is compli- cated, the decision procedure still follows the out- line devised by Marcus (1980) above. Resnik (1993) used unambiguous noun compounds from the parsed Wall Stree~ Journal (WSJ) corpus to estimate the association ~alues and analysed a test set of around 160 compounds. After some tuning, the accuracy was about 73%, as compared with a baseline of 64% achieved by always bracketing the first two nouns together. The fourth algorithm, first described in Lauer (1994), differs in one striking manner from the other three. It uses what I will call the DEPENDENCY MO- DEL. 
This model utilises the following procedure when given three nouns at, n2 and n3: • Determine how acceptable the structures [nl n2] and [nl n3] are; • if the latter is more acceptable, build [n2 nz] first; • otherwise, build In1 rig.] first. Figure 1 shows a graphical comparison of the two analysis models. In Lauer (1994), the degree of acceptability is again provided by statistical measures over a cor- pus. The metric used is a mutual information-like measure based on probabilities of modification rela- tionships. This is derived from the idea that parse trees capture the structure of semantic relationships within a noun compound. 1 The dependency model attempts to choose a parse which makes the resulting relationships as accepta- ble as possible. For example, when backup compiler disk is encountered, the analysis will be: Example 3 (a) [backupN [compilerN diskN]] when backup disk is more acceptable (b) [[backupN compilerN] diskN] when backup compiler is more acceptable I claim that the dependency model makes more intuitive sense for the following reason. Consider the compound calcium ion exchange, which is typi- cally left-branching (that is, the first two words are bracketed together). There does not seem to be any reason why calcium ion should be any more frequent than ion exchange. Both are plausible compounds and regardless of the bracketing, ions are the object of an exchange. Instead, the correct parse depends on whether calcium characterises the ions or media- tes the exchange. Another significant difference between the models is the predictions they make about the proportion 1Lauer and Dras (1994) give a formal construction motivating the algorithm given in Lauer (1994). 48 L N2 t R Adjacency N3 t Prefer left-branching ig L is more acceptable than R L N1 N2 N3 t t R Dependency Figure 1: Two analysis models and the associations they compare of left and right-branching compounds. Lauer and Dras (1994) show that under a dependency mo- del, left-branching compounds should occur twice as often as right-branching compounds (that is two- thirds of the time). In the test set used here and in that of Resnik (1993), the proportion of left- branching compounds is 67% and 64% respectively. In contrast, the adjacency model appears to predict a proportion of 50%. The dependency model has also been proposed by Kobayasi et al (1994) for analysing Japanese noun compounds, apparently independently. Using a cor- pus to acquire associations, they bracket sequences of Kanji with lengths four to six (roughly equiva- lent to two or three words). A simple calculation shows that using their own preprocessing hueristics to guess a bracketing provides a higher accuracy on their test set than their statistical model does. This renders their experiment inconclusive. 2 Method 2.1 Extracting a Test Set A test set of syntactically ambiguous noun com- pounds was extracted from our 8 million word Gro- lier's encyclopedia corpus in the following way. 2 Be- cause the corpus is not tagged or parsed, a some- what conservative strategy of looking for unambi- guous sequences of nouns was used. To distinguish nouns from other words, the University of Penn- sylvania morphological analyser (described in Karp et al, 1992) was used to generate the set of words that can only be used as nouns (I shall henceforth call this set AZ). All consecutive sequences of these words were extracted, and the three word sequences used to form the test set. 
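A minimal sketch of this extraction step is given below (illustrative only; noun_only stands in for the set of words that the morphological analyser licenses only as nouns, and the corpus is assumed to be a flat list of tokens; runs of length exactly three are kept as test triples):

```python
def noun_runs(tokens, noun_only):
    """Yield maximal runs of consecutive words that can only be nouns."""
    run = []
    for w in tokens:
        if w.lower() in noun_only:
            run.append(w)
        else:
            if run:
                yield run
            run = []
    if run:
        yield run

def test_triples(tokens, noun_only):
    """Keep the three-word runs as syntactically ambiguous test compounds."""
    return [tuple(r) for r in noun_runs(tokens, noun_only) if len(r) == 3]
```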
For reasons made clear below, only sequences consisting entirely of words from Roget's thesaurus were retained, giving a total of 308 test triples. 3 These triples were manually analysed using as context the entire article in which they appeared. In 2We would like to thank Grolier's for permission to use this material for research purposes. 3The 1911 version of Roget's used is available on-line and is in the public domain. some cases, the sequence was not a noun compound (nouns can appear adjacent to one another across various constituent boundaries) and was marked as an error. Other compounds exhibited what Hin- die and Rooth (1993) have termed SEMANTIC INDE- TERMINACY where the two possible bracketings can- not be distinguished in the context. The remaining compounds were assigned either a left-branching or right-branching analysis. Table 1 shows the number of each kind and an example of each. Accuracy figures in all the results reported be- low were computed using only those 244 compounds which received a parse. 2.2 Conceptual Association One problem with applying lexical association to noun compounds is the enormous number of para- meters required, one for every possible pair of nouns. Not only does this require a vast amount of memory space, it creates a severe data sparseness problem since we require at least some data about each pa- rameter. Resnik and Hearst (1993) coined the term CONCEPTUAL ASSOCIATION to refer to association values computed between groups of words. By assu- ming that all words within a group behave similarly, the parameter space can be built in terms of the groups rather than in terms of the words. In this study, conceptual association is used with groups consisting of all categories from the 1911 ver- sion of Roget's thesaurus. 4 Given two thesaurus ca- tegories tl and t~, there is a parameter which re- presents the degree of acceptability of the structure [nine] where nl is a noun appearing in tl and n2 appears in t2. By the assumption that words within a group behave similarly, this is constant given the two categories. Following Lauer and Dras (1994) we can formally write this parameter as Pr(tl ~ t2) where the event tl ~ t2 denotes the modification of a noun in t2 by a noun in tl. 2.3 Training To ensure that the test set is disjoint from the trai- ning data, all occurrences of the test noun com- pounds have been removed from the training corpus. 4It contains 1043 categories. 49 Type Error Indeterminate Left-branching Right-branching Number 29 35 163 81 Proportion 9% 11% 53% 26% Example In monsoon regions rainfall does not ... Most advanced aircraft have precision navigation systems. ...escaped punishment by the Allied war crimes tribunals. Ronald Reagan, who won two landslide election victories, ... Table 1: Test set distribution Two types of training scheme are explored in this study, both unsupervised. The first employs a pat- tern that follows Pustejovsky (1993) in counting the occurrences of subcomponents. A training instance is any sequence of four words WlW2W3W 4 where wl, w4 ~ .h/and w2, w3 E A/'. Let county(n1, n2) be the number of times a sequence wlnln2w4 occurs in the training corpus with wl, w4 ~ At'. The second type uses a window to collect training instances by observing how often a pair of nouns co- occur within some fixed number of words. In this study, a variety of window sizes are used. For n > 2, let countn(nl, n2) be the number of times a sequence nlwl...wins occurs in the training corpus where i < n - 2. 
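The two counting schemes can be sketched as follows (an illustration under the assumption that tokens is the training corpus as a list of words and nouns is the noun-only set; the function names are mine, not the original implementation's):

```python
from collections import Counter

def pattern_counts(tokens, nouns):
    """count(n1, n2): n1 n2 seen as a whole compound, i.e. two nouns
    flanked by non-nouns in a four-word pattern w1 n1 n2 w4."""
    counts = Counter()
    for w1, n1, n2, w4 in zip(tokens, tokens[1:], tokens[2:], tokens[3:]):
        if w1 not in nouns and n1 in nouns and n2 in nouns and w4 not in nouns:
            counts[(n1, n2)] += 1
    return counts

def window_counts(tokens, nouns, width):
    """count_width(n1, n2): n2 follows n1 with at most width-2 intervening
    words (an asymmetric co-occurrence window of `width` words)."""
    counts = Counter()
    for i, n1 in enumerate(tokens):
        if n1 not in nouns:
            continue
        for n2 in tokens[i + 1 : i + width]:
            if n2 in nouns:
                counts[(n1, n2)] += 1
    return counts
```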
Note that windowed counts are asymmetric. In the case of a window two words wide, this yields the mutual information metric proposed by Liberman and Sproat (1992).

Using each of these different training schemes to arrive at appropriate counts it is then possible to estimate the parameters. Since these are expressed in terms of categories rather than words, it is necessary to combine the counts of words to arrive at estimates. In all cases the estimates used are:

Pr(t1 → t2) = (1/η) Σ_{w1 ∈ t1, w2 ∈ t2} count(w1, w2) / (ambig(w1) · ambig(w2))

where η = Σ_t Σ_{w1 ∈ t, w2 ∈ t2} count(w1, w2) / (ambig(w1) · ambig(w2))

Here ambig(w) is the number of categories in which w appears. It has the effect of dividing the evidence from a training instance across all possible categories for the words. The normaliser η ensures that all parameters for a head noun sum to unity.

2.4 Analysing the Test Set

Given the high level descriptions in section 1.3 it remains only to formalise the decision process used to analyse a noun compound. Each test compound presents a set of possible analyses and the goal is to choose which analysis is most likely. For three word compounds it suffices to compute the ratio of two probabilities, that of a left-branching analysis and that of a right-branching one. If this ratio is greater than unity, then the left-branching analysis is chosen. When it is less than unity, a right-branching analysis is chosen. 5 If the ratio is exactly unity, the analyser guesses left-branching, although this is fairly rare for conceptual association as shown by the experimental results below.

For the adjacency model, when the given compound is w1 w2 w3, we can estimate this ratio as:

R_adj = Σ_{ti ∈ cats(wi)} Pr(t1 → t2) / Σ_{ti ∈ cats(wi)} Pr(t2 → t3)    (1)

For the dependency model, the ratio is:

R_dep = Σ_{ti ∈ cats(wi)} Pr(t1 → t2) Pr(t2 → t3) / Σ_{ti ∈ cats(wi)} Pr(t1 → t3) Pr(t2 → t3)    (2)

In both cases, we sum over all possible categories for the words in the compound. Because the dependency model equations have two factors, they are affected more severely by data sparseness. If the probability estimate for Pr(t2 → t3) is zero for all possible categories t2 and t3 then both the numerator and the denominator will be zero. This will conceal any preference given by the parameters involving t1. In such cases, we observe that the test instance itself provides the information that the event t2 → t3 can occur and we recalculate the ratio using Pr(t2 → t3) = k for all possible categories t2, t3 where k is any non-zero constant. However, no correction is made to the probability estimates for Pr(t1 → t2) and Pr(t1 → t3) for unseen cases, thus putting the dependency model on an equal footing with the adjacency model above.

The equations presented above for the dependency model differ from those developed in Lauer and Dras (1994) in one way. There, an additional weighting factor (of 2.0) is used to favour a left-branching analysis. This arises because their construction is based on the dependency model, which predicts that left-branching analyses should occur twice as often. Also, the work reported in Lauer and Dras (1994) uses simplistic estimates of the probability of a word given its thesaurus category. The equations above assume these probabilities are uniformly constant. Section 3.2 below shows the result of making these two additions to the method.

5 If either probability estimate is zero, the other analysis is chosen. If both are zero the analysis is made as if the ratio were exactly unity.
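A sketch of this decision procedure is given below. It assumes the estimated parameters are available as a dictionary prob[(t1, t2)] (zero when unseen) and that cats(w) returns the Roget categories of a word; the fallback for an all-zero Pr(t2 → t3), the tie-breaking rule, and footnote 5 are included. This is an illustration of equations 1 and 2, not the original implementation.

    def r_adjacency(w1, w2, w3, prob, cats):
        num = sum(prob.get((t1, t2), 0.0) for t1 in cats(w1) for t2 in cats(w2))
        den = sum(prob.get((t2, t3), 0.0) for t2 in cats(w2) for t3 in cats(w3))
        return num, den

    def r_dependency(w1, w2, w3, prob, cats):
        def p23(t2, t3):
            return prob.get((t2, t3), 0.0)
        # If Pr(t2 -> t3) is zero for every category pair, the test instance itself
        # shows the event can occur, so replace it by a constant (any k cancels).
        if all(p23(t2, t3) == 0.0 for t2 in cats(w2) for t3 in cats(w3)):
            p23 = lambda t2, t3: 1.0
        num = sum(prob.get((t1, t2), 0.0) * p23(t2, t3)
                  for t1 in cats(w1) for t2 in cats(w2) for t3 in cats(w3))
        den = sum(prob.get((t1, t3), 0.0) * p23(t2, t3)
                  for t1 in cats(w1) for t2 in cats(w2) for t3 in cats(w3))
        return num, den

    def choose_analysis(num, den):
        if num == 0.0 and den == 0.0:
            return "left"        # treated as a ratio of exactly unity (guess left)
        if num == 0.0:
            return "right"       # footnote 5: pick the other analysis
        if den == 0.0:
            return "left"
        return "left" if num / den >= 1.0 else "right"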
[Figure 2: Accuracy of dependency and adjacency model for various training schemes. Accuracy (%) is plotted for the dependency model, the adjacency model and the guess-left baseline, across the pattern scheme and window widths of 2, 3, 4, 5, 10, 50 and 100 words.]

3 Results

3.1 Dependency meets Adjacency

Eight different training schemes have been used to estimate the parameters and each set of estimates used to analyse the test set under both the adjacency and the dependency model. The schemes used are:

• the pattern given in section 2.3; and
• windowed training schemes with window widths of 2, 3, 4, 5, 10, 50 and 100 words.

The accuracy on the test set for all these experiments is shown in figure 2. As can be seen, the dependency model is more accurate than the adjacency model. This is true across the whole spectrum of training schemes.

The proportion of cases in which the procedure was forced to guess, either because no data supported either analysis or because both were equally supported, is quite low. For the pattern and two-word window training schemes, the guess rate is less than 4% for both models. In the three-word window training scheme, the guess rates are less than 1%. For all larger windows, neither model is ever forced to guess.

In the case of the pattern training scheme, the difference between 68.9% for adjacency and 77.5% for dependency is statistically significant at the 5% level (p = 0.0316), demonstrating the superiority of the dependency model, at least for the compounds within Grolier's encyclopedia.

In no case do any of the windowed training schemes outperform the pattern scheme. It seems that additional instances admitted by the windowed schemes are too noisy to make an improvement.

Initial results from applying these methods to the EMA corpus have been obtained by Wilco ter Stal (1995), and support the conclusion that the dependency model is superior to the adjacency model.

3.2 Tuning

Lauer and Dras (1994) suggest two improvements to the method used above. These are:

• a factor favouring left-branching which arises from the formal dependency construction; and
• factors allowing for naive estimates of the variation in the probability of categories.

While these changes are motivated by the dependency model, I have also applied them to the adjacency model for comparison. To implement them, equations 1 and 2 must be modified to incorporate an additional factor (the naive category probability estimate) in each term of the sum, and the entire ratio must be multiplied by two. Five training schemes have been applied with these extensions. The accuracy results are shown in figure 3. For comparison, the untuned accuracy figures are shown with dotted lines. A marked improvement is observed for the adjacency model, while the dependency model is only slightly improved.

3.3 Lexical Association

To determine the difference made by conceptual association, the pattern training scheme has been retrained using lexical counts for both the dependency and adjacency model, but only for the words in the test set. If the same system were to be applied across all of N (a total of 90,000 nouns), then around 8.1 billion parameters would be required. Left-branching is favoured by a factor of two as described in the previous section, but no estimates for the category probabilities are used (these being meaningless for the lexical association method).

Accuracy and guess rates are shown in figure 4. Conceptual association outperforms lexical association, presumably because of its ability to generalise.
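For reference, the accuracy and guess-rate figures quoted throughout this section can be computed in outline as follows. A sketch, assuming a list of (w1, w2, w3, gold_label) items restricted to the 244 compounds that received a parse, and an analyse function (such as the procedures above) that also reports when it was forced to guess.

    def evaluate(test_items, analyse):
        correct = guesses = 0
        for w1, w2, w3, gold in test_items:
            label, guessed = analyse(w1, w2, w3)   # e.g. ("left", False)
            correct += (label == gold)
            guesses += guessed
        n = len(test_items)
        return correct / n, guesses / n            # accuracy, guess rate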
3.4 Using a Tagger

One problem with the training methods given in section 2.3 is the restriction of training data to nouns in N. Many nouns, especially common ones, have verbal or adjectival usages that preclude them from being in N. Yet when they occur as nouns, they still provide useful training information that the current system ignores. To test whether using tagged data would make a difference, the freely available Brill tagger (Brill, 1993) was applied to the corpus. Since no manually tagged training data is available for our corpus, the tagger's default rules were used (these rules were produced by Brill by training on the Brown corpus). This results in rather poor tagging accuracy, so it is quite possible that a manually tagged corpus would produce better results.

[Figure 3: Accuracy of tuned dependency and adjacency model for various training schemes (pattern scheme and window widths of 2, 3, 5 and 10 words).]

[Figure 4: Accuracy and Guess Rates of Lexical and Conceptual Association, for the dependency and adjacency models.]

[Figure 5: Accuracy using a tagged corpus for various training schemes, for the tagged dependency and tagged adjacency models.]

Three training schemes have been used and the tuned analysis procedures applied to the test set. Figure 5 shows the resulting accuracy, with accuracy values from figure 3 displayed with dotted lines. If anything, admitting additional training data based on the tagger introduces more noise, reducing the accuracy. However, for the pattern training scheme an improvement was made to the dependency model, producing the highest overall accuracy of 81%.

4 Conclusion

The experiments above demonstrate a number of important points. The most general of these is that even quite crude corpus statistics can provide information about the syntax of compound nouns. At the very least, this information can be applied in broad coverage parsing to assist in the control of search. I have also shown that with a corpus of moderate size it is possible to get reasonable results without using a tagger or parser by employing a customised training pattern. While using windowed co-occurrence did not help here, it is possible that under more data sparse conditions better performance could be achieved by this method.

The significance of the use of conceptual association deserves some mention. I have argued that without it a broad coverage system would be impossible. This is in contrast to previous work on conceptual association where it resulted in little improvement on a task which could already be performed. In this study, not only has the technique proved its worth by supporting generality, but through generalisation of training information it outperforms the equivalent lexical association approach given the same information.

Amongst all the comparisons performed in these experiments one stands out as exhibiting the greatest contrast. In all experiments the dependency model provides a substantial advantage over the adjacency model, even though the latter is more prevalent in proposals within the literature. This result is in accordance with the informal reasoning given in section 1.3.
The model also has the further commendation that it predicts correctly the obser- ved proportion of left-branching compounds found in two independently extracted test sets. In all, the most accurate technique achieved an ac- curacy of 81% as compared to the 67% achieved by guessing left-branching. Given the high frequency of occurrence of noun compounds in many texts, this suggests tha; the use of these techniques in proba- bilistic parsers will result in higher performance in broad coverage natural language processing. 5 Acknowledgements This work has received valuable input from people too numerous to mention. The most significant con- tributions have been made by Richard Buckland, Robert Dale and Mark Dras. I am also indebted to Vance Gledhill, Mike Johnson, Philip Resnik, Ri- chard Sproat, Wilco ter Stal, Lucy Vanderwende and Wayne Wobcke. Financial support is gratefully ack- 53 nowledged from the Microsoft Institute and the Au- stralian Government. References Arens, Y., Granacki, J. and Parker, A. 1987. Phra- sal Analysis of Long Noun Sequences. In Procee- dings of the 25th Annual Meeting of the Associa- tion for Computational Linguistics, Stanford, CA. pp59-64. Brent, Michael. 1993. From Grammar to Lexi- con: Unsupervised Learning of Lexical Syntax. In Computational Linguistics, Vol 19(2), Special Is- sue on Using Large Corpora II, pp243-62. Brill, Eric. 1993. A Corpus-based Approach to Lan- guage Learning. PhD Thesis, University of Penn- sylvania, Philadelphia, PA.. Charniak, Eugene. 1993. Statistical Language Lear- ning. MIT Press, Cambridge, MA. Finin, Tim. 1980. The Semantic Interpretation of Compound Nominals. PhD Thesis, Co-ordinated Science Laboratory, University of Illinois, Urbana, IL. Hindle, D. and Rooth, M. 1993. Structural Am- biguity and Lexical Relations. In Computational Linguistics Vol. 19(1), Special Issue on Using Large Corpora I, ppl03-20. Isabelle, Pierre. 1984. Another Look At Nominal Compounds. In Proceedings of COLING-84, Stan- ford, CA. pp509-16. Karp, D., Schabes, Y., Zaidel, M. and Egedi, D. 1992. A Freely Available Wide Coverage Mor- phological Analyzer for English. In Proceedings of COLING-92, Nantes, France, pp950-4. Kobayasi, Y., Tokunaga, T. and Tanaka, H. 1994. Analysis of Japanese Compound Nouns using Collocational Information. In Proceedings of COLING-94, Kyoto, Japan, pp865-9. Lauer, Mark. 1994. Conceptual Association for Compound Noun Analysis. In Proceedings of the 32nd Annual Meeting of the Association for Com- putational Linguistics, Student Session, Las Cru- ces, NM. pp337-9. Lauer, M. and Dras, M. 1994. A Probabilistic Mo- del of Compound Nouns. In Proceedings of the 7th Australian Joint Conference on Artificial Intelli- gence, Armidale, NSW, Australia. World Scienti- fic Press, pp474-81. Leonard, Rosemary. 1984. The Interpretation of English Noun Sequences on the Computer. North- Holland, Amsterdam. Levi, Judith. 1978. The Syntax and Semantics of Complex Nominals. Academic Press, New York. Liberman, M. and Sproat, R. 1992. The Stress and Structure of Modified Noun Phrases in English. In Sag, I. and Szabolcsi, A., editors, Lexical Mat- ters CSLI Lecture Notes No. 24. University of Chicago Press, ppl31-81. Marcus, Mit~.hell. 1980. A Theory of Syntactic Re- cognition for Natural Language. MIT Press, Cam- bridge, MA. McDonald, David B. 1982. Understanding Noun Compounds. PhD Thesis, Carnegie-Mellon Uni- versity, Pittsburgh, PA. Pustejovsky, J., Bergler, S. and Anick, P. 1993. Le- xical Semantic Techniques for Corpus Analysis. 
In Computational Linguistics Vol 19(2), Special Is- sue on Using Large Corpora II, pp331-58. Resnik, Philip. 1993. Selection and Informa- tion: A Class.Based Approach to Lexical Relati- onships. PhD dissertation, University of Pennsyl- vania, Philadelphia, PA. Resnik, P. and Hearst, M. 1993. Structural Ambi- guity and Conceptual Relations. In Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives, June 22, Ohio State University, pp58-64. Ryder, Mary Ellen. 1994. Ordered Chaos: The In- terpretation of English Noun-Noun Compounds. University of California Press Publications in Lin- guistics, Vol 123. Smadja, Frank. 1993. Retrieving Collocations from Text: Xtract. In Computational Linguistics, Vol 19(1), Special Issue on Using Large Corpora I, pp143-177. Sparck Jones, Karen. 1983. Compound Noun Interpretation Problems. In Fallside, F. and Woods, W.A., editors, Computer Speech Proces- sing. Prentice-Hall, NJ. pp363-81. Sproat, Richard. 1994. English noun-phrase accent prediction for text-to-speech. In Computer Speech and Language, Vol 8, pp79-94. Vanderwende, Lucy. 1993. SENS: The System for Evaluating Noun Sequences. In Jensen, K., Hei- dorn, G. and Richardson, S., editors, Natural Lan- guage Processing: The PLNLP Approach. Kluwer Academic, pp161-73. ter Stal, Wilco. 1995. Syntactic Disambiguation of Nominal Compounds Using Lexical and Concep- tual Association. Memorandum UT-KBS-95-002, University of Twente, Enschede, Netherlands. Yarowsky, David. 1992. Word-Sense Disambigua- tion Using Statistical Models of Roget's Catego- ries Trained on Large Corpora. In Proceedings of COLING-92, Nantes, France, pp454-60. 54 | 1995 | 7 |
DATR Theories and DATR Models Bill Keller School of Cognitive and Computing Sciences The University of Sussex Brighton, UK email: [email protected] Abstract Evans and Gazdar (Evans and Gazdar, 1989a; Evans and Gazdar, 1989b) intro- duced DATR as a simple, non-monotonic language for representing natural language lexicons. Although a number of implemen- tations of DATR exist, the full language has until now lacked an explicit, declarative se- mantics. This paper rectifies the situation by providing a mathematical semantics for DATR. We present a view of DATR as a lan- guage for defining certain kinds of partial functions by cases. The formal model pro- vides a transparent treatment of DATR's notion of global context. It is shown that DA-I'R's default mechanism can be accoun- ted for by interpreting value descriptors as families of values indexed by paths. 1 Introduction DATR was introduced by Evans and Gazdar (1989a; 1989b) as a simple, declarative language for repre- senting lexical knowledge in terms of path/value equations. The language lacks many of the con- structs found in general purpose, knowledge repre- sentation formalisms, yet it has sufficient expressive power to capture concisely the structure of lexical information at a variety of levels of linguistic des- cription. At the present time, DATR is probably the most widely-used formalism for representing natu- ral language lexicons in the natural language pro- cessing (NLP) community. There are around a do- zen different implementations of the language and large DATR lexicons have been constructed for use in a variety of applications (Cahill and Evans, 1990; Andry et al., 1992; Cahill, 1994). DATR has been applied to problems in inflectional and derivational morphology (Gazdar, 1992; Kilbury, 1992; Corbett and Fraser, 1993), lexical semantics (Kilgariff, 1993), morphonology (Cahill, 1993), prosody (Gibbon and Bleiching, 1991) and speech (Andry et al., 1992). In more recent work, the language has been used to provide a concise encoding of Lexicalised Tree Ad- joining Grammar (Evans et al., 1994; Evans et al., 1995). A primary objective in the development of DATR has been the provision of an explicit, mathematically rigorous semantics. This goal was addressed in one of the first publications on the language (Evans and Gazdar, 1989b). The definitions given there deal with a subset of DATR that includes core features of the language such as the notions of local and global inheritance and DATR's default mechanism. Howe- ver, they exclude some important and widely-used constructs, most notably string (or 'list') values and evaluable paths. Moreover, it is by no means clear that the approach can be generalized appropriately to cover these features. In particular, the formal ap- paratus introduced by Evans and Gazdar in (1989b) provides no explicit model of DATR's notion of glo- bal contexL Rather, local and global inheritance are represented by distinct semantic functions £: and G. This approach is possible only on the (overly restric- tive) assumption that DArR statements involve eit- her local or global inheritance relations, but never both. The purpose of the present paper is to remedy the deficiencies of the work described in (Evans and Gazdar, 1989b) by furnishing DATR with a trans- parent, mathematical semantics. There is a stan- dard view of DATR as a language for representing a certain class of non-monotonic inheritance networks ('semantic nets'). 
While this perspective provides an intuitive and appealing way of thinking about the structure and representation of lexical knowledge, it is less clear that it provides an accurate or particu- larly helpful picture of the DATR language itself. In fact, there are a number of constructs available in DATR that are impossible to visualize in terms of simple inheritance hierarchies. For this reason, the work described in this paper reflects a rather diffe- rent perspective on DATR, as a language for defining certain kinds of partial functions by cases. In the fol- lowing sections this viewpoint is made more precise. Section 2 presents the syntax of the DATR language and introduces the notion of a DATR theory. An 55 informal introduction to the DATR language is pro- vided, by example, in section 3. The semantics of DATR is then covered in two stages. Section 4.1 introduces DATR interepretations and describes the semantics of a restricted version of the language wit- hout defaults. The treatment of implicit information is covered in section 4.2, which provides a definition of a default model for a DATR theory. 2 DATR Theories Let NODE and ATOM be disjoint sets of symbols (the nodes and atoms respectively). Nodes are denoted by N and atoms by a. The set DESC of DATR value descriptors (or simply descriptors) is built up from the atoms and nodes as shown below. Descriptors are denoted by d. • a E DESC for any a E ATOM • For any N E NODE and dl...dn E DESC: N : (dl'"dn) E DESC "N : (dl --.dn)" E DESC "(dl "-'dn)" • DESC "N" • DESC Value descriptors are either atoms or inheritance descriptors, where an inheritance descriptor is fur- ther distinguished as either local (unquoted) or glo- bal (quoted). There is just one kind of local descrip- tor (node/path), but three kinds of global descriptor (node/path, path and node) 1 A path (al...an) is a (possibly empty) sequence of atoms enclosed in angle brackets. Paths are deno- ted by P. For N a node, P a path and a • ATOM* a (possibly empty) sequence of atoms, an equation of the form N : P = a is called an extensional sentence. Intuitively, an extensional sentence N : P = a states that the value associated with the path P at node N is a. For ¢ a (possibly empty) sequence of value descriptors, an equation of the form N : P == ¢ is called a definitional sentence. A definitional sent- ence N : P --- ¢ specifies a property of the node N, namely that the path P is associated with the value defined by the sequence of value descriptors ¢. A collection of equations can be used to specify the properties of different nodes in terms of one another, and a finite set of DATR sentences 7- is called a DATR theory. In principle, a DATR theory 7" may consist of any combination of DATR sentences, either defini- tional or extensional, but in practice, DATR theories are more restricted than this. The theory 7- is said to be definitional if it consists solely of definitional sentences and it is said to be functional if it meets the following condition: 1The syntax presented in (Evans and Gazdar, 1989a; Evans and Gazdar, 1989b) permits nodes and paths to stand as local descriptors. However, these additional forms can be viewed as conventional abbreviations, in the appropriate syntactic context, for node/path pairs N : P == ~b and N : P == ¢ E 7" implies ~b = ¢ There is a pragmatic distinction between defini- tional and extensional sentences akin to that drawn between the language used to define a database and that used to query it. 
DATR interpreters conventio- nally treat all extensional sentences as 'goal' state- ments, and evaluate them as soon as they are en- countered. Thus, it is not possible, in practice, to combine definitional and extensional sentences wi- thin a theory 2. Functionality for DATR theories, as defined above, is really a syntactic notion. Howe- ver, it approximates a deeper, semantic requirement that the nodes should correspond to (partial) func- tions from paths to values. In the remainder of this paper we will use the term (DATR) theory always in the sense functional, defi- nitional (DATR) theory. For a given DATR theory 7" and node N of 7", we write 7"/N to denote that subset of the sentences in 7" that relate to the node N. That is: T/N = {s e 7-Is = N : P == ~b} The set TIN is referred to as the definition of N (in 7-). 3 An Overview of DATR An example of (a fragment of) a DATR theory is shown in figure 1. The theory makes use of some standard abbreviatory devices that enable nodes and/or paths to be omitted in certain cases. For example, sets of sentences relating to the same node are written with the node name implicit in all but the first-given sentence in the set. Also, we write See : 0 == Verb to abbreviate the definitional sentence See: 0 == Verb : 0, and similarly else- where. The theory defines the properties of seven nodes: an abstract Verb node, nodes EnVerb, Aux and Modal, and three abstract lexemes Walk, Mow and Can. Each node is associated with a collec- tion of definitional sentences that specify values as- sociated with different paths. This specification is achieved either explicitly, or implicitly. Values given explicitly are specified either directly, by exhibiting a particular value, or indirectly, in terms of local and/or global inheritance. Implicit specification is achieved via DATR's default mechanism. For example, the definition of the Verb node gives the values of the paths (syn cat) and (syn type) directly, as verb and main, respectively. Similarly, the definition of Walk gives the value of (mor root / directly as walk. On the other hand, the value of 2It is not clear why one would wish to do this anyway, but the possibility is explicitly left open in the original definitions of (Evans and Gazdar, 1989a). 56 Verb : EnVerb : Aux : Modal : Walk : Mow : Can : (syn cat) == verb (syn type) == main (mor form)== "(mor "(syn form)")" (mor pres) == "(mor root)" (mor past) == "(mor root)" ed (mor pres part) --= "(mor root)" ing (mor pres sing three) == "(mor root)" 0 == Verb (mor past part) == "(mor root)" en 0 == Verb (syn type) == aux 0 == Aux (mor pres sing three) == "(mor root)" 0 == Verb (mor root) == walk 0 == EnVerb (mor root) --= mow 0 == Modal (mor root) == can (mor past) == could Figure 1: A DATR Theory the empty path at Walk is given indirectly, by local inheritance, as the value of the empty path at Verb. Note that in itself, this might not appear to be par- ticularly useful, since the theory does not provide an explicit value for the empty path in the definition of Verb. However, DATR's default mechanism permits any definitional sentence to be applicable not only to the path specified in its left-hand-side, but also for any rightward extension of that path for which no more specific definitional sentences exist. This means that the statement Walk : 0 == Verb : 0 actually corresponds to a class of implicit definitio- nal sentences, each obtained by extending paths on the left- and the right-hand-sides of the equation in the same manner. 
Examples include the following: Walk: (mor) == Verb: (mor) Walk: (mor form) =- Verb: (mor form) Walk : (syn cat) == Verb : (syn cat) Thus, the value associated with (syn cat) at Walk is given (implicitly) as the value of (syn cat) at Verb, which is given (explicitly) as verb. Also, the values of (mor) and (mor form), amongst many others, are inherited from Verb. In the same way, the value of (syn cat) at Mow is inherited lo- cally from EnVerb (which in turn inherits locally from Verb) and the value of (syn cat) at Can is inherited locally from Modal (which ultimately gets its value from Verb via Aux). Note however, that the following sentences do not follow by default from the specifications given at the relevant nodes: Walk: (mor root) == Verb: (mor root) Can: (mor past) == Modal: (mor past) Aux: (syn type) == Verb: (syn type) In each of the above cases, the theory provides an explicit statement about the value associated with the indicated path at the given node. As a result the default mechanism is effectively over-ridden. In order to understand the use of global (i.e. quo- ted) inheritance descriptors it is necessary to intro- duce DATR's notion of a global context. Suppose then that we wish to determine the value associated with the path (mor pres) at the node Walk. In this case, the global context will initially consist of the node/path pair Walk//mor pres). Now, by de- fault the value associated with (mor pres) at Walk is inherited locally from (mor pres) at Verb. This, in turn, inherits globally from the path (mor root). That is: Verb: (mor pres) == "(mor root)" Consequently, the required value is that associated with (mor root) at the 'global node' Walk (i.e. the node provided by the current global context), which is just walk. In a similar fashion, the value 57 Verb I nV°*Ul I I L TM Mow I Modal[ I Can I Figure 2: A Lexical Inheritance Hierarchy associated with (mor past) at Walk is obtained as walk ed (i.e. the string of atoms formed by evalua- ting the specification "(mor root)" ed in the global context Walk/(mor past)). More generally, the global context is used to fill in the missing node (path) when a global path (node) is encountered. In addition however, the evalua- tion of a global descriptor results in the global con- text being set to the new node/path pair. Thus in the preceding example, after the quoted descriptor "(mor root)" is encountered, the global context ef- fectively becomes Walk / (mor root) (i.e. the path component of the global context is altered). Note that there is a real distinction between a local inhe- ritance descriptor of the form N : P and it's global counterpart "N : P'. The former has no effect on the global context, while the latter effectively over- writes it. Finally, the definition of Verb in the theory of figure 1 illustrates a use of the 'evaluable path' con- struct: Verb: (mor form) == "(mor "(syn form)")" This states that the value of (mot form) at Verb is inherited globally from the path (mor...), where the dots represent the result of evaluating the global path "(syn form)" (i.e. the value associated with (syn form) in the prevailing global context). Eva- luable paths provide a powerful means of capturing generalizations about the structure of lexical infor- mation. 4 DATR Models To a first level of approximation, the DATR theory of figure 1 can be understood as a representation of an inheritance hierarchy (a 'semantic network') as shown in figure 2. 
In the diagram, nodes are written as labelled boxes, and arcs correspond to (local) in- heritance, or isa links. Thus, the node Can inherits from Modal which inherits from Aux which in turn is a Verb. The hierarchy provides a useful means of visualising the overall structure of the lexical know- ledge encoded by the DATR theory. However, the semantic network metaphor is of far less value as a way of thinking about the DATR language itself. Note that there is nothing inherent in DATR to en- sure that theories correspond to simple isa hierar- chies of the kind shown in the figure. What is more, the DATR language includes constructs that cannot be visualized in terms of simple networks of nodes connected by (local) inheritance links. Global inhe- ritance, for example, has a dynamic aspect which is difficult to represent in terms of static links. Simi- lar problems are presented by both string values and evaluable paths. Our conclusion is that the network metaphor is of primary value to the DATR user. In order to provide a satisfactory, formal model of how the language 'works' it is necessary to adopt a diffe- rent perspective. DATR theories can be viewed semantically as coll- ections of definitions of partial functions ('nodes' in DATR parlance) that map paths onto values. A mo- del of a DATR theory is then an assignment of func- 58 tions to node symbols that is consistent with the definitions of those nodes within the theory. This picture of DATR as a formalism for defining partial functions is complicated by two features of the lan- guage however. First, the meaning of a given node depends, in general, on the global context of inter- pretation, so that nodes do not correspond directly to mappings from paths to values, but rather to func- tions from contexts to such mappings. Second, it is necessary to provide an account of DATR's default mechanism. It will be convenient to present our ac- count of the semantics of DATR in two stages. 4.1 DATR Interpretations This section considers a restricted version of DATR without the default mechanism. Section 4.2 then shows how implicit information can be modelled by treating value descriptors as families of values in- dexed by paths. Definition 4.1 A DATR interpretation is a triple I = (U, I¢, F), where 1. U is a set; 2. ~ is a function assigning to each element of the set (U x V*) a partial funclion from (U x U*) to U*. 3. F is a valuation function assigning to each node N and atom a an element of U, such that di- stinct atoms are assigned distinct elements. Elements of the set U are denoted by u and ele- ments of U* are denoted by v. Intuitively, U* is the domain of (semantic) values/paths. Elements of the set C = (U x U*) are called contexts and denoted by c. The function t¢ can be thought of as mapping global contexts onto (partial) functions from local contexts to values. The function F is extended to paths, so that for P = (ax.-.a,~) (n > 0) we write F(P) to denote Ul ...un E U*, where ui = F(ai) for each i (1 < i < n). Intuitively, value descriptors denote elements of U* (as we shall see, this will need to be revised later in order to account for DATR's default mechanism). We associate with the interpretation I = (U, t:, F) a partial denotation function D : DESC -'-+ (C -+ U*) and write [d], to denote the meaning (value) of de- scriptor d in the global context c. The denotation function is defined as shown in figure 3. Note that an atom always denotes the same element of U, re- gardless of the context. 
By contrast, the denotation of an inheritance descriptor is, in general, sensitive to the global context c in which it appears. Note also that in the case of a global inheritance descrip- tor, the global context is effectively altered to reflect the new local context c'. The denotation function is extended to sequences of value descriptors in the ob- vious way. Thus, for ¢ =dl .. "dn (n >_ 0), we write [¢],todenotevl.-.vn E U* ifvi = [di]c (1 < i < n) is defined (and [¢], is undefined otherwise). Now, let I = (U, s, F) be an interpretation and 7" a theory. We will write [T/N]c to denote that partial function from U* to U* given by [T/N], = U {(F(P), [¢],)} N:P==~bE~T It is easy to verify that [T/N], does indeed denote a partial function (it follows from the functionality of the theory 7-). Let us also write [N], to denote that partial function from U* to U* given by [N],(v) = ~(c)(F(N),v), for all v e U*. Then, I models 7- just in case the following containment holds for each node N and context c: [N], _.D [T/N], That is, an interpretation is a model of a DATR theory just in case (for each global context) the func- tion it associates with each node respects the defini- tion of that node within the theory. 4.2 Implicit Information and Default Models The notion of a model presented in the preceding section is too liberal in that it takes no account of information implicit in a theory. For example, con- sider again the definition of the node Walk from the theory of figure 1, and repeated below. Walk: 0==Verb (mor root) == walk According to the definition of a model given previ- ously, any model of the theory of figure 1 will as- sociate with the node Walk a function from paths to values which respects the above definition. This means that for every global context c, the following containment must hold3: [Walk], ~ {(0, [Verb: 0]*), ((mor root), walk)} On the other hand, there is no guarantee that a given model will also respect the following contain- ment: [Walk]e _D {((mor), [Verb: (mor)],), ((mor root root),walk)} In fact, this containment (amongst other things) should hold. It follows 'by default' from the state- ments made about Walk that the path (mor) inhe- rits locally from Verb and that the value associated with any extension of (mor root) is walk. 3In this and subsequent examples, syntactic ob- jects (e.g.walk, (mor root)) are used to stand for their semantic counterparts under F (i.e. F(walk), F((mor root)), respectively). 59 [a]c ~'N: (dl--.d.)lo ["N: (dl """ d,)"]~ ["(dx'.. d.)'l. ["N"]¢ = F(a) if vi = ~di]c is defined for each i (1 < i < n), then = t~(c)(F(N),vl.. "vn) undefined otherwise if vi = [di]e is defined for each i (1 < i < n), then i¢(c')(d) where d = (F(N), vl ... vn) undefined otherwise if vi = [di]e is defined for each i (1 < i < n), then = ~¢(d)(d) where c = (u, v) and d = (u, Vl..-v,) undefined otherwise = i¢(d)(e) where c = (u, v) and d= (F(N), v) Figure 3: Denotation function for DATR Descriptors There have been a number of formal treatments of defaults in the setting of attribute-value formalisms (Carpenter, 1993; Bouma, 1992; Russell et al., 1992; Young and Rounds, 1993). Each of these approa- ches formalizes a notion of default inheritance by defining appropriate operations (e.g. default unifi- cation) for combining strict and default information. Strict information is allowed to over-ride default in- formation where the combination would otherwise lead to inconsistency (i.e. unification failure). 
In the case of DATR however, the formalism does not draw an explicit distinction between strict and de- fault values for paths. In fact, all of the information given explicitly in a DATR theory is strict. The non- monotonic nature of DATR theories arises from a general, default mechanism which 'fills in the gaps' by supplying values for paths not explicitly speci- fied in a theory. More specifically, DATR's default mechanism ensures that any path that is not expli- citly specified for a given node will take its definition from the longest prefix of that path that is specified. Thus, the default mechanism defines a class of im- plicit, definitional sentences with paths on the left that extend paths found on the left of explicit sent- ences. Furthermore, this extension of paths is also carried over to paths occurring on the right. In ef- fect, each (explicit) path is associated not just with a single value specification, but with a whole family of specifications indexed by extensions of those paths. This suggests the following approach to the se- mantics of defaults in DATR. Rather than interpre- ting node definitions (in a given global context) as partial functions from paths to values (i.e. of type U* --+ U*) we choose instead to interpret them as partial functions from (explicit) paths, to functions from extensions of those paths to values (i.e. of type U* -+ (U* --+ U*)). Now suppose that f : U* --~ (U* --~ U*) is the function associated with the node definition T/N in a given DATR interpretation. We can define a partial function A(f) : U* --~ U* (the default interpretation of T/N) as follows. For each v E U* set A(f)(v) = f(vl)(V2) where v = vlv2 and vx is the longest prefix of v such that f(vl) is defined. In effect, the function A(f) makes explicit that information about paths and values that is only implicit in f, but just in so far as it does not conflict with explicit information provided by f. In order to re-interpret node definitions in the manner suggested above, it is necessary to modify the interpretation of value descriptors. In a given global context c, a value descriptor d now corre- sponds to a total function [d]~ : U* --+ U* (intui- tively, a function from path extensions to values). For example, atoms now denote constant functions: [a]c(v) = F(a) for all v G U" More generally, value descriptors will denote dif- ferent values for different paths. Figure 4 shows the revised clause for global node/path pairs, the other definitions being very similar. Note the way in which the 'path' argument v is used to extend Vl ...vn in order to define the new local (and in this case also, global) context c ~. On the other hand, the meaning of each of the di is obtained with respect to the 'em- pty path' e (i.e. path extension does not apply to subterms of inheritance descriptors). As before, the interpretation function is extended to sequences of path descriptors, so that for ¢ = dl...d, (n >_ o) we have [¢]~(v) = Vl...v, G V*, if vi = Idil(v) is defined, for each i (1 < i < n) (and [¢],(v) is undefined otherwise). The definition of the interpretation of node definitions can be taken over unchanged from the previous section. However, for a theory T and node N, the function [T/N]e is now of type U* --+ (U* ~ U*). An interpretation I = (U, x, F) is a default model for theory T just in case for every context c and node N we have: IN], _~ A(IT"/NI,) As an example, consider the default interpretation of the definition of the node Walk given above. 
By 60 ["N: (dl'-"dn)"]c(v) ={ if v, = [dil¢(e) is defined for each i(1 < i < n), then ~(d)(d) where c'= (f(g),vl...vnv) undefined otherwise Figure 4: Revised denotation for global node/path pairs definition, any default model of the theory of figure 1 must respect the following containment: [W kL ((mor root), Av.walk)} /,From the definition of A, it follows that for any path v, if v extends (mor root), then it is mapped onto the value walk, and otherwise it is mapped to the value given by [Verb : 0It(v). We have the following picture: [Walklc _D {(0, [Verb: Oft(O)), ((mor), [Verb: Olc((mor))), ((mor root), walk), ((mor root root), walk), • . .} The default models of a theory 7" constitute a pro- per subset of the models ofT: just those that respect the default interpretations of each of the nodes defi- ned within the theory. 5 Conclusions The work described in this paper fulfils one of the objectives of the DATR programme: to provide the language with an explicit, declarative semantics. We have presented a formal model of DATR as a lan- guage for defining partial functions and this model has been contrasted with an informal view of DATR as a language for representing inheritance hierar- chies. The approach provides a transparent treat- ment of DATR's notion of (local and global) context and accounts for DATR's default mechanism by re- garding value descriptors (semantically) as families of values indexed by paths. The provision of a formal semantics for DATR is important for several reasons. First, it provi- des the DATR user with a concise, implementation- independent account of the meaning of DATR theo- ries. Second, it serves as a standard against which other, operational definitions of the formalism can be judged. Indeed, in the absence of such a stan- dard, it is impossible to demonstrate formally the correctness of novel implementation strategies (for an example of such a strategy, see (Langer, 1994)). Third, the process of formalisation itself aids our understanding of the language and its relationship to other non-monotonic, attribute-value formalisms. Finally, the semantics presented in this paper provi- des a sound basis for subsequent investigations into the mathematical and computational properties of DATR. 6 Acknowledgements The author would like to thank Roger Evans, Gerald Gazdar, Bill Rounds and David Weir for helpful dis- cussions on the work described in this paper. References Francois Andry, Norman Fraser, Scott McGlashan, Simon Thornton, and Nick Youd. 1992. Ma- king DATR work for speech: lexicon compila- tion in SUNDIAL• Computational Linguistics, 18(3):245-267. Gosse Bouma. 1992. Feature structures and nonmo- notonicity. Computational Linguistics, 18(2):183- 203. Lynne Cahill and Roger Evans. 1990. An applica- tion of DATR: the TIC lexicon. In Proceedings of the 9th European Conference on Artificial Intelli- gence, pages 120-125. Lynne Cahill. 1993. Morphonology in the lexicon. In Proceedings of the 6th Conference of the Euro- pean Chapter of the Association for Computatio- nal Linguistics, pages 87-96. Lynne Cahill. 1994. An inheritance-based lexicon for message understanding systems. In Procee- dings of the ~th ACL Conference on Applied Na- tural Language Processing, pages 211-212. Bob Carpenter. 1993. Skeptical and credulous de- fault unification with applications to templates and inheritance. In Ted Briscoe, Valeria de Paiva, and Ann Copestake, editors, Inheritance, Defaults and the Lexicon, pages 13-37. Cambridge Univer- sity Press, Cambridge. 
Greville Corbett and Norman Fraser. 1993. Net- work morphology: a DATR account of Russian nominal inflection. Journal of Linguistics, 29:113- 142. Roger Evans and Gerald Gazdar. 1989a. Inference in DATR. In Proceedings of the ~th Conference of the European Chapter of the Association for Com- putational Linguistics, pages 66-71. 6] Roger Evans and Gerald Gazdar. 1989b. The se- mantics of DATR. In Proceedings of AISB-89, pages 79-87. Roger Evans, Gerald Gazdar, and David Weir. 1994. Using default inheritance to describe LTAG. In 3e Colloque International sur les Grammaires d'Arbres Adjoints (TAG-l-3), pages 79-87. Roger Evans, Gerald Gazdar, and David Weir. 1995. Encoding lexicalized tree adjoining gram- mars with a nonmonotonic inheritance hierarchy. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. Gerald Gazdar. 1992. Paradigm function morpho- logy in DATR. In Lynne Cahill and Richard Coa- tes, editors, Sussez Papers in General and Compu- tational Linguistics, number CSRP 239 in Cogni- tive Science Research Papers, pages 45-53. Uni- versity of Sussex, Brighton. Dafydd Gibbon and Doris Bleiching. 1991. An ILEX model for German compound stress in DATR. In Proceedings of the FORWISS-ASL Workshop on Prosody in Man-Machine Commu- nication. James Kilbury. 1992. Pardigm-based derivational morphology. In Guenther Goerz, editor, Procee- dings of KONVENS 92, pages 159-168. Springer, Berlin. Adam Kilgariff. 1993. Inheriting verb alternations. In Proceedings of the 6th Conference of the Euro- pean Chapter of the Association for Computatio- nal Linguistics, pages 213-221. Hagen Langer. 1994. Reverse queries in DATR. In Proceedings of the 15th International Conference on Computational Linguistics, volume II, pages 1089-1095, Kyoto. Graham Russell, Afzal Ballim, John Carroll, and Susan Warwick-Armstrong. 1992. A practi- cal approach to multiple default inheritance for unification-based lexicons. Computational Lingui- stics, 18(2):311-337. Mark Young and Bill Rounds. 1993. A logical se- mantics for nonmonotonic sorts. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, pages 209-215. 62 | 1995 | 8 |
User-Defined Nonmonotonicity in Unification-Based Formalisms Lena Str~Smb~ick Department of Computer and Information Science LinkSping University S-58185 LinkSping, Sweden lestr~ida, liu. so Abstract A common feature of recent unification- based grammar formalisms is that they give the user the ability to define his own structures. However, this possibility is mostly limited and does not include non- monotonic operations. In this paper we show how nonmonotonic operations can also be user-defined by applying default lo- gic (Reiter, 1980) and generalizing previous results on nonmonotonic sorts (Young and Rounds, 1993). 1 Background Most of the more recent unification-based forma- lisms, such as TFS (Emele and Zajac, 1990), UD (Johnson and Rosner, 1989), CUF (DSrre and Eisele, 1991), and FLUF (StrSmb~ick, 1994), provide some possibility for the user to define constructions of his own. This possibility can be more or less power- ful in different formalisms. There are, however, se- veral constructions proposed as desirable extensions to unification grammars that cannot be defined in a general and well-defined way in these formalisms. One such class of constructions is those that have some degree of nonmonotonic behaviour. Examples of such constructions are any-values, default-values, and some constructions (e.g. constraining equations, completeness and coherence) used in LFG (Kaplan and Bresnan, 1983). This paper describes a method that permits the user to define such nonmonotonic constructions. This is done through generalizing the work on non- monotonic sorts (Young and Rounds, 1993). This generalization results in a default logic similar to (Reiter, 1980), but where subsumption and unifica- tion are used instead of logical truth and consistency. There are three main advantages to Young and Rounds' work compared with other approaches to default unification (Bouma, 1990; Bouma, 1992; Russel et al., 1992) which justify choosing it as a starting point for this work. The first is the se- paration of definite and default information, where Young and Rounds are more distinct than the other. The second is that the nonmonotonic uni- fication operation used is order independent. This is achieved by separating the unification operation from computing the nonmonotonic extension, which Young and Rounds call explanation. This suggests that all the user needs to define when generalizing the approach is how a sort is explained. Finally, there is a close relationship to Reiter's (1980) de- fault logic. This paper starts by providing the minimal pro- perties required of a unification-based formalism when extending with nonmonotonic definitions. I then describe the approach of user-defined nonmo- notonicity and illustrate how some commonly used nonmonotonic constructions can be defined within it. Finally I conclude with a discussion of the re- lation to Reiter's default logic and computational properties of the approach. 2 Preliminaries There are two main properties that will be assumed of a unification-based formalism in order to extend it with the possibility of defining nonmonotonic con- structions. The first, and most important, is that we require a subsumption order on the set S of ob- jects denoted by the formalism. Secondly it should be possible to define inheritance hierarchies on the linguistic knowledge described by the formalism. One very plausible subsumption order that can be used is the ordinary subsumption lattice of feature structures. 
It is, however, possible to use some other kind of subsumption order if that is more suitable for the domain to be modelled by the formalism. Ex- amples of other subsumption orders that might be useful are typed feature structures, feature structu- res extended with disjunction, or simply an order based on sets and set inclusion. In this paper the notation a U b is used whene- ver a subsumes b (i.e. whenever a "is more specific than" or "contains more information than" b). Con- sequently, a I'- b is used whenever a _ b but a ¢ b. The subsumption order is assumed to be a semi- 63 lattice and permits computing a unifier, denoted a N b, corresponding to the greatest lower bound, for every pair of elements within it. The element corresponding to the bottom of the order relation is denoted fail and represents inconsistent information or unification failure. The second constraint placed on the formalism, the possibility of defining an inheritance hierarchy, is not essential for the definition of nonmonotonic operations. It is, however, very useful when de- fining nonmonotonic constructions. The following notation will be used for describing an inheritance hierarchy. class the name of the class; isa its parent in the hierarchy; requires a structure. Thus, each member in the inheritance hierarchy is called a class, which is defined by giving it a name and a parent in the hierarchy. It is also possible to define some constraints, called requirements, which must hold for a class. These requirements can be both structures in the subsumption order and non- monotonic rules. The constraints on classes are inhe- rited through the hierarchy. Every object in a class is assumed to contain at least the information given by the constraints specified for it and all its ancestors. For simplicity multiple inheritance between classes will not be allowed. This means that two classes where none of them is a subclass of the other, will always be considered inconsistent and thus yield a failure when unified. 3 User-Defined Nonmonotonicity I will now describe how the work by Young and Rounds can be generalized to allow the user to de- fine nonmonotonic constructions. The main idea in their approach is that each node in a feature struc- ture consists of a nonmonotonic sort. Such sorts can contain two different kinds of information, the ordi- nary monotonic information and a set of defaults. If we assume that fl is defined as a default in Young and Rounds' work then it is interpreted according to the rule: if it is consistent to believe # then believe #. In Reiter's default logic this is expressed with the following normal default rule. :# # In this paper I want to allow the user to use other forms of nonmonotonic inferences and not only the normal default rule given above. Therefore, I will consider the general form of default rules. An in- tuitive reading of a general default rule is, if a is believed and it is consistent to believe # then be- lieve 7. In default logic this is usually expressed as 7 The next question is how such defined nonmonoto- nic rules are going to be interpreted in a unification framework. In (Reiter, 1980), a rule like the one above could be applied whenever a is true and # is consistent with the information we already have. If we assume that V represents the information al- ready given this means that the default rule can be applied whenever Y C a and Y I-I # does not yield unification failure. When the rule is applied the new information obtained would be 1/Iq 7. 
In the approach described in this paper, the user is allowed to define the actual nonmonotonic rule that should be used for a particular operation by using the following syntax. nonmon name(parameter1,...parametern) : when a :#=>7 In the syntax given above name assigns a name to the defined rule, and thus allows the user to use nonmonotonic information when defining lin- guistic knowledge. The parameters in the rule de- finition are variables, which can be used within the actual default rule at the end of the descrip- tion. The user is assumed to assign the nonmonoto- nic information contained in this rule to his lingui- stic knowledge by using an expression of the form narne(pararneterl , . . . parametern ). The when slot in the rule allows the user to decide when the rule is going to be applied, or in Young and Rounds' terminology, explained. I will make use of two values for the when-slot, immediate and posterior. Immediate means that the nonmonotonic rule is going to be applied each time a full unifi- cation task has been solved or whenever all infor- mation about an object in the defined inheritance hierarchy has been retrieved. Posterior explanation means that the explanation of the rule is postponed until reaching the result of some external process, for example, a parser or generator. There is howe- ver no hinder in excluding the use of other values here. One could, for example, imagine cases where one would want different nonmonotonic rules to be explained after a completed parse, a generation, or after resolving discourse referents. Note that although the when slot in the defini- tion of a nonmonotonic rule allows the user to define when his rule is going to be applied we will still have an order independent nonmonotonic unification ope- rator. This is the case because we follow Young and Rounds' approach and separate the unification ope- ration from the explanation of a nonmonotonic rule. Therefore, what affects the final result of a computa- tion is when one chooses to explain default rules and not the order of the unification operations occurring between such explanations. 64 4 Formal Definitions In this section I provide give the formal definitions for nonmonotonic sorts and how nonmonotonic sorts are unified and explained. The definitions are gene- ralizations of the definitions in Young and Rounds (1993). The notation a -,~ b is used to denote the fact that a I-1 b does not yield unification failure. A nonmonotonic sort is a structure containing both information from the basic subsumption order and information about default rules to be explained at a later point in the computation. Definition 1 A nonmonotonic sort is a pair (s, A) where s E S and A is a set of nonmonotonic rules of the form (w, a : fl ==~ 3') where w is an atom and a, fl and 3' E S. It is assumed that for each nonmonotonic rule 3' _C fl, a --, s, fl ,~ s, and 713s C s. As seen by the definition a nonmonotonic sort is considered to be a pair of monotonic information from the subsumption order and nonmonotonic in- formation represented as a set of nonmonotonic ru- les, The user can assign nonmonotonic information to a nonmonotonic sort by calling a nonmonotonic definition as defined in the previous section. The ac- tual nonmonotonic rule occurring within the sort is a pair consisting of the when slot and the last part of the nonmonotonic definition, with the parameter variables instantiated according to the call made by the user. 
The second part of this definition contains some well-foundedness conditions for a nonmonotonic sort. The first condition (3' _C ~) is a restriction similar to the restriction to normal default rules in Reiter's (1980) default logic. This restriction ensu- res that the application of one default rule will never cause previously applied default rules to be inappli- cable. This makes the procedure for application of defaults more efficient and will be further discussed in section 6. The next two conditions in the definition, a ,-, s and fl ~ s, guarantee that the default rule is or can be applicable to the nonmonotonic sort. The reason for only checking that a ~ s instead of s C a is that future unifications can restrict the value of s into something more specific than a and thus may make the default rule applicable. The last condition on a nonmonotonic sort, 3'Us r- s, may seem superfluous. The reason for including it is to ensure that applying the default actually re- stricts the value of the sort. Otherwise the default rule would have no effect and can be removed. Note in particular that the above conditions on a nonmo- notonic sort implies that 7 may be fail. Given the unification operation of objects within the subsumption order and the definition of nonmo- notonic sorts it is possible to define an operation for nonmonotonic unification. Definition 2 The nonmonotonic unification (n~v) of two nonmonotonic sorts (sl, A1) and (s2, A2) is the sort (s, A) where $ S ~ S 1 17 $2 and , A = {did= (w, tr : fl ::¢, 7), de A1U A2, a.,~s, ~,..s, andTtqst-s} The nonmonotonic unification is computed by computing the unification of the monotonic parts of the two sorts and then taking the union of their non- monotonic parts. The extra conditions used when forming the union of the nonmonotonic parts of the sorts are the same as in the definition of a nonmo- notonic sort and their purpose is to remove nonmo- notonic rules that are no longer applicable, or would have no effect when applied to the sort resulting from the unification. It is important to note that this generalization of the original definition of nonmonotonic unification from Young and Rounds (1993) preserves the pro- perty of order independence for default unification. When using nonmonotonie sorts containing non- monotonic rules, we also need to know how to merge the monotonic and nonmonotonic informa- tion within the sort. I will use the terminology w- application for applying one nonmonotonic rule to the sort and w-ezplanation when applying all possi- ble rules. Definition 3 The nonmonotonic rule (w, c~ : fl =¢, 7) is w-applicable to s E S if: • sI--Ot • s...flors=fail • slqTl-sors=fail The result of the w-application is 3' I'1 s Note that the w in w-application should be consi- dered as a variable. This means that only nonmono- tonic rules whose first component is w are considered and that it is possible to choose which nonmonoto- nic rules should be applied in a particular point at some computation. In addition note that the restriction that 7 --- in all nonmonotonic rules and the special cases for s = fail ensures that the application of one non- monotonic rule never destroys the applicability of a previously applied rule. This reduces the amount of work required when computing a w-explanation. Based on these observations, a sufficient condition for w-explanation is defined as follows. Definition 4 t is a w-ezplanation of a nonmono- tonic sort (s, A) if it can be computed in the following way: 1. 
If s = fail or no d E A is w-applicable then t = s else 2. Ch,,ose a d = (w, cr : fl =¢, 7) E A such that d is w-applicable to s. 3. Let s = sl'lT and go to 1. 65 As shown by the definition, a w-explanation is computed by choosing one w-applicable default rule at a time and then applying it. Since the defini- tion of w-applicability and the condition that 7 - in all nonmonotonic rules ensures that whenever a nonmonotonic rule is applied it can never be inapp- licable, there is no need to check if the preconditions of earlier applied nonmonotonic rules still hold. Note also that the choice of which nonmonotonic rule to apply in each step of a w-explanation is non- deterministic. Consequently, it is possible to have conflicting defaults and multiple w-explanations for a nonmonotonic sort. Note also that the result of a w-explanation is al- lowed to be fail. Another option would be to inter- pret .fail as if the application of the nonmonotonic rule should not be allowed. However, as seen in the next section, for many uses of nonmonotonic exten- sions within unification-based formalisms, the aim is to derive failure if the resulting structure does not fulfill some particular conditions. This makes it im- portant to allow fail to be the result of applying a nonmonotonic rule. 5 Examples In this section I will show how some of the most common nonmonotonic extensions to unification- based grammar can be expressed by defining rules as above. I will start with defining default values. This is done by defining a nonmonotonic rule default for the class value, which is assumed to be the most general class in a defined hierarchy. The rule defi- ned here is treated as the one in (Young and Rounds, 1993). class value ; nonmon default(X) :immediate :X => X. This default rule can be used when defining verbs. The rule is used for stating that verbs are active by default. I also define the two Swedish verbs skickade (sent) and skickades (was sent) to illustrate how this rule works. class verb; isa value ; requires [form: default(active)]. class skickade; isa verb; requires [lex: skicka] . class skickades ; isa verb; requires [lex: skicka, form: passive]. While retrieving the information for these two verbs we will obtain the following two feature struc- tures containing nonmonotonic sorts: For skickade: [lex: skicka, form: ([I ,{(immediate, :active active )})3 For skickades: [lex: skicka, form: (passive,{(immediate, :active ::~ active )})] Since I have used immediate for the when-slot in the definition of the default rule, this nonmonotonic rule will be applied immediately after retrieving all information about a verb in the hierarchy. For the two structures above, the default rule can be app- lied for skickade, since active is consistent with D, but not for skickades, since active and passive are inconsistent. The result after applying immediate- explanation to the two structures above is shown below. For skickade: [lex: skicka, form: active] For skickades: [lex: skicka, form: passive] Another nonmonotonic operation that has been used in LFG (Kaplan and Bresnan, 1983) is the va- lue constraint (=e) used to check whether a sub- structure has a particular value after a completed parse. The definition of value constraints as a non- monotonic rule makes use of negation, interpreted as negation as failure. class value ; nonmon =c(X):posterior :-~X => fail. One use of value constraints in LFG is to assert a condition that some grammar rules can only be used for passive sentences. 
I will here assume that a representation for verbs where passive verbs have the value passive for the attribute form, but where other verbs have no value for this attribute. In the syntax used in this paper the constrMnt that a par- ticular grammar rule can only be used for passive verbs would be expressed as below: [form: =c(passive)] This would result in the nonmonotonic sort: [form: ([],{(posterior, : ~passive fail )})3 As seen by the definition of =c, the explanation for this nonmonotonic sort is postponed and is assu- med to be computed after finding a parse for some sentence. This implies that the only case where this rule would not apply, and thus not give fail as a re- sult, is when the value of form actually is passive. For all other values of form, we would have some- thing that is consistent with ~passive and thus the nonmonotonic rule will derive failure when applied. 66 The next nonmonotonic structure I want to dis- cuss is any-values. The inheritance hierarchy is used to be able to define any-values in a simple way. class value. class none; isa value. class any_value; isa value. nonmon any():posterior :any_no_value => fail. class any_no_value ; isa any_value. In this small hierarchy it is assumed that all pos- sible values of a structure is a subtype of value. We then divide this into none, which represents that a structure cannot have any value and any_value which contains all actual values. The class any_value is then further divided into a class called any_no_value, which only contains this single value, and the ac- tual values of a structure. The class any_no_value should not be used when defining linguistic know- ledge. However, when applying the default rule a value that has not been instantiated is compatible with this any_no_value. Therefore the default rule can make the conclusion that the structure is in- consistent, which is what we desire. Note that, as soon as a value has been further instantiated into a 'real' value, it will no longer be consistent with any_no_value, and the nonmonotonic rule cannot ap- ply. Two examples will further illustrate this. The nonmonotonic sort: (0, {( posterior, :any_no_value fail )}) will be posterior-explained to: fail While the sort: ([lex: kalle], {( posterior, :any_no_value ~ fail )}) will be posterior-explained to: [lex : kalle] The last nonmonotonic operations I want to dis- cuss are completeness and coherence as used in LFG. To be able to define these operations I assume the inheritance hierarchy above, without the nonmono- tonic definition of any. I will, instead, make use of the two nonmonotonic definitions below. class value; nonmon coherence(A) :immediate : [A: none] => [A: none]; nonmon completeness (A) :posterior :[A: any_no_value] => fail. The first of these rules is used to check coherence, and the effect is to add the value none to each attri- bute that has been defined to be relevant for cohe- rence check, but has not been assigned a value in the lexicon. The second rule is used for checking com- pleteness and it works similarly to the any-definition above. Finally, I will show how a fragment of a lexicon can be defined according to these rules. Note that in the definition of the transitive verb, the value any_value is given to the appropriate attributes. This means that they are inconsistent with none, and thus, the coherence rule cannot be applied. concept verb; isa any_value ; requires coherence(subj) A coherence(obj) A ...; requires completeness(subj) A completeness(obj ) A .... 
concept transitiveverb; isa verb; requircs [subj: any_value, obj: any_value]. 6 Relation to Default Logic In this section I will discuss the relation of this work to Reiter's (1980) default logic. There will also be some discussion on the computational properties and limitations of the given approach. Compared with Reiter's default logic, our notion of nonmonotonic sorts corresponds to default theo- ries. Unification of nonmonotonic sorts would then correspond to merging two default theories into one single theory and our notion of explaining a nonmo- notonic sort corresponds to computing the extension of a default theory in default logic. In default logic there is often a restriction to normal-default theories since non-normal default theories are not even semi-decidable. The restric- tion in our nonmonotonic rules that 7 C fl is similar to the restriction into normal default rules and cap- tures the important property, that the application of one nonmonotonic rule should not affect the appli- cability of previously applied rules. The decidability of the nonmonotonic rules defined here is, however, highly dependant on the given subsumption order. In particular it is dependent on having a decidable unification operation and subsumption check. As mentioned previously there is with nonmonoto- nic sorts, as well as normal default logic, a possibility of conflicting defaults and thus multiple nonmono- tonic extensions for a structure. One difference is that nonmonotonic sorts allow that the application of a nonmonotonic rule leads to fail, i.e. an incon- sistent structure, while default logic does not allow this outcome. However, since fail is allowed as a va- lid explanation for a nonmonotonic sort, there is, as 67 for normal default logic, always at least one expla- nation for a sort. The two following examples will illustrate the dif- ference between nonmonotonic rules giving multiple extensions and nonmonotonic rules giving a single explanation fail. Example a :In:l] :[c:2] [a:l b:l] [b:2 e:2] Example b :[a:l] :[b:21 [a:l b:l] [a:2 b:2] In example a the application of one rule, does not make the other inapplicable. Thus the only expla- nation for a structure is achieved by applying both these two rules and results in fail. In example b, however, the application of one of the rules would block the application of the other. Thus, in this case there are two explanations for the structure de- pendant on which of the rules that has been app- lied first. Note that even though there is an order dependency on the application order of nonmonoto- nic rules this does not affect the order independency on nonmonotonic unification between application of nonmonotonic rules. Allowing multiple extensions gives a higher com- putational complexity than allowing only theories with one extension. Since it is the user who defines the actual nonmonotonic theory multiple extensions must be allowed and it must be considered a task for the user to define his theory in the way he prefers. 7 Improvements of the Approach I will start with two observations regarding the de- finitions given in section 3. First, it is possible to generalize these definitions to allow the first com- ponent of a nonmonotonic sort to contain substruc- tures that are also nonmonotonic sorts. With the generalized versions of the definitions explanations that simultaneously explain all substructures of a nonmonotonic sort will be considered. 
Note that the explanation of default rules at one substructure might affect the explanation of rules at other sub- structures. Therefore the order on which nonmono- tonic rules at different substructures are applied is important and all possible application orders must be considered. Considering unification of nonmonotonic sorts it is not necessary to simplify the nonmonotonic part of the resulting sort. A = A i U A2 can be defined as an alternative to the given definition. This alternate definition is useful for applications where the simpli- fication of nonmonotonic sorts by each unification is expected to be more expensive than the extra work needed to explain a sort whose nonmonotonic part is not simplified. As stated previously, nonmonotonic sorts allow multiple explanations of a nonmonotonic sort. If de- sired, it would be fairly easy to add priorities to the nonmonotonic rules, and thus induce a preference order on explanations. One further example will illustrate that it is also possible to define negation as failure with nonmono- tonic rules. An intuitive interpretation of the defined rule below is that if X is believed (1/E X), failure should be derived. nonmon not(X) :immediate X => fail; However, if this definition is to be really useful we must also allow one definition of a nonmonoto- nic rule to make use of other nonmonotonic rules. In our original definition we said that the nonmo- notonic rule above should be applied if Y ,~ --X. This can be generalized to the case where --X is a nonmonotonic rule if we extend the definition of -~ to also mean that the application (or explanation) of the not rule at this node does not yield failure. However, this generalization is outside default logic. Therefore, its computational properties are unclear and needs more investigation. 8 Conclusion In this paper I have proposed a method allowing the user to define nonmonotonic operations in a unification-based grammar formalism. This was done by generalizing the work on nonmonotonic sorts (Young and Rounds, 1993) to allow not only normal defaults rules but general default rules that are defined by the user. The method has a very close relation to Reiter (1980). We also noted that the method can be applied to all domains of structu- res where we have a defined subsumption order and unification operation. The generality of the approach was demonstrated by defining some of the most commonly used nonmo- notonic operations. We also gave formal definitions for the approach and provided a discussion on its computational properties. Acknowledgments This work has been supported by the Swedish Re- search Council for Engineering Sciences (TFR). I would also like to thank Lars Ahrenberg and Pa- trick Doherty for comments on this work and Mark A. Young for providing me with much-needed infor- mation about his and Bill Rounds' work. References Gosse Bouma. 1990. Defaults in Unification Gram- mar. in Proceedings of ~he 1990 Conference of ~he 68 Association for Computational Linguistics, pages 165-172. Gosse Bouma. 1992. Feature Structures and Nonmonotonicity. Computational Linguistics 18(2):183-203. Jochen D6rre and Andreas Eisele. 1991. A Compre- hensive Unification-Based Grammar Formalism. DYANA Report - Deliverable R3.1B. January 1991. Martin C. Emele, and Remi Zaja¢. 1990. Typed Unification Grammars. In Proceedings of the I gth International Conference on Computational Lin- guistics, Vol. 3, pages 293-298, Helsinki, Finland. Rod Johnson and Michael Rosner. 1989. 
A Rich En- vironment for Experimentation with Unification Grammars. In Proceedings of the 4th Conference of the European Chapter of the Association for Computational Linguistics, pages 182-189, Man- chester, England. R. Kaplan and J.Bresnan. 1983. A Formal System for Grammatical Representation. In: J Bresnan (ed.), The Mental Representation of Grammatical Relations. MIT Press, Cambridge, Massachusetts. Ray Reiter. 1980. A Logic for Default Reasoning. In Artificial Intelligence, 13:81-132. Graham Russel, Afzal Ballim, John Carrol and Susan Warwick-Armstrong. 1992. A Practi- cal Approach to Multiple Default Inheritance for Unification-Based Lexicons. Computational Lin- guistics 18(3):311-337. Lena Str6mb/ick. 1994. Achieving Flexibility in Uni- fication Grammars. In Proceedings of the 15th In- ternational Conference on Computational Lingui- stics, Vol. 3, pages 842-846, Kyoto, Japan. Mark A Young and Bill Rounds. 1993. A Logi- cal Semantics for Nonmonotonic Sorts. In Procee- dings of the 1993 Conference of the Association for Computational Linguistics, pages 209-215 69 | 1995 | 9 |
Higher-Order Coloured Unification and Natural Language Semantics Claire Gardent Computational Linguistics Universit£t des Saarlandes D-Saarbriicken claire@coil, uni-sb, de Michael Kohlhase Computer Science Universit~t des Saarlandes D-Saarbriicken kohlhase¢cs, uni-sb, de Abstract In this paper, we show that Higher-Order Coloured Unification - a form of unification developed for automated theorem proving - provides a general theory for modeling the interface between the interpretation process and other sources of linguistic, non semantic information. In particular, it pro- vides the general theory for the Primary Occurrence Restriction which (Dalrymple et al., 1991)'s analysis called for. 1 Introduction It is well known that Higher-Order Unification (HOU) can be used to construct the semantics of Natural Language: (Dalrymple et al., 1991) - hence- forth, DSP - show that it allows a treatment of VP- Ellipsis which successfully captures the interaction of VPE with quantification and nominal anaphora; (Pulman, 1995; Gardent and Kohlhase, 1996) use HOU to model the interpretation of focus and its interaction with focus sensitive operators, adverbial quantifiers and second occurrence expressions; (Gar- dent et al., 1996) shows that HOU yields a sim- ple but precise treatment of corrections; Finally, (Pinkal, 1995) uses linear HOU to reconstruct under- specified semantic representations. However, it is also well known that the HOU approach to NL semantics systematically over- generates and that some general theory of the in- terface between the interpretation process and other sources of linguistic information is needed in order to avoid this. In their treatment of VP-ellipsis, DSP introduce an informal restriction to avoid over-generation: the Primary Occurrence Restriction (POR). Although this restriction is intuitive and linguistically well- motivated, it does not provide a general theoretical framework for extra-semantic constraints. In this paper, we argue that Higher-Order Coloured Unification (HOCU, (cf. sections 3,6), a restricted form of HOU developed independently for theorem proving, provides the needed general frame- work. We start out by showing that the HOCU approach allows for a precise and intuitive model- ing of DSP's Primary Occurrence Restriction (cf. section 3.1). We then show that the POR can be extended to capture linguistic restrictions on other phenomena (focus, second occurrence expressions and adverbial quantification) provided that the no- tion of primary occurrence is suitably adjusted (cf. section 4). Obviously a treatment of the interplay of these phenomena and their related notion of primary occurrence is only feasible given a precise and well- understood theoretical framework. We illustrate this by an example in section 4.4. Finally, we illustrate the generality of the HOCU framework by using it to encode a completely different constraint, namely Kratzer's binding principle (cf. section 5). 2 Higher-Order Unification and NL semantics The basic idea underlying the use of HOU for NL semantics is very simple: the typed A-calculus is used as a semantic representation language while se- mantically under-specified elements (e.g. anaphors and ellipses) are represented by free variables whose value is determined by solving higher-order equa- tions. For instance, the discourse (la) has (lb) as a semantic representation where the value of R is given by equation (lc) with solutions (ld) and (le). (1) a. Dan likes golf. Peter does too. b. like(dan, golf)AR(peter) c. 
like(dan,golf) = R(dan) d. R = Ax. like(x, golf) e. R = Ax. like(dan,golf) The process of solving such equations is tradition- ally called unification and can be stated as follows: given two terms M and N, find a substitution of terms for free variables that will make M and N equal. For first order logic, this problem is decidable and the set of solutions can be represented by a sin- gle most general unifier. For the typed A-calculus, the problem is undecidable, but there is an algorithm which - given a solvable equation - will enumerate a complete set of solutions for this equation (Huet, 1975). Note that in (1), unification yields a linguistically valid solution (ld) but also an invalid one: (le). To remedy this shortcoming, DSP propose an in- formal restriction, the Primary Occurrence Re- striction: In what follows, we present a unification framework which solves both of these problems. 3 Higher-Order Coloured Unification (HOCU) There is a restricted form of HOU which allows for a natural modeling of DSP's Primary Occurrence Restriction: Higher-Order Coloured Unification de- veloped independently for theorem proving (Hutter and Kohlhase, 1995). This framework uses a variant of the simply typed A-calculus where symbol occur- rences can be annotated with so-called colours and substitutions must obey the following constraint: Given a labeling of occurrences as either primary or secondary, the POR excludes of the set of linguistically valid solutions, any solution which contains a primary oc- currence. For any colour constant c and any c-coloured variable V~, a well-formed coloured substitution must assign to Vc a c- monochrome term i.e., a term whose sym- bols are c-coloured. Here, a primary occurrence is an occurrence that is directly associated with a source parallel element. Neither the notion of direct association, nor that of parallelism is given a formal definition; but given an intuitive understanding of these notions, a source parallel element is an element of the source (i.e. antecedent) clause which has a parallel counterpart in the target (i.e. elliptic or anaphoric) clause. To see how this works, consider example (1) again. In this case, dan is taken to be a primary occur- rence because it represents a source parallel element which is neither anaphoric nor controlled i.e. it is directly associated with a source parallel element. Given this, equation (lc) becomes (2a) with solu- tions (2b) and (2c) (primary occurrences are under- lined). Since (2c) contains a primary occurrence, it is ruled out by the POR and is thus excluded from the set of linguistically valid solutions. (2) a. like(dan, golf)=R(dan) b. R = Ax.like(x, golf) c. R = Ax.like(dan, golf) Although the intuitions underlying the POR are clear, two main objections can be raised. First, the restriction is informal and as such provides no good basis for a mathematical and computational evalua- tion. As DSP themselves note, a general theory for the POR is called for. Second, their method is a generate-and-test method: all logically valid solu- tions are generated before those solutions that vio- late the POR and are linguistically invalid are elimi- nated. While this is sufficient for a theoretical anal- ysis, for actual computation it would be preferable never to produce these solutions in the first place. 3.1 Modeling the Primary Occurrence Restriction Given this coloured framework, the POR is directly modelled as follows: Primary occurrences are pe- coloured whilst free variables are -~pe-coloured. 
For the moment we will just consider the colours pe (pri- mary for ellipsis) and ~pe (secondary for ellipsis) as distinct basic colours to keep the presentation sim- ple. Only for the analysis of the interaction of e.g. ellipsis with focus phenomena (cf. section 4.4) do we need a more elaborate formalization, which we will discuss there. Given the above restriction for well-formed coloured substitutions, such a colouring ensures that any solution containing a primary occurrence is ruled out: free variables are -~pe-coloured and must be assigned a -~pe-monochrome term. Hence no sub- stitution will ever contain a primary occurrence (i.e. a pe-coloured symbol). For instance, discourse (la) above is assigned the semantic representation (3a) and the equation (3b) with unique solution (3c). In contrast, (3d) is not a possible solution since it as- signs to an -~pe-coloured variable, a term containing a pe-coloured symbol i.e. a term that is not -~pe- monochrome. (3) a. like(danpe,gol f) A R~pe(peter) b. like(danpe, golf)= R~pe(danpe) c. R~pe = Ax.like(x, golf) d. R~pe = Ax.like(danpe,gOl f) 3.2 HOCU theory To be more formal, we presuppose a finite set g = {a, b, c, pe, -~pe,...) of colour constants and a 2 countably infinite supply ~ -- {A, B,...} of colour variables. As usual in A-calculus, the set wff of well- formed formulae consists of (coloured 1) con- stants ca,runs~,runsA,..., (possibly uncoloured) variables x, xa,yb,... (function) applications of the form MN and A-abstractions of the form Ax.M. Note that only variables without colours can be abstracted over. We call a formula M c- monochrome, if all symbols in M are bound or tagged with c. We will need the so-called colour erasure IMI of M, i.e. the formula obtained from M by erasing all colour annotations in M. We will also use various elementary concepts of the A-calculus, such as free and bound occurrences of variables or substitutions without defining them explicitly here. In particular we assume that free variables are coloured in all for- mulae occuring. We will denote the substitution of a term N for all free occurrences of x in M with [N/x]M. It is crucial for our system that colours annotate symbol occurrences (i.e. colours are not sorts!), in particular, it is intended that different occurrences of symbols carry different colours (e.g. f(xb, Xa)) and that symbols that carry different colours are treated differently. This observation leads to the no- tion of coloured substitutions, that takes the colour information of formulae into account. In contrast to traditional (uncoloured) substitutions, a coloured substitution a is a pair (at,at), where the term substitution a t maps coloured variables (i.e. the pair xc of a variable x and the colour c) to formulae of the appropriate type and the colour substitu- tion a c maps colour variables to colours. In order to be legal (a g-substitution) such a mapping a must obey the following constraints: • If a and b are different colours, then [a(xa)[ = [a(xb)[, i.e. the colour erasures have to be equal. • If c E C is a colour constant, then a(x¢) is c- monochrome. The first condition ensures that the colour erasure of a C-substitution is a well-defined classical substi- tution of the simply typed A-calculus. The second condition formalizes the fact that free variables with constant colours stand for monochrome subformu- lae, whereas colour variables do not constrain the substitutions. This is exactly the trait, that we will exploit in our analysis. 
1Colours axe indicated by subscripts labeling term occurrences; whenever colours axe irrelevant, we simply omit them. Note that/37/-reduction in the coloured A-calculus is just the classical notion, since the bound vari- ables do not carry colour information. Thus we have all the known theoretical results, such as the fact that/~/-reduction always terminates producing unique normal forms and that /3T/-equality can be tested by reducing to normal form and comparing for syntactic equality. This gives us a decidable test for validity of an equation. In contrast to this, higher-order unification tests for satisfiability by finding a substitution a that makes a given equation M = N valid (a(M) =~ a(N)), even if the original equation is not (M ~Z, N). In the coloured A-calculus the space of (se- mantic) solutions is further constrained by requiring the solutions to be g-substitutions. Such a substi- tution is called a C-unifier of M and N. In par- ticular, C-unification will only succeed if compara- ble formulae have unifiable colours. For instance, introa (Pa, jb, Xa) unifies with introa (Ya, jA, Sa) but not with introa (Pa, ja, sa) because of the colour clash on j. It is well-known, that in first-order logic (and in certain related forms of feature structures) there is always a most general unifier for any equation that is solvable at all. This is not the case for higher-order (coloured) unification, where variables can range over functions, instead of only individu- als. Fortunately, in our case we are not interested in general unification, but we can use the fact that our formulae belong to very restricted syntactic sub- classes, for which much better results are known. In particular, the fact that free variables only occur on the left hand side of our equations reduces the prob- lem of finding solutions to higher-order matching, of which decidability has been proven for the sub- class of third-order formulae (Dowek, 1992) and is conjectured for the general case. This class, (intu- itively allowing only nesting functions as arguments up to depth two) covers all of our examples in this paper. For a discussion of other subclasses of formu- lae, where higher-order unification is computation- ally feasible see (Prehofer, 1994). 3 Some of the equations in the examples have multi- ple most general solutions, and indeed this multiplic- ity corresponds to the possibility of multiple differ- ent interpretations of the focus constructions. The role of colours in this is to restrict the logically pos- sible solutions to those that are linguistically sound. 4 Linguistic Applications of the POR In section 3.1, we have seen that HOCU allowed for a simple theoretical rendering of DSP's Primary Oc- currence Restriction. But isn't this restriction fairly idiosyncratic? In this section, we show that the re- striction which was originally proposed by DSP to model VP-ellipsis, is in fact a very general constraint which far from being idiosyncratic, applies to many different phenomena. In particular, we show that it is necessary for an adequate analysis of focus, second occurrence expressions and adverbial quantification. Furthermore, we will see that what counts as a primary occurrence differs from one phenomenon to the other (for instance, an occurrence directly asso- ciated with focus counts as primary w.r.t focus se- mantics but not w.r.t to VP-ellipsis interpretation). To account for these differences, some machinery is needed which turns DSP's intuitive idea into a fully- blown theory. 
Fortunately, the HOCU framework is just this: different colours can be used for different types of primary occurrences and likewise for differ- ent types of free variables. In what follows, we show how each phenomenon is dealt with. We then illus- trate by an example how their interaction can be accounted for. 4.1 Focus Since (Jackendoff, 1972), it is commonly agreed that focus affects the semantics and pragmatics of utter- ances. Under this perspective, focus is taken to be the semantic value of a prosodically prominent ele- ment. Furthermore, focus is assumed to trigger the formation of an additional semantic value (hence- forth, the Focus Semantic Value or FSV) which is in essence the set of propositions obtained by making a substitution in the focus position (cf. e.g. (Kratzer, 1991)). For instance, the FSV of (4a) 2 is (4b), the set of formulae of the form l(j,x) where x is of type e, and the pragmatic effect of focus is to presuppose that the denotation of this set is under considera- tion. (4) a. Jon likes SARAH b. {l(j,x) l x e wife} In (Gardent and Kohlhase, 1996), we show that HOU can successfully be used to compute the FSV of an utterance. More specifically, given (part of) an utterance U with semantic representation Sere and foci F1... F n, we require that the following equa- 2Focus is indicated using upper-case. tion, the FSV equation, be soIved: Sem = Gd(F1)... (F ~) On the basis of the Gd value, we then define the FSV, written Gd, as follows: Definition 4.1 (Focus Semantic Value) Let Gd be of type ~ = ~k --~ t and n be the number of loci (n < k), then the Focus Semantic Value deriv- able from Gd, written G---d, is {Gd(tl... t n) I ti e wife,}. This yields a focus semantic value which is in essence Kratzer's presupposition skeleton. For in- stance, given (4a) above, the required equation will be l(j, s) = Gd(s) with two possible values for Gd: Ax.l(j, x) and Ax.l(j, s). Given definition (4.1), (4a) is then assigned two FSVs namely (5) a. Gd= {l(j,x) l x e Wife} b. G'--d = {l(j,s) l x ~ Wife} That is, the HOU treatment of focus over- generates: (5a) is an appropriate FSV, but not (5b). Clearly though, the POR can be used to rule out (5b) if we assume that occurrences that are directly associated with a focus are primary occurrences. To capture the fact that those primary occurrences are different from DSP's primary occurrences when deal- ing with ellipsis, we colour occurrences that are di- rectly associated with focus (rather than a source parallel element in the case of ellipsis) pf. Conse- quently, we require that the variable representing the FSV be -~pf coloured, that is, its value may not contain any pf term. Under these assumptions, the equation for (4a) will be (6a) which has for unique solution (6b). (6) a. l(j, Spf) = FSV~pf(Spf) b. FSV~pf = Ax.l(j, x) 4 4.2 Second Occurrence Expressions A second occurrence expression (SOE) is a partial or complete repetition of the preceding utterance and is characterised by a de-accenting of the repeating part (Bartels, 1995). For instance, (Tb) is an SOE whose repeating part only likes Mary is deaccented. (7) a. Jon only likes MARY. b. No, PETER only likes Mary. In (Gardent, 1996; Gardent et al., 1996) we show that SOEs are advantageously viewed as involving a deaccented anaphor whose semantic representation must unify with that of its antecedent. Formally, this is captured as follows. Let SSem and TSem be the semantic representation of the source and target clause respectively, and TP 1 ... TP n, SP 1 ... 
SP n be the target and source parallel elements 3, then the interpretation of an SOE must respect the following equations: An(Sp1,..., SP n) = SSem An(Tp1,..., TP '~) = TSem Given this proposal and some further assumptions about the semantics of only, the analysis of (Tb) in- volves the following equations: (8) An(j)= VP[P e {)~x.like(x,y) l y • wife} A P(j) ~ P = ~x.like(x, m)] An(p) = VP[P • FSV A P(p) --+ P = Ax.like(x, m)] Resolution of the first equation then yields two solutions: An = )~zVP[P • {;kx.like(x,y) l Y • wife} A P(z) ~ P = )~x.like(x, m)] An = AzVP[P • {)~x.like(x,y) l Y • wife} A P(j) ~ P = )~x.like(x, m)] Since An represents the semantic information shared by target and source clause, the second so- lution is clearly incorrect given that it contains in- formation (j) that is specific to the source clause. Again, the POR will rule out the incorrect solutions, whereby contrary to the VP-ellipsis case, all occur- rences that are directly associated with parallel el- ements (i.e. not just source parallel elements) are taken to be primary occurrences. The distinction is implemented by colouring all occurrences that are directly associated with parallel element ps, whereas the corresponding free variable (An) is coloured as --ps. Given these constraints, the first equation in (8) is reformulated as: An~ps(jps) = VP[P • {)~x.like(x,y) l Y • wife} A P(Jps) --+ P = Ax.like(x, m)] with the unique well-coloured solution An.,s = )~z.VP[P • {Ax.like(x,y) l y • wife} A P(z) --~ P = )~x.like(x, m)] 4.3 Adverbial quantification Finally, let us briefly examine some cases of adver- bial quantification. Consider the following example from (von Fintel, 1995): Tom always takes SUE to Al's mother. Yes, and he always takes Sue to JO's mother. In (Gardent and Kohlhase, 1996), we suggest that such cases are SOEs, and thus can be treated as involving a deaccented anaphor (in this case, the anaphor he always takes Sue to _'s mother). Given some standard assumptions about the semantics of 3As in DSP, the identification of parallel elements is taken as given. 5 always, the equations constraining the interpretation An of this anaphor are: An(al) = always (Tom take x to al's mother) (Tom take Sue to al's mother) An(jo) = always FSV (Tom take Sue to Jo's mother) Consider the first equation. If An is the semantics shared by target and source clause, then the only possible value for An is )~z.always (Tom take x to z's mother) (Tom take Sue to z's mother) where both occurrences of the parallel element m have been abstracted over. In contrast, the following solutions for An are incorrect. Az.always (Tom take x to al's mother) (Tom )~z.always (Tom (Tom Az.always (Tom take Sue to z's mother) take x to al's mother) take Sue to al's mother) take x to z's mother.) (Tom take Sue to al's mother) Once again, we see that the POR is a necessary restriction: by labeling as primary, all occurrences representing a parallel element, it can be ensured that only the first solution is generated. 4.4 Interaction of constraints Perhaps the most convincing way of showing the need for a theory of colours (rather than just an in- formal constraint) is by looking at the interaction of constraints between various phenomena. Consider the following discourse (9) a. Jon likes SARAH b. Peter does too Such a discourse presents us with a case of inter- action between ellipsis and focus thereby raising the question of how DSP' POR for ellipsis should inter- act with our POR for focus. 
As remarked in section 3.1, we have to interpret the colour -~pe as the concept of being not primary for ellipsis, which includes pf (primary for focus). In order to make this approach work formally, we have to extend the supply of colours by allowing boolean combinations of colour constants. The semantics of these ground colour formula is that of propositional logic, where -~d is taken to be equivalent to the dis- junction of all other colour constants. Consequently we have to generalize the second condition on C-substitutions • For all colour annotations d of symbols in a(xc) d ~ c in propositional logic. Thus X.d can be instantiated with any coloured formula that does not contain the colour d. The HOCU algorithm is augmented with suitable rules for boolean constraint satisfaction for colour equa- tions. The equations resulting from the interpretation of (9b) are: l(jpe, 8pf) ~-- R-,pe(jpe) R~pe(P) = FSV~pf(F) where the first equation determines the interpre- tation of the ellipsis whereas the second fixes the value of the FSV. Resolution of the first equation yields the value Ax.l(x, Spf) for R~pe. As required, no other solution is possible given the colour con- stralnts; in particular Ax.l(jpe, Spf) is not a valid so- lution. The value of R~pe(jpe) is now l(Ppe, 8pf) SO that the second equation is4: l(p, Spf) = FSV~pf(F) Under the indicated colour constraints, three so- lutions are possible: FSV~pf = Ax.l(p, x), F = spf FSV~pf = AO.O(p), F = Ax.l(x, Spf) FSV~pf = ~X.X, F = l(p, spf) The first solution yields a narrow focus read- ing (only SARAH is in focus) whereas the second and the third yield wide focus interpretations corre- sponding to a VP and an S focus respectively. That is, not only do colours allow us to correctly capture the interaction of the two PORs restricting the in- terpretation of ellipsis of focus, they also permit a natural modeling of focus projection (cf. (Jackend- off, 1972)). 5 Another constraint An additional argument in favour of a general the- ory of colours lies in the fact that constraints that are distinct from the POR need to be encoded to prevent HOU analyses from over-generating. In this section, we present one such constraint (the so-called weak-crossover constraint) and show how it can be implemented within the HOCU framework. In essence, the main function of the POR is to en- sure that some occurrence occuring in an equation appears as a bound variable in the term assigned by substitution to the free variable occurring in this equation. However, there are cases where the dual 4Note that this equation falls out of our formal sys- tem in that it is untyped and thus cannot be solved by the algorithm described in section 6 (as the solutions will show, we have to allow for FSV and F to have different types). However, it seems to be a routine exercise to aug- ment HOU algorithms that can cope with type variables like (Hustadt, 1991; Dougherty, 1993) with the colour methods from (Hutter and Kohlhase, 1995). 6 constraint must be enforced: a term occurrence ap- pearing in an equation must appear unchanged in the term assigned by substitution to the free vari- able occurring in this equation. The following ex- ample illustrates this. (Chomsky, 1976) observes that focused NPs pattern with quantified and wh-NPs with re- spect to pronominal anaphora: when the quanti- fied/wh/focused NP precedes and c-commands the pronoun, this pronoun yields an ambiguity between a co-referential and a bound-variable reading. 
This is illustrated in example (10) We only expected HIMi to claim that he~ was brilliant where the presence of the pronoun hei gives rise to two possible FSVs s FSV = {Ax.ex(x,y,i) l Y E wife} FSV = {Ax.ex(x,y,y) [ y E Wife} thus allowing two different readings: the corefen- tial or strict reading VP[P E {Ax.ex(x,y,i) I Y E Wife} A P(we) --+ P = Ax.ex(x, i, i)] and the bound-variable or sloppy reading. VP[P E {Ax.ex(x,y,y)) [ y E wife} ^ P(we) ~ P = Ax.ex(x, i, i))] In contrast, if the quantified/wh/focused NP does not precede and c-command the pronoun, as in (11) We only expected himi to claim that HEi was brilliant there is no ambiguity and the pronoun can only give rise to a co-referential interpretation. For in- stance, given (11) only one reading arises VP[P E {Ax.ex(x,i,y) l Y E Wife} A P(we) ~ P = Ax.ex(x, i, i)] where the FSV is {Ax.ex(x,i,y) l Y E wife}. To capture this data, Government and Binding analyses postulate first, that the antecedent is raised by quantifier raising and second, that pronouns that are c-commanded and preceded by their antecedent are represented either as a A-bound variable or as a constant whereas other pronouns can only be rep- resented by a constant (cf. e.g. (Kratzer, 1991)'s binding principle). Using HOCU, we can model this restriction directly. As before, the focus term is pf- and the FSV variable -~pf-coloured. Furthermore, we assume that pronouns that are preceded and c- commanded by a quantified/wh/focused antecedent are variable coloured whereas other pronouns are -~pf-coloured. Finally, all other terms are taken to 5We abbreviate exp( x, cl(y, blt( i) ) ) to ex( x, y, i) to in- crease legibility. be --pf-coloured. Given these assumptions, the rep- resentation for (10) is ex~o~(we~pf,ipf ,iA) and the corresponding FSV equation R~pf(ipf) -- )~x.eX~pf (x, ipf, in) has two possible solutions R~0f = )~y.)~x.ex~pf (x, y, i~0f) R~of = )~y.)~x.ex~of(x , y, x) In contrast, the representation for (11) is ex-.pf(We~of, i~0f, ipf) and the equation is R-~pf(ipf) = )~x.ex~pf(X, i~of , /0f ) with only one well-coloured solution R~0f = )~y.Ax.ex~of ( x , i~of , Y) Importantly, given the indicated colour con- straints, no other solutions are admissible. Intu- itively, there are two reasons for this. First, the definition of coloured substitutions ensures that the term assigned to R~0f is -~pf-monochrome. In par- ticular, this forces any occurrences of/of to appear as a bound variable in the value assigned to R~pf whereas in can appear either as i~0f (a colour vari- able unifies with any colour constant) or as a bound variable - this in effect models the sloppy/strict am- biguity. Second, a colour constant only unifies with itself. This in effect rules out the bound variable reading in (11): if the i~0f occurrence were to be- come a bound variable, the value of R~of would then Ay.)~x.ex~of(x, y, y) . But then by ~-reduction, R~of(ipf ) would be )~x.ex~of(x, iof,iof ) which does not unify with the right hand side of the original equation i.e ~x.ex.of(x , i-0f, i0f). For a more formal account of how the unifiers are calculated see section 6.1. 6 Calculating Coloured Unifiers Since the HOCU is the principal computational de- vice of the analysis in this paper, we will now try to give an intuition for the functioning of the algo- rithm. For a formal account including all details and proofs see (Hutter and Kohlhase, 1995). 
Just as in the case of unification for first-order terms, the algorithm is a process of recursive decom- position and variable elimination that transform sets of equations into solved forms. Since C-substitutions have two parts, a term- and a colour part, we need two kinds (M =t N for term equations and c =c d for colour equations). Sets g of equations in solved form (i.e. where all equations are of the form x = M such that the variable x does not occur anywhere else in M or g) have a unique most general C-unifier a~ that also C-unifies the initial equation. There are several rules that decompose the syntac- tic structure of formulae, we will only present two of them. The rule for abstractions transforms equa- tions of the form )~x.A =t )~y.B to [c/x]A =t [c/y]B, and Ax.A =t B to [c/x]A =t Bc where c is a new constant, which may not appear in any solution. The rule for applications decomposes ha(s1,... ,s n) =t hb(tl,...,t '~) to the set {a =c b, sl =t tl,...,s,~ =t tn}, provided that h is a constant. Furthermore equations are kept in 13~/-normal form. The variable elimination process for colour vari- ables is very simple, it allows to transform a set g U {A =c d} of equations to [d/A]g U {A =c d}, making the equation {A =c d} solved in the result. For the formula case, elimination is not that simple, since we have to ensure that la(XA)l = la(xs)l to obtain a C-substitution a. Thus we cannot simply transform a set gU{Xd =t M} into [M/Xd]EU{Xd __t M}, since this would (incorrectly) solve the equa- tions {Xc = fc,Xd = gd}. The correct variable elimination rule transforms $ U {Xd =t M} into a(g) U {Xd =1 M, xc, = M1,...,Xc~ =t Mn}, where ci are all colours of the variable x occurring in M and g, the M i are appropriately coloured variants (same colour erasure) of M, and a is the g-substitution that eliminates all occurrences of x from g. Due to the presence of function variables, sys- tematic application of these rules can terminate with equations of the form xc(sl,...,s n) =t hd(tl,...,tm). Such equations can neither be fur- ther decomposed, since this would loose unifiers (if G and F are variables, then Ga = Fb as a solution Ax.c for F and G, but {F = G,a = b} is unsolv- able), nor can the right hand side be substituted for x as in a variable elimination rule, since the types would clash. Let us consider the uncoloured equa- tion x(a) ~t a which has the solutions (Az.a) and (Az.z) for x. The standard solution for finding a complete set of solutions in this so-called flex/rigid situation is to substitute a term for x that will enable decompo- sition to be applicable afterwards. It turns out that for finding all g-unifiers it is sufficient to bind x to terms of the same type as x (otherwise the unifier would be ill-typed) and compatible colour (other- wise the unifier would not be a C-substitution) that either • have the same head as the right hand side; the so-called imitation solution (.kz.a in our exam- ple) or • where the head is a bound variable that enables the head of one of the arguments of x to become head; the so-called projection binding ()~z.z). In order to get a better understanding of the situ- ation let us reconsider our example using colours. z(a¢) -- ad. For the imitation solution (~z.ad) we "imitate" the right hand side, so the colour on a must be d. For the projection solution we instantiate ($z.z) for x and obtain ()kz.z)ac, which f~-reduces to ac. We see that this "lifts" the constant ac from the argument position to the top. 
Incidentally, the pro- jection is only a C-unifier of our coloured example, if c and d axe identical. Fortunately, the choice of instantiations can be further restricted to the most general terms in the categories above• If Xc has type f~n --+ c~ and hd has type ~ -~ a, then these so-called general bind- ings have the following form: G h = ~kzal... z a".hd(H~l (-5),..., Hem (-5)) where the H i are new variables of type f)-~ ~ Vi and the ei are either distinct colour variables (if c E CI)) or ei = d = c (ifc E C). If his one of the bound variables z ~' , then ~h is called an imitation bind- ing, and else, (h is a constant or a free variable), a projection binding• The general rule for flex/rigid equations trans- forms {Xc(Sl,...,s n) =t hd(tl,...,tm)} into {Xc(S 1 .... , s n) =t hal(t1,..., tin), Xc =t ~h}, which in essence only fixes a particular binding for the head variable Xc. It turns out (for details and proofs see (Hutter and Kohlhase, 1995)) that these general bindings suffice to solve all flex/rigid situations, pos- sibly at the cost of creating new flex/rigid situations after elimination of the variable Xc and decompo- sition of the changed equations (the elimination of x changes xc(sl,..., s n) to ~h(sl, ..., s n) which has head h). 6.1 Example To fortify our intuition on calculating higher-order coloured unifiers let us reconsider examples (10) and (11) with the equations R~pf(ipf) __t ~x.ex~pf(X, ipf, iA) R~pf(ipf) =t Ax.ex~pf(X, i-~pf, ipf) We will develop the derivation of the solutions for the first equations (10) and point out the differences for the second (11). As a first step, the first equation is decomposed to R~pf(ipf, c) :t ex~pf(C, ipf, iA) where c is a new constant• Since R~pf is a vari- able, we are in a flex/rigid situation and have the possibilities of projection and imitation. The pro- jection bindings Axy.x and )~xy.y for R~pf would lead us to the equations ipf =t eX~pf(C, ipf,iA) and c =t eX~pf (c, ipf, iA), which are obviously unsolvable, since the head constants ipf (and c resp.) and eX~pf 8 clash 6. So we can only bind R~pf to the imitation binding ~kyx•ex~pf(H~pf(y, x), H~2pf (y, x), H 3 (y, x)). Now, we can directly eliminate the variable R~pf, since there are no other variants. The resulting equa- tion eX~pf(Hlpf(ipf, c), H2pf (ipf, c), g 3 (ipf, c)) =t eX~pf (c, ipf, iA) can be decomposed to the equations (17) Hlpf(ipf,C) __t c H~pf(ipf, c) =t ipf g3pf(/pf, C) __--t iA Let us first look at the first equation; in this flex/rigid situation, only the projection binding )kzw.w can be applied, since the imitation binding Azw.c contains the forbidden constant c and the other projection leads to a clash. This solves the equation, since (Azw.w)(ipf,c) j3-reduces to c, giv- ing the trivial equation c __t c which can be deleted by the decomposition rules• Similarly, in the second equation, the projection binding Azw.z for H 2 solves the equation, while the second projection clashes and the imitation binding )kzw.ipf is not -~pf-monochrome. Thus we are left with the third equation, where both imitation and projection bindings yield legal solutions: • The imitation binding for H3pf is )kzw.i~pf, and not Azw.iA, as one is tempted to believe, since it has to be -~pf-monochrome. Thus we are left with i~pf =t iA, which can (uniquely) be solved by the colour substitution [-~pf/A]. • If we bind H 3 to Azw.z, then we are left with ~pf Zpf. _-t iA, which can (uniquely) be solved by the colour substitution [pf/A]. 
If we collect all instantiations, we arrive at exactly the two possible solutions for R~pf in the original equations, which we had claimed in section 5: R~pf = ~kyx.ex~pf(X, y, i~pf) R~pf = )kyx•ex~pf(X, y, x) Obviously both of them solve the equation and furthermore, none is more general than the other, since i~pf cannot be inserted for the variable x in the second unifier (which would make it more general than the first), since x is bound• In the case of (11) the equations corresponding 1 __t 2 " __t - and to (17) are H.~pf(e, ipf) - e, H~pf(e, Zpf) - ?,~pf H3pf(ipf) __t ipf. Given the discussion above, it is im- mediate to see that H 1 has to be instantiated with -~pf the projection binding ~kzw.w, H 2 with the imitation 6For (11) we have the same situation• Here the cor- • t responding equation is tpf -- ex~pf(C, i~pf, ipf). binding Azw.i~of, since the projection binding leads to a colour clash (i~f =t ipf) and finally H~pf has to be bound to the projection binding Azw.z, since the imitation binding Azw.ipf is not -~pf-monochrome. Collecting the bindings, we arrive at the unique so- lution R~f = Ayx.ex~pf(x, i~pf, x). 7 Conclusion Higher-Order Unification has been shown to be a powerful tool for constructing the interpretation of NL. In this paper, we have argued that Higher- Order Coloured Unification allows a precise speci- fication of the interface between semantic interpre- tation and other sources of linguistic information, thus preventing over-generation. We have substan- tiated this claim by specifying the linguistic, extra- semantic constraints regulating the interpretation of VP-ellipsis, focus, SOEs, adverbial quantification and pronouns whose antecedent is a focused NP. Other phenomena for which the HOCU approach seems particularly promising are phenomena in which the semantic interpretation process is obvi- ously constrained by the other sources of linguistic information. In particular, it would be interesting to see whether coloured unification can appropriately model the complex interaction of constraints govern- ing the interpretation and acceptability of gapping on the one hand, and sloppy/strict ambiguity on the other. Another interesting research direction would be the development and implementation of a monos- tratal grammar for anaphors whose interpretation are determined by coloured unification. Colours are tags which decorate a semantic representation thereby constraining the unification process; on the other hand, there are also the reflex of linguistic, non-semantic (e.g. syntactic or prosodic) informa- tion. A full grammar implementation would make this connection more precise. 8 Acknowledgements The work reported in this paper was funded by the Deutsche Forschungsgemeinschaft (DFG) in Sonder- forschungsbereich SFB-378, Project C2 (LISA). References Christine Bartels. 1995. Second occurrence test. Ms. Noam Chomsky. 1976. Conditions on rules in gram- mar. Linguistic Analysis, 2(4):303-351. Mary Dalrymple, Stuart Shieber, and Fernando Pereira. 1991. Ellipsis and higher-order- unification. Linguistics and Philosophy, 14:399- 452. Daniel Dougherty. 1993. Higher-order unification using combinators. Theoretical Computer Science B, 114(2):273-298. Gilles Dowek. 1992. Third order matching is decid- able. In Proc. LICS92, pages 2-10. IEEE Com- puter Society Press. Claire Gardent and Michael Kohlhase. 1996. Focus and higher-order unification. In Proe. COLING96 forthcoming. Claire Gardent, Michael Kohlhase, and Noor van Leusen. 1996. 
Corrections and higher-order unifi- cation. CLAUS report 77, University of Saarland. Claire Gardent. 1996. Anaphores parall~les et tech- niques de r~solution. Langages. G@rard Huet. 1975. A unification algorithm for typed A-calculus. Theoretical Computer Science 1, pages 27-57. Ulrich Hustadt. 1991. A complete transformation system for polymorphic higher-order unification. Technical Report MPI-I-91-228, MPI Informatik, Saarbriicken, Germany. Dieter Hutter and Michael Kohlhase. 1995. A coloured version of the A-calculus. SEKI-Report SR-95-05, Universit/it des Saarlandes. Ray S. Jackendoff. 1972. Semantic Interpretation in Generative Grammar. The MIT Press. Angelika Kratzer. 1991. The representation of fo- cus. In Arnim van Stechow and Dieter Wunder- lich, editors, Semantik: Ein internationales Hand- buch der zeitgenoessischen Forschung. Berlin: Walter de Gruyter. Manfred Pinkal. 1995. Radical underspecification. In The lOth Amsterdam Colloquium. Christian Prehofer. 1994. Decidable higher-order unification problems. In Alan Bundy, editor, Proc. CADE94, LNAI, pages 635-649, Nancy, France. Steve G. Pulman. 1995. Higher-order unification and the interpretation of focus. Paper submitted for publication. Kai von Fintel. 1995. A minimal theory of adver- bial quantification. Unpublished draft Ms. MIT, Cambridge, March. 9 | 1996 | 1 |
Combining Trigram-based and Feature-based Methods for Context-Sensitive Spelling Correction Andrew R. Golding and Yves Schabes Mitsubishi Electric Research Laboratories 201 Broadway Cambridge, MA 02139 {golding, schabes}@merl, com Abstract This paper addresses the problem of cor- recting spelling errors that result in valid, though unintended words (such as peace and piece, or quiet and quite) and also the problem of correcting particular word usage errors (such as amount and num- ber, or among and between). Such cor- rections require contextual information and are not handled by conventional spelling programs such as Unix spell. First, we introduce a method called Trigrams that uses part-of-speech trigrams to encode the context. This method uses a small num- ber of parameters compared to previous methods based on word trigrams. How- ever, it is effectively unable to distinguish among words that have the same part of speech. For this case, an alternative feature-based method called Bayes per- forms better; but Bayes is less effective than Trigrams when the distinction among words depends on syntactic constraints. A hybrid method called Tribayes is then in- troduced that combines the best of the pre- vious two methods. The improvement in performance of Tribayes over its compo- nents is verified experimentally. Tribayes is also compared with the grammar checker in Microsoft Word, and is found to have sub- stantially higher performance. 1 Introduction Spelling correction has become a very common tech- nology and is often not perceived as a problem where progress can be made. However, conventional spelling checkers, such as Unix spell, are concerned only with spelling errors that result in words that cannot be found in a word list of a given language. One analysis has shown that up to 15% of spelling errors that result from elementary typographical er- rors (character insertion, deletion, or transposition) yield another valid word in the language (Peterson, 1986). These errors remain undetected by tradi- tional spelling checkers. In addition to typographical errors, words that can be easily confused with each other (for instance, the homophones peace and piece) also remain undetected. Recent studies of actual ob- served spelling errors have estimated that overall, errors resulting in valid words account for anywhere from 25% to over 50% of the errors, depending on the application (Kukich, 1992). We will use the term context-sensitive spelling cor- rection to refer to the task of fixing spelling errors that result in valid words, such as: (1) * Can I have a peace of cake? where peace was typed when piece was intended. The task will be cast as one of lexical disambigua- tion: we are given a predefined collection of confu- sion sets, such as {peace,piece}, {than, then}, etc., which circumscribe the space of spelling errors to look for. A confusion set means that each word in the set could mistakenly be typed when another word in the set was intended. The task is to predict, given an occurrence of a word in one of the confusion sets, which word in the set was actually intended. Previous work on context-sensitive spelling cor- rection and related lexical disambiguation tasks has its limitations. Word-trigram methods (Mays, Dam- erau, and Mercer, 1991) require an extremely large body of text to train the word-trigram model; even with extensive training sets, the problem of sparse data is often acute. In addition, huge word-trigram tables need to be available at run time. 
More- over, word trigrams are ineffective at capturing long- distance properties such as discourse topic and tense. Feature-based approaches, such as Bayesian clas- sifters (Gale, Church, and Yarowsky, 1993), deci- sion lists (Yarowsky, 1994), and Bayesian hybrids (Golding, 1995), have had varying degrees of suc- cess for the problem of context-sensitive spelling correction. However, we report experiments that show that these methods are of limited effective- ness for cases such as {their, there, they're} and {than, then}, where the predominant distinction to be made among the words is syntactic. 71 Confusion set Train Test Most freq. Base their, there, they're 3265 850 than, then 2096 514 its, it's 1364 366 your, you're 750 187 begin, being 559 146 passed, past 307 74 quiet, quite 264 66 weather, whether 239 61 accept, except 173 50 lead, led 173 49 cite, sight, site 115 34 principal, principle 147 34 raise, rise 98 39 affect, effect 178 49 peace, piece 203 50 country, county 268 62 amount, number 460 123 among, between 764 186 their 56.8 than 63.4 its 91.3 your 89.3 being 93.2 past 68.9 quite 83.3 whether 86.9 except 70.0 led 46.9 sight 64.7 principle 58.8 rise 64.1 effect 91.8 peace 44.0 country 91.9 number 71.5 between 71.5 Table 1: Performance of the baseline method for 18 confusion sets. "Train" and "Test" give the number of occurrences of any word in the confusion set in the training and test corpora. "Most freq." is the word in the confusion set that occurred most often in the training corpus. "Base" is the percentage of correct predictions of the baseline system on the test corpus. In this paper, we first introduce a method called Trigrams that uses part-of-speech trigrams to en- code the context. This method greatly reduces the number of parameters compared to known methods, which are based on word trigrams. This method also has the advantage that training can be done once and for all, and quite manageably, for all con- fusion sets; new confusion sets can be added later without any additional training. This feature makes Trigrams a very easily expandable system. Empirical evaluation of the trigram method demonstrates that it performs well when the words to be discriminated have different parts of speech, but poorly when they have the same part of speech. In the latter case, it is reduced to simply guessing whichever word in the confusion set is the most com- mon representative of its part-of-speech class. We consider an alternative method, Bayes, a Bayesian hybrid method (Golding, 1995), for the case where the words have the same part of speech. We confirm experimentally that Bayes and Trigrams have complementary performance, Trigrams being better when the words in the confusion set have dif- ferent parts of speech, and Bayes being better when they have the same part of speech. We introduce a hybrid method, Tribayes, that exploits this com- plementarity by invoking each method when it is strongest. Tribayes achieves the best accuracy of the methods under consideration in all situations. To evaluate the performance of Tribayes with re- spect to an external standard, we compare it to the grammar checker in Microsoft Word. Tribayes is found to have substantially higher performance. This paper is organized as follows: first we present the methodology used in the experiments. We then discuss the methods mentioned above, interleaved with experimental results. The comparison with Mi- crosoft Word is then presented. The final section concludes. 
2 Methodology

Each method will be described in terms of its operation on a single confusion set C = {w1, ..., wn}; that is, we will say how the method disambiguates occurrences of words w1 through wn. The methods handle multiple confusion sets by applying the same technique to each confusion set independently.

Each method involves a training phase and a test phase. We trained each method on 80% (randomly selected) of the Brown corpus (Kučera and Francis, 1967) and tested it on the remaining 20%. All methods were run on a collection of 18 confusion sets, which were largely taken from the list of "Words Commonly Confused" in the back of Random House (Flexner, 1983). The confusion sets were selected on the basis of being frequently-occurring in Brown, and representing a variety of types of errors, including homophone confusions (e.g., {peace, piece}) and grammatical mistakes (e.g., {among, between}). A few confusion sets not in Random House were added, representing typographical errors (e.g., {begin, being}). The confusion sets appear in Table 1.

3 Baseline

As an indicator of the difficulty of the task, we compared each of the methods to the method which ignores the context in which the word occurred, and just guesses based on the priors. Table 1 shows the performance of the baseline method for the 18 confusion sets.

4 Trigrams

Mays, Damerau, and Mercer (1991) proposed a word-trigram method for context-sensitive spelling correction based on the noisy channel model. Since this method is based on word trigrams, it requires an enormous training corpus to fit all of these parameters accurately; in addition, at run time it requires extensive system resources to store and manipulate the resulting huge word-trigram table.

In contrast, the method proposed here uses part-of-speech trigrams. Given a target occurrence of a word to correct, it substitutes in turn each word in the confusion set into the sentence. For each substitution, it calculates the probability of the resulting sentence. It selects as its answer the word that gives the highest probability.

More precisely, assume that the word wk occurs in a sentence W = w1 ... wk ... wn, and that w'k is a word we are considering substituting for it, yielding sentence W'. Word w'k is then preferred over wk iff P(W') > P(W), where P(W) and P(W') are the probabilities of sentences W and W' respectively.¹ We calculate P(W) using the tag sequence of W as an intermediate quantity, and summing, over all possible tag sequences, the probability of the sentence with that tagging; that is:

    P(W) = Σ_T P(W, T)

where T is a tag sequence for sentence W.

¹To enable fair comparisons between sequences of different length (as when considering maybe and may be), we actually compare the per-word geometric mean of the sentence probabilities. Otherwise, the shorter sequence will usually be preferred, as shorter sequences tend to have higher probabilities than longer ones.

The above probabilities are estimated as is traditionally done in trigram-based part-of-speech tagging (Church, 1988; DeRose, 1988):

    P(W, T) = P(W | T) P(T)                                        (1)
            = Π_i P(w_i | t_i)  Π_i P(t_i | t_{i-2} t_{i-1})        (2)

where T = t1 ... tn, and P(t_i | t_{i-2} t_{i-1}) is the probability of seeing a part-of-speech tag t_i given the two preceding part-of-speech tags t_{i-2} and t_{i-1}. Equations 1 and 2 will also be used to tag sentences W and W' with their most likely part-of-speech sequences. This will allow us to determine the tag that
would be assigned to each word in the confusion set when substituted into the target sentence. Table 2 gives the results of the trigram method (as well as the Bayesian method of the next section) for the 18 confusion sets. 2 The results are broken down into two cases: "Different tags" and "Same tags". A target occurrence is put in the latter iff all words in the confusion set would have the same tag when substituted into the target sentence. In the "Different tags" condition, Trigrams generally does well, outscoring Bayes for all but 3 confusion sets -- and in each of these cases, making no more than 3 errors more than Bayes. In the "Same tags" condition, however, Trigrams performs only as well as Baseline. This follows from Equations 1 and 2: when comparing P(W) and P(WI), the dominant term corresponds to the most likely tagging; and in this term, if the target word wk and its substitute w~ have the same tag t, then the comparison amounts to comparing P(wk [/) and P(w~lt ). In other words, the decision reduces to which of the two words, Wk and w~, is the more common representative of part-of-speech class t. 3 5 Bayes The previous section showed that the part-of-speech trigram method works well when the words in the confusion set have different parts of speech, but es- sentially cannot distinguish among the words if they have the same part of speech. In this case, a more effective approach is to learn features that char- acterize the different contexts in which each word tends to occur. A number of feature-based methods have been proposed, including Bayesian classifiers (Gale, Church, and Yarowsky, 1993), decision lists (Yarowsky, 1994), Bayesian hybrids (Golding, 1995), and, more recently, a method based on the Winnow multiplicative weight-updating algorithm (Golding and Roth, 1996). We adopt the Bayesian hybrid method, which we will call Bayes, having experi- mented with each of the methods and found Bayes to be among the best-performing for the task at hand. This method has been described elsewhere (Golding, 1995) and so will only be briefly reviewed here; how- ever, the version used here uses an improved smooth- ing technique, which is mentioned briefly below. ~In the experiments reported here, the trigram method was run using the tag inventory derived from the Brown corpus, except that a handful of common func- tion words were tagged as themselves, namely: except, than, then, to, too, and whether. 3 In a few cases, however, Trig'rams does not get ex- actly the same score as Baseline. This can happen when the words in the confusion set have more than one tag in common; e.g., for (affect, effect}, the words can both be norms or verbs. Trigrams may then choose differ- ently when the words are tagged as nouns versus verbs, whereas Baseline makes the same choice in all cases. 
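The scoring procedure of Equations 1 and 2 can be sketched as follows. The code is only a minimal illustration: lex_prob, trans_prob, and tagsets stand in for the probabilities and tag dictionaries that the paper estimates from the Brown corpus, and smoothing and sentence-boundary handling are glossed over.

    def sentence_prob(words, tagsets, lex_prob, trans_prob):
        # Sum of P(W, T) over all tag sequences T, where
        # P(W, T) = prod_i P(w_i | t_i) * prod_i P(t_i | t_{i-2}, t_{i-1})
        # (Equations 1 and 2), computed by a forward pass whose state is the
        # pair of the two most recent tags.
        BOUNDARY = "<s>"
        forward = {(BOUNDARY, BOUNDARY): 1.0}
        for w in words:
            successor = {}
            for (t2, t1), p in forward.items():
                for t in tagsets.get(w, ()):
                    q = p * trans_prob.get((t2, t1, t), 0.0) * lex_prob.get((w, t), 0.0)
                    if q > 0.0:
                        successor[(t1, t)] = successor.get((t1, t), 0.0) + q
            forward = successor
        return sum(forward.values())

    def trigram_choice(words, i, confusion_set, tagsets, lex_prob, trans_prob):
        # Substitute each candidate at position i and keep the one whose
        # sentence receives the highest per-word (geometric-mean) probability,
        # as in footnote 1.
        def score(candidate):
            trial = words[:i] + [candidate] + words[i + 1:]
            p = sentence_prob(trial, tagsets, lex_prob, trans_prob)
            return p ** (1.0 / len(trial)) if p > 0.0 else 0.0
        return max(confusion_set, key=score)

Because the forward pass carries only the two most recent tags, the sum over all tag sequences is computed without enumerating them explicitly.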
73 Confusion set their, there, they're than, then its, it's your, you're begin, being passed, past quiet, quite weather, whether accept, except lead, led cite, sight, site principal, principle raise, rise affect, effect peace, piece country, county amount, number among, between Break- down I00 100 100 100 100 I00 100 100 100 I00 I00 29 8 6 2 0 0 0 Different tags System scores Base T B 56.8 97.6 94.4 63.4 94.9 93.2 91.3 98.1 95.9 89.3 98.9 89.8 93.2 97.3 91.8 68.9 95.9 89.2 83.3 95.5 89.4 86.9 93.4 96.7 70.0 82.0 88.0 46.9 83.7 79.6 64.7 70.6 73.5 0.0 100.0 70.0 100.0 100.0 100.0 100.0 100.0 66.7 0.0 100.0 100.0 Break- down 0 0 0 .0 0 0 0 0 0 0 0 71 92 94 98 100 100 100 Same tags System scores Base T B 83.3 83.3 91.7 61.1 61.1 72.2 91.3 93.5 97.8 44.9 42.9 89.8 91.9 91.9 85.5 71.5 73.2 82.9 71.5 71.5 75.3 Table 2: Performance of the component methods, Baseline (Base), Trigrams (T), and Bayes (B). System scores are given as percentages of correct predictions. The results are broken down by whether or not all words in the confusion set would have the same tagging when substituted into the target sentence. The "Breakdown" columns show the percentage of examples that fall under each condition. Bayes uses two types of features: context words and collocations. Context-word features test for the presence of a particular word within +k words of the target word; collocations test for a pattern of up to ~ contiguous words and/or part-of-speech tags around the target word. Examples for the confusion set {dairy, diary} include: (2) milk within +10 words (3) in POSS-DET where (2) is a context-word feature that tends to im- ply dairy, while (3) is a collocation implying diary. Feature (3) includes the tag POSS-I)ET for possessive determiners (his, her, etc.), and matches, for exam- ple, the sequence in his 4 in: (4) He made an entry in his diary. Bayes learns these features from a training corpus of correct text. Each time a word in the confusion set occurs in the corpus, Bayes proposes every fea- ture that matches the context -- one context-word feature for every distinct word within +k words of the target word, and one collocation for every way of 4A tag is taken to match a word in the sentence iff the tag is a member of the word's set of possible part-of- speech tags. Tag sets are used, rather than actual tags, because it is in general impossible to tag the sentence uniquely at spelling-correction time, as the identity of the target word has not yet been established. expressing a pattern of up to ~ contiguous elements. After working through the whole training corpus, Bayes collects and returns the set of features pro- posed. Pruning criteria may be applied at this point to eliminate features that are based on insufficient data, or that are ineffective at discriminating among the words in the confusion set. At run time, Bayes uses the features learned dur- ing training to correct the spelling of target words. Let jr be the set of features that match a particu- lar target occurrence. Suppose for a moment that we were applying a naive Bayesian approach. We would then calculate the probability that each word wi in the confusion set is the correct identity of the target word, given that we have observed features 9 r, using Bayes' rule with the independence assumption: P(w,l~') = P(flw,) P(5) where each probability on the right-hand side is cal- culated by a maximum-likelihood estimate (MLE) over the training set. We would then pick as our an- swer the wi with the highest P(wiI.T" ). 
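A much-simplified sketch of the feature extraction and naive Bayesian scoring is given below; the window size, the particular collocation patterns, and the probability floor are illustrative assumptions, and the dependency-resolution and smoothing refinements described in the next paragraph are omitted.

    def extract_features(words, tags, i, k=10):
        # Context-word features: any word within +/- k positions of the target.
        # Collocation features: short patterns of adjacent tags, standing in for
        # the word-or-tag patterns of up to two elements used by Bayes.
        feats = set()
        for j in range(max(0, i - k), min(len(words), i + k + 1)):
            if j != i:
                feats.add(("context", words[j]))
        if i > 0:
            feats.add(("left-tag", tags[i - 1]))
        if i + 1 < len(words):
            feats.add(("right-tag", tags[i + 1]))
        if i > 1:
            feats.add(("left-tags", tags[i - 2], tags[i - 1]))
        return feats

    def bayes_choice(words, tags, i, confusion_set, prior, feat_prob):
        # Naive-Bayes-style decision: argmax_w P(w) * prod_f P(f | w).
        # The real method interpolates P(f | w) with P(f) rather than using
        # the crude probability floor below.
        feats = extract_features(words, tags, i)
        def score(w):
            p = prior[w]
            for f in feats:
                p *= feat_prob.get((f, w), 1e-6)
            return p
        return max(confusion_set, key=score)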
The method presented here differs from the naive approach in two respects: first, it does not assume independence among features, but rather has heuristics for de- tecting strong dependencies, and resolving them by deleting features until it is left with a reduced set .T "~ 74 of (relatively) independent features, which are then used in place of ~" in the formula above. Second, to estimate the P(flwi) terms, rather than using a simple MLE, it performs smoothing by interpolat- ing between the MLE of P(flwi) and the MLE of the unigram probability, P(f). These enhancements greatly improve the performance of Bayes over the naive Bayesian approach. The results of Bayes are shown in Table 2. 5 Gener- ally speaking, Bayes does worse than Trigrams when the words in the confusion set have different parts of speech. The reason is that, in such cases, the pre- dominant distinction to be made among the words is syntactic; and the trigram method, which brings to bear part-of-speech knowledge for the whole sen- tence, is better equipped to make this distinction than Bayes, which only tests up to two syntactic el- ements in its collocations. Moreover, Bayes' use of context-word features is arguably misguided here, as context words pick up differences in topic and tense, which are irrelevant here, and in fact tend to degrade performance by detecting spurious differences. In a few cases, such as {begin, being}, this effect is enough to drive Bayes slightly below Baseline. 6 For the condition where the words have the same part of speech, Table 2 shows that Bayes almost al- ways does better than Trigrams. This is because, as discussed above, Trigrams is essentially acting like Baseline in this condition. Bayes, on the other hand, learns features that allow it to discriminate among the particular words at issue, regardless of their part of speech. The one exception is {country, county}, for which Bayes scores somewhat below Baseline. This is another case in which context words actu- ally hurt Bayes, as running it without context words again improved its performance to the Baseline level. 6 Tribayes The previous sections demonstrated the complemen- tarity between Trigrams and Bayes: Trigrams works best when the words in the confusion set do not all have the same part of speech, while Bayes works best when they do. This complementarity leads directly to a hybrid method, Tribayes, that gets the best of each. It applies Trigrams first; in the process, it as- certains whether all the words in the confusion set would have the same tag when substituted into the 5For the experiments reported here, Bayes was con- figured as follows: k (the half-width of the window of context words) was set to 10; £ (the maximum length of a collocation) was set to 2; feature strength was measured using the reliability metric; pruning of collocations at training time was enabled; and pruning of context words was minimal -- context words were pruned only if they had fewer than 2 occurrences or non-occurrences. eWe confirmed this by running Bayes without context words (i.e., with collocations only). Its performance was then always at or above Baseline. 75 target sentence. If they do not, it accepts the answer provided by Trigrams; if they do, it applies Bayes. Two points about the application of Bayes in the hybrid method: first, Bayes is now being asked to distinguish among words only when they have the same part of speech. It should be trained accord- ingly -- that is, only on examples where the words have the same part of speech. 
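The routing logic of the hybrid is small enough to state directly. In this sketch, predicted_tag, trigram_choice, and bayes_choice are placeholders for the components described above, and the details of tagging the candidate substitutions are glossed over.

    def tribayes(words, tags, i, confusion_set,
                 predicted_tag, trigram_choice, bayes_choice):
        # Tag that each candidate would receive if substituted at position i.
        candidate_tags = {predicted_tag(words, i, w) for w in confusion_set}
        if len(candidate_tags) > 1:
            # The candidates differ syntactically: the tag-trigram model decides.
            return trigram_choice(words, i, confusion_set)
        # All candidates share a part of speech: defer to the feature-based
        # model, which is trained only on such same-tag examples.
        return bayes_choice(words, tags, i, confusion_set)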
The Bayes component of the hybrid will therefore be trained on a subset of the examples that would be used for training the stand-alone version of Bayes. The second point about Bayes is that, like Tri- grams, it sometimes makes uninformed decisions -- decisions based only on the priors. For Bayes, this happens when none of its features matches the target occurrence. Since, for now, we do not have a good "third-string" algorithm to call when both Trigrams and Bayes fall by the wayside, we content ourselves with the guess made by Bayes in such situations. Table 3 shows the performance of Tribayes com- pared to its components. In the "Different tags" con- dition, Tribayes invokes Trigrams, and thus scores identically. In the "Same tags" condition, Tribayes invokes Bayes. It does not necessarily score the same, however, because, as mentioned above, it is trained on a subset of the examples that stand-alone Bayes is trained on. This can lead to higher or lower performance -- higher because the training exam- ples are more homogeneous (representing only cases where the words have the same part of speech); lower because there may not be enough training examples to learn from. Both effects show up in Table 3. Table 4 summarizes the overall performance of all methods discussed. It can be seen that Trigrams and Bayes each have their strong points. Tribayes, however, achieves the maximum of their scores, by and large, the exceptions being due to cases where one method or the other had an unexpectedly low score (discussed in Sections 4 and 5). The confusion set {raise, rise} demonstrates (albeit modestly) the ability of the hybrid to outscore both of its compo- nents, by putting together the performance of the better component for both conditions. 7 Comparison with Microsoft Word The previous section evaluated the performance of Tribayes with respect to its components, and showed that it got the best of both. In this section, we calibrate this overall performance by compar- ing Tribayes with Microsoft Word (version 7.0), a widely used word-processing system whose grammar checker represents the state of the art in commercial context-sensitive spelling correction. Unfortunately we cannot evaluate Word using "prediction accuracy" (as we did above), as we do not always have access to the system's predictions -- sometimes it suppresses its predictions in an effort to filter out the bad ones. Instead, in this section Confusion set Different tags Same tags Break- System scores Break- System scores down T TB down B TB their, there, they're 100 97.6 97.6 0 than, then 100 94.9 94.9 0 its, it's 100 98.1 98.1 0 your, you're 100 98.9 98.9 0 begin, being 100 97.3 97.3 0 passed, past 100 95.9 95.9 0 quiet, quite 100 95.5 95.5 0 weather, whether 100 93.4 93.4 0 accept, except 100 82.0 82.0 0 lead, led 100 83.7 83.7 0 cite, sight, site 100 70.6 70.6 0 principal, principle 29 100.0 100.0 71 91.7 83.3 raise, rise 8 100.0 100.0 92 72.2 75.0 affect, effect 6 100.0 100.0 94 97.8 95.7 peace, piece 2 100.0 100.0 98 89.8 89.8 country, county 0 100 85.5 85.5 amount, number 0 100 82.9 82.9 among, between 0 100 75.3 75.3 Table 3: Performance of the hybrid method, Tribayes (TB), as compared with Trigrams (T) and Bayes (B). System scores are given as percentages of correct predictions. The results are broken down by whether or not all words in the confusion set would have the same tagging when substituted into the target sentence. The "Breakdown" columns give the percentage of examples under each condition. 
Confusion set System scores Base T B TB their, there, they're than, then its, it's your, you're begin, being passed, past quiet, quite weather, whether accept, except lead, led cite, sight, site principal, principle raise, rise affect, effect peace, piece country, county amount, number among, between 56.8 97.6 94.4 97.6 63.4 94.9 93.2 94.9 91.3 98.1 95.9 98.1 89.3 98.9 89.8 98.9 93.2 97.3 91.8 97.3 68.9 95.9 89.2 95.9 83.3 95.5 89.4 95.5 86.9 93.4 96.7 93.4 70.0 82.0 88.0 82.0 46.9 83.7 79.6 83.7 64.7 70.6 73.5 70.6 58.8 88.2 85.3 88.2 64.1 64.1 74.4 76.9 91.8 93.9 95.9 95.9 44.0 44.0 90.0 90.0 91.9 91.9 85.5 85.5 71.5 73.2 82.9 82.9 71.5 71.5 75.3 75.3 Table 4: Overall performance of all methods: Baseline (Base), Trigrams System scores are given as percentages of correct predictions. (T), Bayes (B), and Tribayes (TB). 76 we will use two parameters to evaluate system per- formance: system accuracy when tested on correct usages of words, and system accuracy on incorrect usages. Together, these two parameters give a com- plete picture of system performance: the score on correct usages measures the system's rate of false negative errors (changing a right word to a wrong one), while the score on incorrect usages measures false positives (failing to change a wrong word to a right one). We will not attempt to combine these two parameters into a single measure of system "good- ness", as the appropriate combination varies for dif- ferent users, depending on the user's typing accuracy and tolerance of false negatives and positives. The test sets for the correct condition are the same ones used earlier, based on 20% of the Brown corpus. The test sets for the incorrect condition were gener- ated by corrupting the correct test sets; in particu- lar, each correct occurrence of a word in the confu- sion set was replaced, in turn, with each other word in the confusion set, yielding n - 1 incorrect occur- rences for each correct occurrence (where n is the size of the confusion set). We will also refer to the incorrect condition as the corrupted condition. To run Microsoft Word on a particular test set, we started by disabling error checking for all error types except those needed for the confusion set at issue. This was done to avoid confounding effects. For {their, there, they're}, for instance, we enabled "word usage" errors (which include substitutions of their for there, etc.), but we disabled "contractions" (which include replacing they're with they are). We then invoked the grammar checker, accepting every suggestion offered. Sometimes errors were pointed out but no correction given; in such cases, we skipped over the error. Sometimes the suggestions led to an infinite loop, as with the sentence: (5) Be sure it's out when you leave. where the system alternately suggested replacing it's with its and vice versa. In such cases, we accepted the first suggestion, and then moved on. Unlike Word, Tribayes, as presented above, is purely a predictive system, and never suppresses its suggestions. This is somewhat of a handicap in the comparison, as Word can achieve higher scores in the correct condition by suppressing its weaker sugges- tions (albeit at the cost of lowering its scores in the corrupted condition). To put Tribayes on an equal footing, we added a postprocessing step in which it uses thresholds to decide whether to suppress its sug- gestions. 
A suggestion is allowed to go through iff the ratio of the probability of the word being sug- gested to the probability of the word that appeared originally in the sentence is above a threshold. The probability associated with each word is the per- word sentence probability in the case of Trigrams, or the conditional probability P(wi[~) in the case of Bayes. The thresholds are set in a preprocessing 77 phase based on the training set (80% of Brown, in our case). A single tunable parameter controls how steeply the thresholds are set; for the study here, this parameter was set to the middle of its useful range, providing a fairly neutral balance between reducing false negatives and increasing false positives. The results of Word and Tribayes for the 18 confu- sion sets appear in Table 5. Six of the confusion sets (marked with asterisks in the table) are not handled by Word; Word's scores in these cases are 100% for the correct condition and 0% for the corrupted con- dition, which are the scores one gets by never mak- ing a suggestion. The opposite behavior -- always suggesting a different word -- would result in scores of 0% and 100% (for a confusion set of size 2). Al- though this behavior is never observed in its extreme form, it is a good approximation of Word's behavior in a few cases, such as {principal, principle}, where it scores 12% and 94%. In general, Word achieves a high score in either the correct or the corrupted condition, but not both at once. Tribayes compares quite favorably with Word in this experiment. In both the correct and corrupted conditions, Tribayes' scores are mostly higher (often by a wide margin) or the same as Word's; in the cases where they are lower in one condition, they are almost always considerably higher in the other. The one exception is {raise, rise}, where Tribayes and Word score about the same in both conditions. 8 Conclusion Spelling errors that result in valid, though unin- tended words, have been found to be very common in the production of text. Such errors were thought to be too difficult to handle and remain undetected in conventional spelling checkers. This paper in- troduced Trigrams, a part-of-speech trigram-based method, that improved on previous trigram meth- ods, which were word-based, by greatly reducing the number of parameters. The method was sup- plemented by Bayes, a method that uses context features to discriminate among the words in the confusion set. Trigrams and Bayes were shown to have complementary strengths. A hybrid method, Tribayes, was then introduced to exploit this com- plementarity by applying Trigrams when the words in the confusion set do not have the same part of speech, and Bayes when they do. Tribayes thereby gets the best of both methods, as was confirmed ex- perimentally. Tribayes was also compared with the grammar checker in Microsoft Word, and was found to have substantially higher performance. Tribayes is being used as part of a grammar- checking system we are currently developing. We are presently working on elaborating the system's threshold model; scaling up the number of confusion sets that can be handled efficiently; and acquiring confusion sets (or confusion matrices) automatically. 
Confusion set Tribayes Microsoft Word Correct Corrupted Correct Corrupted their, there, they're than, then its, it's your, you're begin, being passed, past quiet, quite weather, whether accept, except lead, led cite, sight, site principal, principle rMse, rise affect, effect peace, piece country, county amount, number among, between 99.4 87.6 97.9 85.8 99.5 92.1 98.9 98.4 100.0 84.2 100.0 92.4 100.0 72.7 100.0 65.6 90.0 70.0 87.8 81.6 100.0 35.3 94.1 73.5 92.3 48.7 98.0 93.9 96.0 74.0 90.3 80.6 91.9 68.3 88.7 54.8 98.8 59.8 100.0 22.2 96.2 73.0 98.9 79.1 100.0 * 0.0 * 37.8 86.5 100.0 * 0.0 * 100.0 * 0.0 * 74.0 36.0 100.0 * 0.0 * 17.6 66.2 11.8 94.1 92.3 51.3 100.0 77.6 36.0 88.0 100.0 * 0.0 * 100.0 * 0.0 * 97.8 0.0 Table 5: Comparison of Tribayes with Microsoft Word. System scores are given for two test sets, one con- taining correct usages, and the other containing incorrect (corrupted) usages. Scores are given as percentages of correct answers. Asterisks mark confusion sets that are not handled by Microsoft Word. References Church, Kenneth Ward. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In Second Conference on Applied Natural Language Processing, pages 136-143, Austin, TX. DeRose, S.J. 1988. Grammatical category disam- biguation by statistical optimization. Computa- tional Linguistics, 14:31-39. Flexner, S. B., editor. 1983. Random House Unabridged Dictionary. Random House, New York. Second edition. Gale, William A., Kenneth W. Church, and David Yarowsky. 1993. A method for disambiguating word senses in a large corpus. Computers and the Humanities, 26:415-439. Golding, Andrew P~. and Dan Roth. 1996. Apply- ing Winnow to context-sensitive spelling correc- tion. In Lorenza Saitta, editor, Machine Learning: Proceedings of the 13th International Conference, Bari, Italy. To appear. Golding, Andrew R. 1995. A Bayesian hybrid method for context-sensitive spelling correction. In Proceedings of the Third Workshop on Very Large Corpora, pages 39-53, Boston, MA. Kukich, Karen. 1992. Techniques for automaticMly correcting words in text. ACM Computing Sur- veys, 24(4):377-439, December. 78 Kuaera, H. and W. N. Francis. 1967. Computa- tional Analysis of Present-Day American English. Brown University Press, Providence, RI. Mays, Eric, bred J. Damerau, and Robert L. Mercer. 1991. Context based spelling correction. Informa- tion Processing and Management, 27(5):517-522. Peterson, James L. 1986. A note on undetected typing errors. Communications of the ACM, 29(7):633-637, July. Yarowsky, David. 1994. Decision lists for lexi- cal ambiguity resolution: Application to accent restoration in Spanish and French. In Proceedings of the 32nd Annual Meeting of the Associa- tion for Computational Linguistics, pages 88-95, Las Cruces, NM. | 1996 | 10 |
Efficient Normal-Form Parsing for Combinatory Categorial Grammar* Jason Eisner Dept. of Computer and Information Science University of Pennsylvania 200 S. 33rd St., Philadelphia, PA 19104-6389, USA j eisner@linc, cis. upenn, edu Abstract Under categorial grammars that have pow- erful rules like composition, a simple n-word sentence can have exponentially many parses. Generating all parses is ineffi- cient and obscures whatever true semantic ambiguities are in the input. This paper addresses the problem for a fairly general form of Combinatory Categorial Grammar, by means of an efficient, correct, and easy to implement normal-form parsing tech- nique. The parser is proved to find ex- actly one parse in each semantic equiv- alence class of allowable parses; that is, spurious ambiguity (as carefully defined) is shown to be both safely and completely eliminated. 1 Introduction Combinatory Categorial Grammar (Steedman, 1990), like other "flexible" categorial grammars, suffers from spurious ambiguity (Wittenburg, 1986). The non-standard constituents that are so crucial to CCG's analyses in (1), and in its account of into- national focus (Prevost ~ Steedman, 1994), remain available even in simpler sentences. This renders (2) syntactically ambiguous. (1) a. Coordination: [[John likes]s/NP, and [Mary pretends to like]s/NP], the big galoot in the corner. b. Extraction: Everybody at this party [whom [John likes]s/NP] is a big galoot. (2) a. John [likes Mary]s\NP. b. [.John likes]s/N P Mary. The practical problem of "extra" parses in (2) be- comes exponentially worse for longer strings, which can have up to a Catalan number of parses. An *This material is based upon work supported under a National Science Foundation Graduate Fellowship. I have been grateful for the advice of Aravind Joshi, Nobo Komagata, Seth Kulick, Michael Niv, Mark Steedman, and three anonymous reviewers. exhaustive parser serves up 252 CCG parses of (3), which must be sifted through, at considerable cost, in order to identify the two distinct meanings for further processing. 1 (3) the galoot in the corner NP]N N (N\N)/NP NP]N N that I said Mary (N\N)](S/NP) S/(S\NP) (S\NP)]$ S](S\NP) pretends to (S\NP)](Sin f \NP) (Sin f \NP)/(Sstem \NP) like (Sstem\NP)/NP This paper presents a simple and flexible CCG parsing technique that prevents any such explosion of redundant CCG derivations. In particular, it is proved in §4.2 that the method constructs exactly one syntactic structure per semantic reading--e.g., just two parses for (3). All other parses are sup- pressed by simple normal-form constraints that are enforced throughout the parsing process. This ap- proach works because CCG's spurious ambiguities arise (as is shown) in only a small set of circum- stances. Although similar work has been attempted in the past, with varying degrees of success (Kart- tunen, 1986; Wittenburg, 1986; Pareschi & Steed- man, 1987; Bouma, 1989; Hepple & Morrill, 1989; KSnig, 1989; Vijay-Shanker & Weir, 1990; Hepple, 1990; Moortgat, 1990; ttendriks, 1993; Niv, 1994), this appears to be the first full normal-form result for a categorial formalism having more than context- free power. 2 Definitions and Related Work CCG may be regarded as a generalization of context- free grammar (CFG)--one where a grammar has infinitely many nonterminals and phrase-structure rules. In addition to the familiar atomic nonter- minal categories (typically S for sentences, N for 1Namely, Mary pretends to like the galoot in 168 parses and the corner in 84. 
One might try a statis- tical approach to ambiguity resolution, discarding the low-probability parses, but it is unclear how to model and train any probabilities when no single parse can be taken as the standard of correctness. 79 nouns, NP for noun phrases, etc.), CCG allows in- finitely many slashed categories. If z and y are categories, then x/y (respectively z\y) is the cat- egory of an incomplete x that is missing a y at its right (respectively left). Thus verb phrases are an- alyzed as subjectless sentences S\NP, while "John likes" is an objectless sentence or S/NP. A complex category like ((S\NP) \ (S\NP))/N may be written as S\NP\(S\NP)/N, under a convention that slashes are left-associative. The results herein apply to the TAG-equivalent CCG formalization given in (Joshi et M., 1991). 2 In this variety of CCG, every (non-lexical) phrase- structure rule is an instance of one of the following binary-rule templates (where n > 0): (4) Forward generalized composition >Bn: ;~/y y[nzn''" [2Z211Zl -'+ ;~[nZn''" ]2Z211Zl Backward generalized composition <Bn: yl.z...- 12z2 Ilzl x\y x I.z.... 12z llzl Instances with n -- 0 are called application rules, and instances with n > 1 are called composition rules. In a given rule, x, y, zl... z~ would be instantiated as categories like NP, S/I~P, or S\NP\(S\NP)/N. Each of ]1 through In would be instantiated as either / or \. A fixed CCG grammar need not include every phrase-structure rule matching these templates. In- deed, (Joshi et al., 1991) place certain restrictions on the rule set of a CCG grammar, including a re- quirement that the rule degree n is bounded over the set. The results of the present paper apply to such restricted grammars and also more generally, to any CCG-style grammar with a decidable rule set. Even as restricted by (Joshi et al., 1991), CCGs have the "mildly context-sensitive" expressive power of Tree Adjoining Grammars (TAGs). Most work on spurious ambiguity has focused on categorial for- malisms with substantially less power. (Hepple, 1990) and (Hendriks, 1993), the most rigorous pieces of work, each establish a normal form for the syn- tactic calculus of (Lambek, 1958), which is weakly context-free. (Kbnig, 1989; Moortgat, 1990) have also studied the Lambek calculus case. (Hepple & Morrill, 1989), who introduced the idea of normal- form parsing, consider only a small CCG frag- ment that lacks backward or order-changing com- position; (Niv, 1994) extends this result but does not show completeness. (Wittenburg, 1987) assumes a CCG fragment lacking order-changing or higher- order composition; furthermore, his revision of the combinators creates new, conjoinable constituents that conventional CCG rejects. (Bouma, 1989) pro- poses to replace composition with a new combina- tor, but the resulting product-grammar scheme as- 2This formalization sweeps any type-raising into the lexicon, as has been proposed on linguistic grounds (Dowty, 1988; Steedman, 1991, and others). It also treats conjunction lexically, by giving "and" the gener- alized category x\x/x and barring it from composition. 80 signs different types to "John likes" and "Mary pre- tends to like," thus losing the ability to conjoin such constituents or subcategorize for them as a class. (Pareschi & Steedman, 1987) do tackle the CCG case, but (Hepple, 1987) shows their algorithm to be incomplete. 3 Overview of the Parsing Strategy As is well known, general CFG parsing methods can be applied directly to CCG. 
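Before turning to the parsing strategy, the rule templates in (4) can be made operational. The sketch below represents an atomic category as a string and a slashed category as a nested (result, slash, argument) triple; it illustrates the >Bn template only and is not an implementation from the paper. The backward template <Bn is the mirror image, with the primary functor x\y on the right.

    def make(result, slash, arg):
        return (result, slash, arg)

    def forward_compose(primary, secondary, n):
        # >Bn: combine x/y with y |n zn ... |2 z2 |1 z1 to give
        # x |n zn ... |2 z2 |1 z1.  With n = 0 this is forward application.
        # Returns None when the template does not apply.
        if not (isinstance(primary, tuple) and primary[1] == "/"):
            return None
        x, _, y = primary
        # Peel the n outermost arguments (z1 ... zn, with their slashes)
        # off the secondary category to expose its result y.
        peeled, cat = [], secondary
        for _ in range(n):
            if not isinstance(cat, tuple):
                return None
            result, slash, arg = cat
            peeled.append((slash, arg))
            cat = result
        if cat != y:
            return None
        # Reattach the peeled arguments to x, innermost (zn) first.
        out = x
        for slash, arg in reversed(peeled):
            out = make(out, slash, arg)
        return out

    # S/NP composed with NP/N by >B1 yields S/N; applied to NP by >B0 yields S.
    assert forward_compose(make("S", "/", "NP"), make("NP", "/", "N"), 1) == make("S", "/", "N")
    assert forward_compose(make("S", "/", "NP"), "NP", 0) == "S"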
Any sort of chart parser or non-deterministic shift-reduce parser will do. Such a parser repeatedly decides whether two adjacent constituents, such as S/NP and I~P/N, should be combined into a larger constituent such as S/N. The role of the grammar is to state which combi- nations are allowed. The key to efficiency, we will see, is for the parser to be less permissive than the grammar--for it to say "no, redundant" in some cases where the grammar says "yes, grammatical." (5) shows the constituents that untrammeled CCG will find in the course of parsing "John likes Mary." The spurious ambiguity problem is not that the grammar allows (5c), but that the grammar al- lows both (5f) and (5g)--distinct parses of the same string, with the same meaning. (5) a. [John]s/(s\sp) b. [likes](S\NP)/Np C. [John likes]s/N P d. [Mary]N P e. [likes Mary]s\N P f. [[John likes] Mary]s ~ to be disallowed g, [John [likes Mary]Is The proposal is to construct all constituents shown in (5) except for (5f). If we slightly con- strain the use of the grammar rules, the parser will still produce (5c) and (5d)--constituents that are indispensable in contexts like (1)--while refusing to combine those constituents into (5f). The relevant rule S/I~P NP --* S will actually be blocked when it attempts to construct (5f). Although rule-blocking may eliminate an analysis of the sentence, as it does here, a semantically equivalent analysis such as (5g) will always be derivable along some other route. In general, our goal is to discover exactly one anal- ysis for each <substring, meaning> pair. By prac- ticing "birth control" for each bottom-up generation of constituents in this way, we avoid a population explosion of parsing options. "John likes Mary" has only one reading semantically, so just one of its anal- yses (5f)-(5g) is discovered while parsing (6). Only that analysis, and not the other, is allowed to con- tinue on and be built into the final parse of (6). (6) that galoot in the corner that thinks [John likes Mary]s For a chart parser, where each chart cell stores the analyses of some substring, this strategy says that all analyses in a cell are to be semantically distinct. (Karttunen, 1986) suggests enforcing that property directly--by comparing each new analysis semanti- cally with existing analyses in the cell, and refus- ing to add it if redundant--but (Hepple & Morrill, 1989) observe briefly that this is inefficient for large charts. 3 The following sections show how to obtain effectively the same result without doing any seman- tic interpretation or comparison at all. 4 A Normal Form for "Pure" CCG It is convenient to begin with a special case. Sup- pose the CCG grammar includes not some but all instances of the binary rule templates in (4). (As always, a separate lexicon specifies the possible cat- egories of each word.) If we group a sentence's parses into semantic equivalence classes, it always turns out that exactly one parse in each class satisfies the fol- lowing simple declarative constraints: (7) a. No constituent produced by >Bn, any n ~ 1, ever serves as the primary (left) argument to >Bn', any n' > 0. b. No constituent produced by <Bn, any n > 1, ever serves as the primary (right) argument to <Bn', any n' > 0. The notation here is from (4). More colloquially, (7) says that the output of rightward (leftward) com- position may not compose or apply over anything to its right (left). A parse tree or subtree that satisfies (7) is said to be in normal form (NF). 
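Enforcing (7) in a parser requires only that each constituent remember whether it was built by forward composition, by backward composition, or otherwise. The check below is a minimal sketch of that bookkeeping; the labels anticipate the -FC, -BC, and -OT tags introduced in (9) below.

    def provenance(direction, degree):
        # How a constituent was built: ">" or "<" for the rule direction,
        # degree n for >Bn or <Bn; lexical items count as "OTHER".
        if degree >= 1:
            return "FWD_COMP" if direction == ">" else "BWD_COMP"
        return "OTHER"

    def normal_form_ok(direction, left_provenance, right_provenance):
        # (7a): the output of forward composition may not serve as the
        #       primary (left) input of any forward rule.
        # (7b): the output of backward composition may not serve as the
        #       primary (right) input of any backward rule.
        if direction == ">" and left_provenance == "FWD_COMP":
            return False
        if direction == "<" and right_provenance == "BWD_COMP":
            return False
        return True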
As an example, consider the effect of these restric- tions on the simple sentence "John likes Mary." Ig- noring the tags -OT, -FC, and -Be for the moment, (8a) is a normal-form parse. Its competitor (85) is not, nor is any larger tree containing (8b). But non- 3How inefficient? (i) has exponentially many seman- tically distinct parses: n = 10 yields 82,756,612 parses (2°) -- 48,620 equivalence classes. Karttunen's in 10 method must therefore add 48,620 representative parses to the appropriate chart cell, first comparing each one against all the previously added parses--of which there are 48,620/2 on average--to ensure it is not semantically redundant. (Additional comparisons are needed to reject parses other than the lucky 48,620.) Adding a parse can therefore take exponential time. n (i) ... S/S S/S S/S S S\S S\S S\S ... Structure sharing does not appear to help: parses that are grouped in a parse forest have only their syntactic category in common, not their meaning. Karttunen's ap- proach must tease such parses apart and compare their various meanings individually against each new candi- date. By contrast, the method proposed below is purely syntactic--just like any "ordinary" parser--so it never needs to unpack a subforest, and can run in polynomial time. standard constituents are allowed when necessary: (8c) is in normal form (cf. (1)). (8) a. S-OT S / ( S ~ ~ I P O T John (S\NP)/NP-OT ~P-OT I I likes Mary b. .forward application blocked by (Ta) (eq,,i.alently, nofi~X..~itted b~ (10a ) ) S/NP-FC I~P-OT [ Mary S/(S"\NP)-OT (S\NP)/IIP-OT I I John likes 81 c. N\N-OT (N\N) / (S/NP)-OT S/NP-FC I s ~ p whom S/( )/NP-OT I l John likes It is not hard to see that (7a) eliminates all but right-branching parses of "forward chains" like A/B B/C C or A/B/C C/D D/E/F/G G/H, and that (Tb) eliminates all but left-branching parses of "backward chains." (Thus every functor will get its arguments, if possible, before it becomes an argument itself.) But it is hardly obvious that (7) eliminates all of CCG's spurious ambiguity. One might worry about unexpected interactions involving crossing compo- sition rules like A/B B\C--~ A\C. Significantly, it turns out that (7) really does suffice; the proof is in §4.2. It is trivial to modify any sort of CCG parser to find only the normal-form parses. No seman- tics is necessary; simply block any rule use that would violate (7). In general, detecting violations will not hurt performance by more than a constant factor. Indeed, one might implement (7) by modi- fying CCG's phrase-structure grammar. Each ordi- nary CCG category is split into three categories that bear the respective tags from (9). The 24 templates schematized in (10) replace the two templates of (4). Any CFG-style method can still parse the resulting spuriosity-free grammar, with tagged parses as in (8). In particular, the polynomial-time, polynomial- space CCG chart parser of (Vijay-Shanker & Weir, 1993) can be trivially adapted to respect the con- straints by tagging chart entries. (9) -FC output of >Bn, some n > 1 (a forward composition rule) -BC output of <Bn, some n > 1 (a backward composition rule) -OT output of >B0 or <B0 (an application rule), or lexical item (10) a. Forward application >BO: ~ x/y-OT y-Be t -'+ x--OT y-OT ) b. Backward application <B0: y-Be ~ x\y-OT j" ~ x-OT 9-O'1" ) y l,,z,, l~z~ llz1-BC ---, x l,z,~..- ]2z2 llz1-FC c. Fwd. composition >Bn (n > 1): x/y-OT Y Inz,~ 12z2 IlZl-OT d. Bwd. 
composition <Bn (n >_ 1): Y I,~z~ 12z2 Ilzl-BC ---, x Inz,''" I~.z2 Ilzl--BC y I,z, I~.z2 IlZl-OT x\y-OT (ii) a. Syn/sem for >Bn (n _> 0): =/y y • I.z.... f g ~Cl~C2...~Cn.f(g(Cl)(C2)'"(Cn)) b. Syn/sem for <B, (, > 0): y I.z.--- 12z2 --* x I.z.... 12z2 [lZX g f )~Cl~C2...ACn.f(g(Cl)(C2)''" (Cn)) (12) a. A/C/F AIClD D/F AIB BICID DIE ElF b. AIClF A/C/E E/F A/C/D D/E A/B B/C/D C. ~y.l(g(h(k(~)))(y)) A/c/F A/B B/C/D f g h k It is interesting to note a rough resemblance be- tween the tagged version of CCG in (10) and the tagged Lambek cMculus L*, which (Hendriks, 1993) developed to eliminate spurious ambiguity from the Lambek calculus L. Although differences between CCG and L mean that the details are quite different, each system works by marking the output of certain rules, to prevent such output from serving as input to certain other rules. 4.1 Semantic equivalence We wish to establish that each semantic equivalence class contains exactly one NF parse. But what does "semantically equivalent" mean? Let us adopt a standard model-theoretic view. For each leaf (i.e., lexeme) of a given syntax tree, the lexicon specifies a lexical interpretation from the model. CCG then provides a derived interpretation in the model for the complete tree. The standard CCG theory builds the semantics compositionally, guided by the syntax, according to (11). We may therefore regard a syntax tree as a static "recipe" for combining word meanings into a phrase meaning. 82 One might choose to say that two parses are se- mantically equivalent iff they derive the same phrase meaning. However, such a definition would make spurious ambiguity sensitive to the fine-grained se- mantics of the lexicon. Are the two analyses of VP/VP VP VP\VP semantically equivalent? If the lexemes involved are "softly knock twice," then yes, as softly(twice(knock)) and twice(softly(knock)) ar- guably denote a common function in the semantic model. Yet for "intentionally knock twice" this is not the case: these adverbs do not commute, and the semantics are distinct. It would be difficult to make such subtle distinc- tions rapidly. Let us instead use a narrower, "inten- sional" definition of spurious ambiguity. The trees in (12a-b) will be considered equivalent because they specify the same "recipe," shown in (12c). No mat- ter what lexical interpretations f, g, h, k are fed into the leaves A/B, B/C/D, D/E, E/F, both the trees end up with the same derived interpretation, namely a model element that can be determined from f, g, h, k by calculating Ax~y.f(g(h(k(x)))(y)). By contrast, the two readings of "softly knock twice" are considered to be distinct, since the parses specify different recipes. That is, given a suitably free choice of meanings for the words, the two parses can be made to pick out two different VP-type func- tions in the model. The parser is therefore conser- vative and keeps both parses. 4 4.2 Normal-form parsing is safe & complete The motivation for producing only NF parses (as defined by (7)) lies in the following existence and uniqueness theorems for CCG. Theorem 1 Assuming "pure CCG," where all pos- sible rules are in the grammar, any parse tree ~ is se- mantically equivalent to some NF parse tree NF(~). (This says the NF parser is safe for pure CCG: we will not lose any readings by generating just normal forms.) Theorem 2 Given distinct NF trees a # o/ (on the same sequence of leaves). Then a and a t are not semantically equivalent. 
(This says that the NF parser is complete: generat- ing only normal forms eliminates all spurious ambi- guity.) Detailed proofs of these theorems are available on the cmp-lg archive, but can only be sketched here. Theorem 1 is proved by a constructive induction on the order of a, given below and illustrated in (13): • For c~ a leaf, put NF(c~) = a. • (<R, ~, 3'> denotes the parse tree formed by com- bining subtrees/~, 7 via rule R.) If ~ = <R, fl, 7>, then take NF(c~) = <R, gF(fl), NF(7)> , which exists by inductive hypothesis, unless this is not an NF tree. In the latter case, WLOG, R is a forward rule and NF(fl) = <Q,~l,flA> for some forward com- position rule Q. Pure CCG turns out to pro- vide forward rules S and T such that a~ = <S, ill, NF(<T, ~2, 7>)> is a constituent and is semantically equivalent to c~. Moreover, since fll serves as the primary subtree of the NF tree NF(fl),/31 cannot be the output of forward com- position, and is NF besides. Therefore a~ is NF: take NF(o 0 = o/. (13) If NF(/3) not output of fwd. composition, R R --* --* def = A = NF( ) 7 iF(Z) NF(7) R R _..+ --# else ~ : ~ ::~ 7 NF~"7 t(Hepple 8z Morrill, 1989; Hepple, 1990; Hendriks, 1993) appear to share this view of semantic equivalence. Unlike (Karttunen, 1986), they try to eliminate only parses whose denotations (or at least A-terms) are sys- tematically equivalent, not parses that happen to have the same denotation through an accident of the lexicon. 83 R S ::~ ~ def =Q-~7~1~1~'/72 NF (/72~7) = NF(~) This construction resembles a well-known normal- form reduction procedure that (Hepple & Morrill, 1989) propose (without proving completeness) for a small fragment of CCG. The proof of theorem 2 (completeness) is longer and more subtle. First it shows, by a simple induc- tion, that since c~ and ~' disagree they must disagree in at least one of these ways: (a) There are trees/?, 3' and rules R # R' such that <R, fl, 7> is a subtree of a and <R',/3, 7> is a subtree of a'. (For example, S/S S\S may form a constituent by either <Blx or >Blx.) (b) There is a tree 7 that appears as a subtree of both c~ and cd, but combines to the left in one case and to the right in the other. Either condition, the proof shows, leads to different "immediate scope" relations in the full trees ~ and ~' (in the sense in which f takes immediate scope over 9 in f(g(x)) but not in f(h(g(x))) or g(f(z))). Con- dition (a) is straightforward. Condition (b) splits into a case where 7 serves as a secondary argument inside both cr and a', and a case where it is a primary argument in c~ or a'. The latter case requires consid- eration of 7's ancestors; the NF properties crucially rule out counterexamples here. The notion of scope is relevant because semantic interpretations for CCG constituents can be written as restricted lambda terms, in such a way that con- stituents having distinct terms must have different interpretations in the model (for suitable interpreta- tions of the words, as in §4.1). Theorem 2 is proved by showing that the terms for a and a' differ some- where, so correspond to different semantic recipes. Similar theorems for the Lambek calculus were previously shown by (Hepple, 1990; ttendriks, 1993). The present proofs for CCG establish a result that has long been suspected: the spurious ambiguity problem is not actually very widespread in CCG. Theorem 2 says all cases of spurious ambiguity can be eliminated through the construction given in theorem 1. 
But that construction merely en- sures a right-branching structure for "forward con- stituent chains" (such as h/B B/C C or h/B/C C/D D/E/F/G G/H), and a left-branching structure for backward constituent chains. So these familiar chains are the only source of spurious ambiguity in CCG. 5 Extending the Approach to "Restricted" CCG The "pure" CCG of §4 is a fiction. Real CCG gram- mars can and do choose a subset of the possible rules. For instance, to rule out (14), the (crossing) back- ward rule N/N ~I\N ---* I~/N must be omitted from English grammar. (14) [theNP/N [[bigN/N [that likes John]N\N ]N/N galootN ]N]NP If some rules are removed from a "pure" CCG grammar, some parses will become unavailable. Theorem 2 remains true (< 1 NF per reading). Whether theorem 1 (>_ 1 NF per reading) remains true depends on what set of rules is removed. For most linguistically reasonable choices, the proof of theorem 1 will go through, 5 so that the normal-form parser of §4 remains safe. But imagine removing only the rule B/a C --~ B: this leaves the string A/B B/C C with a left-branching parse that has no (legal) NF equivalent. In the sort of restricted grammar where theorem 1 does not obtain, can we still find one (possibly non- NF) parse per equivalence class? Yes: a different kind of efficient parser can be built for this case. Since the new parser must be able to generate a non-NF parse when no equivalent NF parse is avail- able, its method of controlling spurious ambiguity cannot be to enforce the constraints (7). The old parser refused to build non-NF constituents; the new parser will refuse to build constituents that are se- mantically equivalent to already-built constituents. This idea originates with (Karttunen, 1986). However, we can take advantage of the core result of this paper, theorems 1 and 2, to do Karttunen's redundancy check in O(1) time--no worse than the normal-form parser's check for -FC and -Be tags. (Karttunen's version takes worst-case exponential time for each redundancy check: see footnote §3.) The insight is that theorems 1 and 2 estab- lish a one-to-one map between semantic equivalence classes and normal forms of the pure (unrestricted) CCG: (15) Two parses a, ~' of the pure CCG are semantically equivalent iff they have the same normal form: gF(a) = gF(a'). The NF function is defined recursively by §4.2's proof of theorem 1; semantic equivalence is also defined independently of the grammar. So (15) is meaningful and true even if a, a' are produced by a restricted CCG. The tree NF(a) may not be a legal parse under the restricted grammar. How- ever, it is still a perfectly good data structure that can be maintained outside the parse chart, to serve 5For the proof to work, the rules S and T must be available in the restricted grammar, given that R and Q are. This is usually true: since (7) favors standard con- stituents and prefers application to composition, most grammars will not block the NF derivation while allow- ing a non-NF one. (On the other hand, the NF parse of A/B B/C C/D/E uses >B2 twice, while the non-NF parse gets by with >B2 and >B1.) as a magnet for a's semantic class. The proof of theorem 1 (see (13)) actually shows how to con- struct NF(a) in O(1) time from the values of NF on smaller constituents. Hence, an appropriate parser can compute and cache the NF of each parse in O(1) time as it is added to the chart. It can detect redun- dant parses by noting (via an O(1) array lookup) that their NFs have been previously computed. 
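Before the full pseudocode in Figure 1, the constant-time bookkeeping can be sketched as follows. The keying by rule and child sequence numbers anticipates the oldNFs array of Figure 1; the class itself is only an illustration.

    class NFCache:
        # Assigns a sequence number to each distinct normal form the first
        # time it is seen.  A constituent's normal form is identified by the
        # rule at its root plus the sequence numbers of its two (already
        # numbered) normal-form children, so each lookup is O(1).
        def __init__(self):
            self.seqno_of = {}   # (rule, left_seqno, right_seqno) -> seqno
            self.counter = 0

        def lookup_or_add(self, rule, left_seqno, right_seqno):
            # Returns (seqno, is_new); is_new is False when some earlier
            # constituent in the chart already had this normal form, i.e.
            # the new parse is semantically redundant.
            key = (rule, left_seqno, right_seqno)
            if key in self.seqno_of:
                return self.seqno_of[key], False
            self.counter += 1
            self.seqno_of[key] = self.counter
            return self.counter, True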
Figure 1 gives an efficient CKY-style algorithm based on this insight. (Parsing strategies besides CKY would also work, in particular (Vijay-Shanker & Weir, 1993).) The management of cached NFs in steps 9, 12, and especially 16 ensures that duplicate NFs never enter the oldNFs array: thus any alternative copy of α.nf has the same array coordinates used for α.nf itself, because it was built from identical subtrees.

The function PreferableTo (step 15) provides flexibility about which parse represents its class. PreferableTo may be defined at whim to choose the parse discovered first, the more left-branching parse, or the parse with fewer non-standard constituents. Alternatively, PreferableTo may call an intonation or discourse module to pick the parse that better reflects the topic-focus division of the sentence. (A variant algorithm ignores PreferableTo and constructs one parse forest per reading. Each forest can later be unpacked into individual equivalent parse trees, if desired.)

(Vijay-Shanker & Weir, 1990) also give a method for removing "one well-known source" of spurious ambiguity from restricted CCGs; §4.2 above shows that this is in fact the only source. However, their method relies on the grammaticality of certain intermediate forms, and so can fail if the CCG rules can be arbitrarily restricted. In addition, their method is less efficient than the present one: it considers parses in pairs, not singly, and does not remove any parse until the entire parse forest has been built.

6 Extensions to the CCG Formalism

In addition to the Bn ("generalized composition") rules given in §2, which give CCG power equivalent to TAG, rules based on the S ("substitution") and T ("type-raising") combinators can be linguistically useful. S provides another rule template, used in the analysis of parasitic gaps (Steedman, 1987; Szabolcsi, 1989):

(16) a. >S: x/y|z  y|z  →  x|z
     b. <S: y|z  x\y|z  →  x|z

Although S interacts with Bn to produce another source of spurious ambiguity, illustrated in (17), the additional ambiguity is not hard to remove. It can be shown that when the restriction (18) is used together with (7), the system again finds exactly one

1.  for i := 1 to n
2.    C[i − 1, i] := LexCats(word[i])  (* word i stretches from point i − 1 to point i *)
3.  for width := 2 to n
4.    for start := 0 to n − width
5.      end := start + width
6.      for mid := start + 1 to end − 1
7.        for each parse tree α = <R, β, γ> that could be formed by combining some β ∈ C[start, mid] with some γ ∈ C[mid, end] by a rule R of the (restricted) grammar
8.          α.nf := NF(α)  (* can be computed in constant time using the .nf fields of β, γ, and other constituents already in C. Subtrees are also NF trees. *)
9.          existingNF := oldNFs[α.nf.rule, α.nf.leftchild.seqno, α.nf.rightchild.seqno]
10.         if undefined(existingNF)  (* the first parse with this NF *)
11.           α.nf.seqno := (counter := counter + 1)  (* number the new NF & add it to oldNFs *)
12.           oldNFs[α.nf.rule, α.nf.leftchild.seqno, α.nf.rightchild.seqno] := α.nf
13.           add α to C[start, end]
14.           α.nf.currparse := α
15.         elsif PreferableTo(α, existingNF.currparse)  (* replace reigning parse? *)
16.           α.nf := existingNF  (* use cached copy of NF, not new one *)
17.           remove α.nf.currparse from C[start, end]
18.           add α to C[start, end]
19.           α.nf.currparse := α
20. return (all parses from C[0, n] having root category S)

Figure 1: Canonicalizing CCG parser that handles arbitrary restrictions on the rule set.
(In practice, a simpler normal-form parser will suffice for most grammars.) parse from every equivalence class. (17) a. VPo/NP (<Bx) VPI/NP (<Sx) VP2~P2/NP yesterday filed [without-reading] b. VPo/NP (<Sx) VP2/NP VP0\VP2/NP (<B2) VPI\V~VPI (18) a. No constituent produced by >Bn, any n _> 2, ever serves as the primary (left) argument to >S. b. No constituent produced by <Bn, any n > 2, ever serves as the primary (right) argument to <S. Type-raising presents a greater problem. Vari- ous new spurious ambiguities arise if it is permit- ted freely in the grammar. In principle one could proceed without grammatical type-raising: (Dowty, 1988; Steedman, 1991) have argued on linguistic grounds that type-raising should be treated as a mere lexical redundancy property. That is, when- ever the lexicon contains an entry of a certain cate- 85 gory X, with semantics x, it also contains one with (say) category T/(T\X) and interpretation Ap.p(z). As one might expect, this move only sweeps the problem under the rug. If type-raising is lexical, then the definitions of this paper do not recognize (19) as a spurious ambiguity, because the two parses are now, technically speaking, analyses of different sentences. Nor do they recognize the redundancy in (20), because--just as for the example "softly knock twice" in §4.1--it is contingent on a kind of lexical coincidence, namely that a type-raised subject com- mutes with a (generically) type-raised object. Such ambiguities are left to future work. (19) [JohnNp lefts\NP]S vs. [Johns/(S\NP) lefts\NP]S (20) [S/(S\NPs) [S\NPs/NPo/NP I T\(T/NPo)]]S/SI VS. [S/(S\NPs) S\NPs/NPo/NPI] T\(T/NPO)]S/S I 7 Conclusions The main contribution of this work has been formal: to establish a normal form for parses of "pure" Com- binatory Categorial Grammar. Given a sentence, every reading that is available to the grammar has exactly one normal-form parse, no matter how many parses it has in toto. A result worth remembering is that, although TAG-equivalent CCG allows free interaction among forward, backward, and crossed composition rules of any degree, two simple constraints serve to eliminate all spurious ambiguity. It turns out that all spuri- ous ambiguity arises from associative "chains" such as A/B B/C C or A/B/C C/D D/E\F/G G/H. (Wit- tenburg, 1987; Hepple & Morrill, 1989) anticipate this result, at least for some fragments of CCG, but leave the proof to future work. These normal-form results for pure CCG lead di- rectly to useful parsers for real, restricted CCG grammars. Two parsing algorithms have been pre- sented for practical use. One algorithm finds only normal forms; this simply and safely eliminates spu- rious ambiguity under most real CCG grammars. The other, more complex algorithm solves the spu- rious ambiguity problem for any CCG grammar, by using normal forms as an efficient tool for grouping semantically equivalent parses. Both algorithms are safe, complete, and efficient. In closing, it should be repeated that the results provided are for the TAG-equivalent Bn (general- ized composition) formalism of (Joshi et al., 1991), optionally extended with the S (substitution) rules of (Szabolcsi, 1989). The technique eliminates all spurious ambiguities resulting from the interaction of these rules. Future work should continue by eliminating the spurious ambiguities that arise from grammatical or lexical type-raising. References Gosse Bouma. 1989. Efficient processing of flexible categorial grammar. 
In Proceedings of the Fourth Conference of the European Chapter of the Associ- ation for Computational Linguistics, 19-26, Uni- versity of Manchester, April. David Dowty. 1988. Type raising, functional com- position, and non-constituent conjunction. In R. Oehrle, E. Bach and D. Wheeler, editors, Catego- rial Grammars and Natural Language Structures. Reidel. Mark Hepple. 1987. Methods for parsing combina- tory categorial grammar and the spurious ambi- guity problem. Unpublished M.Sc. thesis, Centre for Cognitive Science, University of Edinburgh. Mark Hepple. 1990. The Grammar and Process- ing of Order and Dependency: A Categorial Ap- proach. Ph.D. thesis, University of Edinburgh. Mark Hepple and Glyn Morrill. 1989. Parsing and derivational equivalence. In Proceedings of the Fourth Conference of the European Chapter of the Association for Computational Linguistics, 10-18, University of Manchester, April. Herman Hendriks. 1993. Studied Flexibility: Cate- gories and Types in Syntax and Semantics. Ph.D. thesis, Institute for Logic, Language, and Compu- tation, University of Amsterdam. Aravind Joshi, K. Vijay-Shanker, and David Weir. 1991. The convergence of mildly context-sensitive grammar formalisms. In Foundational Issues in Natural Language Processing, MIT Press. Lauri Karttunen. 1986. Radical lexicalism. Report No. CSLI-86-68, CSLI, Stanford University. E. KSnig. 1989. Parsing as natural deduction. In Proceedings of the 27lh Annual Meeting of the As- sociation for Computational Linguistics, Vancou- ver. J. Lambek. 1958. The mathematics of sen- tence structure. American Mathematical Monthly 65:154-169. Michael Moortgat. 1990. Unambiguous proof repre- sentations for the Lambek Calculus. In Proceed- ings of the Seventh Amsterdam Colloquium. Michael Niv. 1994. A psycholinguistically moti- vated parser for CCG. In Proceedings of the 32nd Annual Meeting of the Association for Computa- tional Linguistics, Las Cruces, NM, June. Remo Paresehi and Mark Steedman. A lazy way to chart parse with eombinatory grammars. In Pro- ceedings of the P5th Annual Meeting of the As- sociation for Computational Linguistics, Stanford University, July. Scott Prevost and Mark Steedman. 1994. Specify- ing intonation from context for speech synthesis. Speech Communication, 15:139-153. Mark Steedman. 1990. Gapping as constituent coor- dination. Linguistics and Philosophy, 13:207-264. Mark Steedman. 1991. Structure and intonation. Language, 67:260-296. Mark Steedman. 1987. Combinatory grammars and parasitic gaps. Natural Language and Linguistic Theory, 5:403-439. Anna Szabolcsi. 1989. Bound variables in syntax: Are there any? In R. Bartsch, J. van Benthem, and P. van Emde Boas (eds.), Semantics and Con- textual Expression, 295-318. Forts, Dordrecht. K. Vijay-Shanker and David Weir. 1990. Polyno- mial time parsing of combinatory ¢ategorial gram- mars. In Proceedings of the P8th Annual Meeting of the Association for Computational Linguistics. K. Vijay-Shanker and David Weir. 1993. Parsing some constrained grammar formalisms. Compu- tational Linguistics, 19(4):591-636. K. Vijay-Shanker and David Weir. 1994. The equiv- alence of four extensions of context-free gram- mars. Mathematical Systems Theory, 27:511-546. Kent Wittenburg. 1986. Natural Language Pars- ing with Combinatory Calegorial Grammar in a Graph-Unification-Based Formalism. Ph.D. the- sis, University of Texas. Kent Wittenburg. 1987. Predictive combinators: A method for efficient parsing of Combinatory Categorial Grammars. 
In Proceedings of the 25th Annual Meeting of the Association for Computa- tional Linguistics, Stanford University, July. 86 | 1996 | 11 |
Another Facet of LIG Parsing Pierre Boullier INRIA-Rocquencourt BP 105 78153 Le Chesnay Cedex, France Pierre. Boullier@inria. fr Abstract In this paper 1 we present a new pars- ing algorithm for linear indexed grammars (LIGs) in the same spirit as the one de- scribed in (Vijay-Shanker and Weir, 1993) for tree adjoining grammars. For a LIG L and an input string x of length n, we build a non ambiguous context-free grammar whose sentences are all (and exclusively) valid derivation sequences in L which lead to x. We show that this grammar can be built in (9(n 6) time and that individ- ual parses can be extracted in linear time with the size of the extracted parse tree. Though this O(n 6) upper bound does not improve over previous results, the average case behaves much better. Moreover, prac- tical parsing times can be decreased by some statically performed computations. 1 Introduction The class of mildly context-sensitive languages can be described by several equivalent grammar types. Among these types we can notably cite tree adjoin- ing grammars (TAGs) and linear indexed grammars (LIGs). In (Vijay-Shanker and Weir, 1994) TAGs are transformed into equivalent LIGs. Though context-sensitive linguistic phenomena seem to be more naturally expressed in TAG formalism, from a computational point of view, many authors think that LIGs play a central role and therefore the un- derstanding of LIGs and LIG parsing is of impor- tance. For example, quoted from (Schabes and Shieber, 1994) "The LIG version of TAG can be used for recognition and parsing. Because the LIG for- malism is based on augmented rewriting, the pars- ing algorithms can be much simpler to understand 1See (Boullier, 1996) for an extended version. 87 and easier to modify, and no loss of generality is in- curred". In (Vijay-Shanker and Weir, 1993) LIGs are used to express the derivations of a sentence in TAGs. In (Vijay-Shanker, Weir and Rainbow, 1995) the approach used for parsing a new formalism, the D-Tree Grammars (DTG), is to translate a DTG into a Linear Prioritized Multiset Grammar which is similar to a LIG but uses multisets in place of stacks. LIGs can be seen as usual context-free grammars (CFGs) upon which constraints are imposed. These constraints are expressed by stacks of symbols as- sociated with non-terminals. We study parsing of LIGs, our goal being to define a structure that ver- ifies the LIG constraints and codes all (and exclu- sively) parse trees deriving sentences. Since derivations in LIGs are constrained CF derivations, we can think of a scheme where the CF derivations for a given input are expressed by a shared forest from which individual parse trees which do not satisfied the LIG constraints are erased. Unhappily this view is too simplistic, since the erasing of individual trees whose parts can be shared with other valid trees can only be performed after some unfolding (unsharing) that can produced a forest whose size is exponential or even unbounded. In (Vijay-Shanker and Weir, 1993), the context- freeness of adjunction in TAGs is captured by giving a CFG to represent the set of all possible derivation sequences. In this paper we study a new parsing scheme for LIGs based upon similar principles and which, on the other side, emphasizes as (Lang, 1991) and (Lang, 1994), the use of grammars (shared for- est) to represent parse trees and is an extension of our previous work (Boullier, 1995). This previous paper describes a recognition algo- rithm for LIGs, but not a parser. 
For a LIG and an input string, all valid parse trees are actually coded into the CF shared parse forest used by this recog- nizer, but, on some parse trees of this forest, the checking of the LIG constraints can possibly failed. At first sight, there are two conceivable ways to ex- tend this recognizer into a parser: 1. only "good" trees are kept; 2. the LIG constraints are Ire-]checked while the extraction of valid trees is performed. As explained above, the first solution can produce an unbounded number of trees. The second solution is also uncomfortable since it necessitates the reeval- uation on each tree of the LIG conditions and, doing so, we move away from the usual idea that individ- ual parse trees can be extracted by a simple walk through a structure. In this paper, we advocate a third way which will use (see section 4), the same basic material as the one used in (Boullier, 1995). For a given LIG L and an input string x, we exhibit a non ambiguous CFG whose sentences are all possible valid derivation se- quences in L which lead to x. We show that this CFG can be constructed in (.9(n 6) time and that in- dividual parses can be extracted in time linear with the size of the extracted tree. 2 Derivation Grammar and CF Parse Forest In a CFG G = (VN, VT, P, S), the derives relation is the set {(aBa',aj3a') I B --~ j3 e P A V = G VN U VT A a, a ~ E V*}. A derivation is a sequence of strings in V* s.t. the relation derives holds be- tween any two consecutive strings. In a rightmost derivation, at each step, the rightmost non-terminal say B is replaced by the right-hand side (RHS) of a B-production. Equivalently if a0 ~ ... ~ an is G G a rightmost derivation where the relation symbol is overlined by the production used at each step, we say that rl ... rn is a rightmost ao/a~-derivation. For a CFG G, the set of its rightmost S/x- derivations, where x E E(G), can itself be defined by a grammar. Definition 1 Let G = (VN,VT,P,S) be a CFG, its rightmost derivation grammar is the CFG D = (VN, P, pD, S) where pD _~ {A0 --~ A1... Aqr I r --- Ao --+ woAlwl.., wq_lAqwq E P Awi E V~ A Aj E LFrom the natural bijection between P and pD, we can easily prove that L:(D) = {r~...rl I rl ... rn is a rightmost S/x-derivation in G~ This shows that the rightmost derivation language of a CFG is also CF. We will show in section 4 that a similar result holds for LIGs. Following (Lang, 1994), CF parsing is the inter- section of a CFG and a finite-state automaton (FSA) which models the input string x 2. The result of this intersection is a CFG G x -- (V~, V~, px, ISIS) called a shared parse forest which is a specialization of the initial CFG G = (V~, VT, P, S) to x. Each produc- J E px, is the production ri E P up to some tion r i non-terminal renaming. The non-terminal symbols in V~ are triples denoted [A]~ where A E VN, and p and q are states. When such a non-terminal is productive, [A] q :~ w, we have q E 5(p, w). G ~ If we build the rightmost derivation grammar as- sociated with a shared parse forest, and we remove all its useless symbols, we get a reduced CFG say D ~ . The CF recognition problem for (G, x) is equivalent to the existence of an [S]~-production in D x. More- over, each rightmost S/x-derivation in G is (the re- verse of) a sentence in E(D*). However, this result is not very interesting since individual parse trees can be as easily extracted directly from the parse forest. 
This is due to the fact that in the CF case, a tree that is derived (a parse tree) contains all the information about its derivation (the sequence of rewritings used) and therefore there is no need to distinguish between these two notions. Though this is not always the case with non CF formalisms, we will see in the next sections that a similar approach, when applied to LIGs, leads to a shared parse for- est which is a LIG while it is possible to define a derivation grammar which is CF. 3 Linear Indexed Grammars An indexed grammar is a CFG in which stack of symbols are associated with non-terminals. LIGs are a restricted form of indexed grammars in which the dependence between stacks is such that at most one stack in the RHS of a production is related with the stack in its LHS. Other non-terminals are associated with independant stacks of bounded size. Following (Vijay-Shanker and Weir, 1994) Definition 2 L = (VN,VT,VI,PL,S) denotes a LIG where VN, VT, VI and PL are respectively fi- nite sets of non-terminals, terminals, stack symbols and productions, and S is the start symbol. In the sequel we will only consider a restricted 2if x = al... as, the states can be the integers 0... n, 0 is the initial state, n the unique final state, and the transition function 5 is s.t. i E 5(i-- 1, a~) and i E 5(i, ~). 88 form of LIGs with productions of the form PL = {A0 --+ w} U {A(..a) --+ PlB(..a')r2} where A,B • VN, W • V~A0 < [w[ < 2, aa' • V;A 0 < [aa'[ < 1 and r,r2 • v u( }u(c01 c • An element like A(..a) is a primary constituent while C0 is a secondary constituent. The stack schema (..a) of a primary constituent matches all the stacks whose prefix (bottom) part is left unspec- ified and whose suffix (top) part is a; the stack of a secondary constituent is always empty. Such a form has been chosen both for complexity reasons and to decrease the number of cases we have to deal with. However, it is easy to see that this form of LIG constitutes a normal form. We use r 0 to denote a production in PL, where the parentheses remind us that we are in a LIG! The CF-backbone of a LIG is the underlying CFG in which each production is a LIG production where the stack part of each constituent has been deleted, leaving only the non-terminal part. We will only consider LIGs such there is a bijection between its production set and the production set of its CF- backbone 3. We call object the pair denoted A(a) where A is a non-terminal and (a) a stack of symbols. Let Vo = {A(a) [ A • VN Aa • V;} be the set of objects. We define on (Vo LJ VT)* the binary relation derives denoted =~ (the relation symbol is sometimes L overlined by a production): r A(a"a)r L I i A()=~w rlA()r2 ' ' FlWF2 L In the first above element we say that the object B(a"a ~) is the distinguished child of A(a"a), and if F1F2 = C0, C0 is the secondary object. A deriva- tion F~,..., Fi, Fi+x,..., Ft is a sequence of strings where the relation derives holds between any two consecutive strings The language defined by a LIG L is the set: £(L) = {x [ S 0 :=~ x A x • V~ } L As in the CF case we can talk of rightmost deriva- tions when the rightmost object is derived at each step. Of course, many other derivation strategies may be thought of. For our parsing algorithm, we need such a particular derives relation. Assume that at one step an object derives both a distinguished 3rp and rp0 with the same index p designate associ- ated productions. child and a secondary object. 
Our particular deriva- tion strategy is such that this distinguished child will always be derived after the secondary object (and its descendants), whether this secondary object lays to its left or to its right. This derives relation is denoted =~ and is called linear 4. l,L A spine is the sequence of objects Al(al) • .. Ai(ai) Ai+l (~i+1)... Ap(ap) if, there is a deriva- tion in which each object Ai+l (ai+l) is the distin- guished child of Ai(ai) (and therefore the distin- guished descendant of Aj(aj), 1 <_ j <_ i). 4 Linear Derivation Grammar For a given LIG L, consider a linear SO~x-derivation so . . . . . . = t,L t,L l,L The sequence of productions rl0...riO...rnO (considered in reverse order) is a string in P~. The purpose of this section is to define the set of such strings as the language defined by some CFG. Associated with a LIG L = (VN, VT, VI, PL, S), we first define a bunch of binary relations which are borrowed from (Boullier , 1995) -4,- = {(A,B) [A(..) ~ r,B(..)r~ e PL} 1 "r -~ = {(A,B) I A(.. ) -~ rlB(..~)r2 e PL} 1 7 >- = {(A,B) I 4 rxB(..)r2 e PL} I -~ = {(A1,Ap) [A10 =~ rlA,()r~ and A,0 q- L is a distinguished descendant of A1 O} The l-level relations simply indicate, for each pro- duction, which operation can be apply to the stack associated with the LHS non-terminal to get the stack associated with its distinguished child; ~ in- 1 dicates equality, -~ the pushing of 3", and ~- the pop- 1 1 ping of 3'- If we look at the evolution of a stack along a spine A1 (ax)... Ai (ai)Ai+x (ai+x)... Ap (ap), be- tween any two objects one of the following holds: OL i ~ O~i+1, Oli3 , ~ OLi+I, or ai = ai+l~. The -O- relation select pairs of non-terminals + (A1, Ap) s.t. al = ap = e along non trivial spines. 4linear reminds us that we are in a LIG and relies upon a linear (total) order over object occurrences in a derivation. See (Boullier, 1996) for a more formal definition. 89 7 7 7 If the relations >- and ~ are defined as >-=>- + + 1 7 "/7 U ~-~- and ~---- UTev~ "<>', we can see that the +1 1+ following identity holds Property 1 --¢,- = -¢.-U~U-K>--~,-Uw.,--~- + 1 1 + + In (Boullier, 1995) we can found an algorithm s which computes the -~, >- and ~ relations as the + + composition of -,¢,-, -~ and ~- in O(IVNI 3) time. 1 1 1 Definition 3 For a LIG L = (VN, VT, Vz, PL, S), we call linear derivation grammar (LDG) the CFG DL (or D when L is understood) D = (VND, V D, pD, S D) where • V D={[A]IA•VN}U{[ApB]IA,B•VNA p • 7~}, and ~ is the set of relations {~,-¢,-,'Y 1 1 • VTD = pL • S ° = [S] • Below, [F1F2] symbol [X] when FIF2 = string e when F1F2 • V~. being denotes either the non-terminal X 0 or the empty po is defined as {[A] -+ r 0 I rO = AO -~ w • PL} (1) U{[A] -+ r0[A +-~ B]I r 0 = B 0 -+ w • PL} (2) UI[A +~- C] ~ [rlr~]r0 I r 0 = A(..) ~ r,c(..)r: • PL} (3) u{[A +-~ C] --+ [A ~ C]} (4) u{[A c] [B c][rlr:lr0 I r0 = AC) rls(..)r2 • PL} (5) (6) U{[A +-~ C] -> [B ~ C][A ~ B]} U{[A ~ C] ~ [B ~- c][rlr2]r0 I + r 0 = A(..) ~ rlB(..~)r2 • PL} (7) 5Though in the referred paper, these relations are de- fined on constituents, the algorithm also applies to non- terminals. 6In fact we will only use valid non-terminals [ApB] for which the relation p holds between A and B. U{[A ~ C] ~ [rlr~]r0 I -I- r0 = A(..7) ~ rlc(..)r~ • PL} (8) U{[A ~-+ C] --~ [F1F2]r0[A ~ S]l r0 = B(..-y) rlc(..)r, • (9) The productions in pD define all the ways lin- ear derivations can be composed from linear sub- derivations. 
This compositions rely on one side upon property 1 (recall that the productions in PL, must be produced in reverse order) and, on the other side, upon the order in which secondary spines (the rlF2- spines) are processed to get the linear derivation or- der. In (Boullier, 1996), we prove that LDGs are not ambiguous (in fact they are SLR(1)) and define £(D) = {nO.-.r-OISOr~)... r_~)x l,L f.,L Ax 6 £(L)} If, by some classical algorithm, we remove from D all its useless symbols, we get a reduced CFG say D' = (VN D' , VT D' , pD', SO' ). In this grammar, all its terminal symbols, which are productions in L, are useful. By the way, the construction of D' solve the emptiness problem for LIGs: L specify the empty set iff the set VT D' is empty 7. 5 LIG parsing Given a LIG L : (VN, VT, Vz, PL, S) we want to find all the syntactic structures associated with an input string x 6 V~. In section 2 we used a CFG (the shared parse forest) for representing all parses in a CFG. In this section we will see how to build a CFG which represents all parses in a LIG. In (Boullier, 1995) we give a recognizer for LIGs with the following scheme: in a first phase a general CF parsing algorithm, working on the CF-backbone builds a shared parse forest for a given input string x. In a second phase, the LIG conditions are checked on this forest. This checking can result in some subtree (production) deletions, namely the ones for which there is no valid symbol stack evaluation. If the re- sulting grammar is not empty, then x is a sentence. However, in the general case, this resulting gram- mar is not a shared parse forest for the initial LIG in the sense that the computation of stack of sym- bols along spines are not guaranteed to be consis- tent. Such invalid spines are not deleted during the check of the LIG conditions because they could be 7In (Vijay-Shanker and Weir, 1993) the emptiness problem for LIGs is solved by constructing an FSA. 90 composed of sub-spines which are themselves parts of other valid spines. One way to solve this problem is to unfold the shared parse forest and to extract individual parse trees. A parse tree is then kept iff the LIG conditions are valid on that tree. But such a method is not practical since the number of parse trees can be unbounded when the CF-backbone is cyclic. Even for non cyclic grammars, the number of parse trees can be exponential in the size of the input. Moreover, it is problematic that a worst case polynomial size structure could be reached by some sharing compatible both with the syntactic and the %emantic" features. However, we know that derivations in TAGs are context-free (see (Vijay-Shanker, 1987)) and (Vijay- Shanker and Weir, 1993) exhibits a CFG which rep- resents all possible derivation sequences in a TAG. We will show that the analogous holds for LIGs and leads to an O(n 6) time parsing algorithm. Definition 4 Let L = (VN, VT, VI, PL, S) be a LIG, G = (VN,VT,PG, S) its CF-backbone, x a string in E(G), and G ~ = (V~,V~,P~,S ~) its shared parse ]orest for x. We define the LIGed forest for x as being the LIG L ~ = (V~r, V~, VI, P~, S ~) s.t. G z is its CF-backbone and its productions are the productions o] P~ in which the corresponding stack-schemas o] L have been added. For exam- ple rg 0 = [AI~(..~) -4 [BI{(..~')[C]~0 e P~ iff J k r q = [A] k -4 [B]i[C]j e P~Arp = A -4 BC e G A rpO = A(..~) -4 B(..~')C 0 e n. 
Between a LIG L and its LIGed forest L ~ for x, we have: x~£(L) ¢==~ xCf~(L ~) If we follow(Lang, 1994), the previous definition which produces a LIGed forest from any L and x is a (LIG) parserS: given a LIG L and a string x, we have constructed a new LIG L ~ for the intersec- tion Z;(L) C) {x}, which is the shared forest for all parses of the sentences in the intersection. However, we wish to go one step further since the parsing (or even recognition) problem for LIGs cannot be triv- ially extracted from the LIGed forests. Our vision for the parsing of a string x with a LIG L can be summarized in few lines. Let G be the CF- backbone of L, we first build G ~ the CFG shared parse forest by any classical general CF parsing al- gorithm and then L x its LIGed forest. Afterwards, we build the reduced LDG DL~ associated with L ~ as shown in section 4. Sof course, instead of x, we can consider any FSA. 91 The recognition problem for (L, x) (i.e. is x an element of £(L)) is equivalent to the non-emptiness of the production set of OLd. Moreover, each linear SO~x-derivation in L is (the reverse of) a string in ff.(DL*)9. So the extraction of individual parses in a LIG is merely reduced to the derivation of strings in a CFG. An important issue is about the complexity, in time and space, of DL~. Let n be the length of the input string x. Since G is in binary form we know that the shared parse forest G x can be build in O(n 3) time and the number of its productions is also in O(n3). Moreover, the cardinality of V~ is O(n 2) and, for any given non-terminal, say [A] q, there are at most O(n) [A]g-productions. Of course, these complexities extend to the LIGed forest L z. We now look at the LDG complexity when the input LIG is a LIGed forest. In fact, we mainly have to check two forms of productions (see definition 3). The first form is production (6) ([A +-~ C] -+ [B + C][A ~-0 B]), where three different non-terminals in VN are implied (i.e. A, B and C), so the number of productions of that form is cubic in the number of non-terminals and therefore is O(n6). In the second form (productions (5), (7) and (9)), exemplified by [A ~ C] -4 [B ~ c][rlr2]r(), there ÷ are four non-terminals in VN (i.e. A, B, C, and X if FIF2 = X0) and a production r 0 (the number of relation symbols ~ is a constant), therefore, the ÷ number of such productions seems to be of fourth degree in the number of non-terminals and linear in the number of productions. However, these variables are not independant. For a given A, the number of triples (B,X, r0) is the number of A-productions hence O(n). So, at the end, the number of produc- tions of that form is O(nh). We can easily check that the other form of pro- ductions have a lesser degree. Therefore, the number of productions is domi- nated by the first form and the size (and in fact the construction time) of this grammar is 59(n6). This (once again) shows that the recognition and parsing problem for a LIG can be solved in 59(n 6) time. For a LDG D = (V D, V D, pD SD), we note that for any given non-terminal A E VN D and string a E £:(A) with [a[ >_ 2, a single production A -4 X1X2 or A -4 X1X2X3 in pD is needed to "cut" a into two or three non-empty pieces al, 0"2, and 0-3, such that °In fact, the terminal symbols in DL~ axe produc- tions in L ~ (say Rq()), which trivially can be mapped to productions in L (here rp()). Xi ~ a{, except when the production form num- D bet (4) is used. In such a case, this cutting needs two productions (namely (4) and (7)). 
This shows that the cutting out of any string of length l, into elementary pieces of length 1, is performed in using O(l) productions. Therefore, the extraction of a lin- ear so~x-derivation in L is performed in time linear with the length of that derivation. If we assume that the CF-backbone G is non cyclic, the extraction of a parse is linear in n. Moreover, during an extrac- tion, since DL= is not ambiguous, at some place, the choice of another A-production will result in a dif- ferent linear derivation. Of course, practical generations of LDGs must im- prove over a blind application of definition 3. One way is to consider a top-down strategy: the X- productions in a LDG are generated iff X is the start symbol or occurs in the RHS of an already generated production. The examples in section 6 are produced this way. If the number of ambiguities in the initial LIG is bounded, the size of DL=, for a given input string x of length n, is linear in n. The size and the time needed to compute DL. are closely related to the actual sizes of the -<~-, >- and + + relations. As pointed out in (Boullier, 1995), their O(n 4) maximum sizes seem to be seldom reached in practice. This means that the average parsing time is much better than this (..9(n 6) worst case. Moreover, our parsing schema allow to avoid some useless computations. Assume that the symbol [A ~ B] is useless in the LDG DL associated with the initial LIG L, we know that any non-terminal s.t. [[A]{ +-~ [B]~] is also useless in DL=. Therefore, the static computation of a reduced LDG for the initial LIG L (and the corresponding -¢-, >- and .~ + + relations) can be used to direct the parsing process and decrease the parsing time (see section 6). 6 Two Examples 6.1 First Example In this section, we illustrate our algorithm with a LIG L -- ({S, T], {a, b, c}, {7~, 75, O'c}, PL, S) where PL contains the following productions: ~ 0 : s(..) -+ s(..eo)~ r30 : s(..) --+ S(..%)c rhO : T(..7~) --+ aT(..) rT0 = T(..%) -+ cT(..) r20 = S(..) --+ S(..Tb)b r40 = S(..) --+ T(..) r60 = T(..%) -+ bT(..) rs0 = T0 --+ c It is easy to see that its CF-backbone G, whose 92 production set Pc is: S-+ Sa S-~ Sb S-+ S c S-~ T T-}aT T -+ bT T -~ cT T -+ c defines the language £(G) = {wcw' I w,w' 6 {a, b, c]*}. We remark that the stacks of symbols in L constrain the string w' to be equal to w and there- fore the language £(L) is {wcw I w 6 {a, b, c]*}. We note that in L the key part is played by the middle c, introduced by production rs0, and that this grammar is non ambiguous, while in G the sym- bol c, introduced by the last production T ~ c, is only a separator between w and w' and that this grammar is ambiguous (any occurrence of c may be this separator). The computation of the relations gives: + = {(S,T)} 1 9% "{b 9"¢ = ~ = ~ = {(s,s)} 1 1 1 9% "Tb ~c >- = >- = >- = ~(T,T]] 1 1 1 + = {(S,T)} + = {(S,T)} 9'a 9'5 '7c >.- = >- = >- = {(T,T),(S,T)} + + + The production set pD of the LDG D associated with L is: [S] --+ rs0[S -~+ T] (2) IS T T] -+ ~0 (3) [S +-~T] --+ [S~T] (4) IS ~ T] --+ [S ~ T]rl 0 (7) [S ~ T] --+ [S ,~ T]r20 (7) [S ~ T] =-+ IS ~- T]ra 0 (7) + [S ~ T] -=+ rh()[S +-~ T] (9) IS ~:+ T] + ~()[S ~ T] (9) [S ~ T] --+ rT0[S -~+ T] (9) The numbers (i) refer to definition 3. We can easily checked that this grammar is reduced. Let x = ccc be an input string. Since x is an element of £(G), its shared parse forest G x is not empty. 
Its production set P~ is: rl = [s]~ -+ [s]~c r~ = [S]o ~ -+ [S]~c r4 ~ = [s]~ --+ IT] 1 r~ = [T]I 3 --+ c[T] 3 r 9 = [T]~ =+ c[T] 2 ~1 = [T]~ -+ c r~ = [S]~ -+ [T]o ~ r44 = [S]~ --~ [T]o 2 r~ = [T]3o =-+ c[T]31 rs s = [T] 3 --+ c rs 1° = [T]~ --+ c We can observe that this shared parse forest denotes in fact three different parse trees. Each one corre- sponding to a different cutting out of x = wcw' (i.e. w = ~ and w' = ce, or w : c and w' = c, or w = ec and w' = g). The corresponding LIGed forest whose start sym- bol is S * = [S]~ and production set P~ is: r~0 = [S]o%.) -~ [s]~(..%)¢ ~0 = IS]0%.) -, IT]o%.) ~0 = [S]o%.) ~ [S]o~(..%)c ~40 = [s]~(..) -~ IT]o%.) ~0 = ISIS(..) ~ [T]~(..) r60 T 3 = []0(..%) -~ ~[T]~(..) r~0 : [T]3(..%) ~ c[T]23(..) rsS0 = [T]~ 0 --+ c r~0 = [T]o%.%) -~ c[T]~(..) r~°0 : [T]~ 0 -+ e ~0 = [T]~0 -~ c For this LIGed forest the relations are: 1 1 ")'c 1 + >- __=_ + (([S]o a, [T]oa), ([S]o 2, [T]o2), ([S]o 1, [T]ol) } {(IsiS, [s]o~), ([S]o ~, IsiS)} { ([T]o 3, [T]~), ([T] 3 , [T]23), ([T]o 2 , [T]2) } {([s]~0, [T]~)} -¢.- (3 ~ 1 U{ ([S]o 3, [T]13), ([S]o 2, [T]~) } The start symbol of the LDG associated with the LIGed forest L * is [[S]o3]. If we assume that an A- production is generated iff it is an [[S]o3]-production or A occurs in an already generated production, we get: [[S]o ~] ~ ~°()[[s]~ +~ [T]~] (2) [[S]~ +~ [T]~] -+ [[S]o ~ ~ [Th'] (4) [[S] a ~-. [TIll -+ [[S]o 2 ~2 [T]~]r~ () (7) + [[S]o ~ ~:+ [T]~] -~ ~()[[S]o ~ ~+ [T]o ~1 (9) [[S]~ ~ [T]~] ~ ~0 (3) This CFG is reduced. Since its production set is non empty, we have ccc E ~(L). Its language is {r~ ° 0 r9 0 r4 ()r~ 0 } which shows that the only linear derivation in L is S() ~) S(%)c r~) T(Tc)C r=~) t,L t,L l,L eT()c ~) ccc. g,L 93 In computing the relations for the initial LIG L, we remark that though T ~2 T, T ~ T, and T ~ T, + + + the non-terminals IT ~ T], [T ~ T], and IT ~: T] are + + not used in pp. This means that for any LIGed for- est L ~, the elements of the form ([Tip q, [T]~:) do not ")'a need to be computed in the ~+, ~+ , and ~:+ relations since they will never produce a useful non-terminal. In this example, the subset ~: of ~: is useless. 1 -b The next example shows the handling of a cyclic grammar. 6.2 Second Example The following LIG L, where A is the start symbol: rl() = A(..) ~ A(..%) r2() = A(..) ~ B(..) r30 = B(..%) -~ B(..) r40 = B0 ~ a is cyclic (we have A =~ A and B =~ B in its CF- backbone), and the stack schemas in production rl 0 indicate that an unbounded number of push % ac- tions can take place, while production r3 0 indicates an unbounded number of pops. Its CF-backbone is unbounded ambiguous though its language contains the single string a. The computation of the relations gives: -~- = {(A,B)} 1 -< = {(A,A)} 1 >- = {(B,B)} 1 + = {(A,B)} + = {(d, B)} 7a ~- = {(A, B), (B, B)} + The start symbol of the LDG associated with L is [A] and its productions set pO is: [A] -+ r40[A +-~ B] (2) [A +~B] -+ r20 (3) [A +~-B] ~ [A~B] (4) [A ~ B] -~ [A ~ B]rl 0 (7) + [A ~2 B] -~ r3 0[A +~- B] (9) + We can easily checked that this grammar is re- duced. We want to parse the input string x -- a (i.e. find all the linear SO/a-derivations ). Its LIGed forest, whose start = = [Aft(..) = = [B]o 0 For this LIGed 1 7a ..< 1 1 .<,- + "t,* + symbol is [A]~ is: -, [Aft(..%) [B]~(..) --+ [B]~(..) a forest L x, the relations are: {(JAIL = {([Aft, [Aft)} = = {([Aft, -= {([Aft, [B]ol)} = {([A]~, [B]~), (IBIS, [B]~)} The start symbol of the LDG associated with L x is [[A]~]. 
If we assume that an A-production is gen- erated iff it is an [[A]~]-production or A occurs in an already generated production, its production set is: [[AI~] -+ r~()[[A]~ +-~ [S] 11 (2) [[A]~ -~+ [B]~] -+ r220 (3) [[A]~ +-~ [B]01] ~ [[A]o 1 ~ [B]o 1] (4) [[A]~ ~. [B]01] -+ [[A]~ ~: [B]~]r I 0 (7) + [[A]~ ~+ [B]~] --4 r3()[[A]l o ~ [S]10] (9) This CFG is reduced. Since its production set is non empty, we have a 6 £(L). Its language is {r4(){r]())kr~O{r~O} k ]0 < k) which shows that the only valid linear derivations w.r.t. L must con- tain an identical number k of productions which push 7a (i.e. the production rl0) and productions which pop 7a (i.e. the production r3()). As in the previous example, we can see that the element [S]~ ~ [B]~ is useless. + 7 Conclusion We have shown that the parses of a LIG can be rep- resented by a non ambiguous CFG. This represen- tation captures the fact that the values of a stack of symbols is well parenthesized. When a symbol 3' is pushed on a stack at a given index at some place, this very symbol must be popped some place else, and we know that such (recursive) pairing is the essence of context-freeness. In this approach, the number of productions and the construction time of this CFG is at worst O(n6), 94 though much better results occur in practical situa- tions. Moreover, static computations on the initial LIG may decrease this practical complexity in avoid- ing useless computations. Each sentence in this CFG is a derivation of the given input string by the LIG, and is extracted in linear time. References Pierre Boullier. 1995. Yet another (_O(n 6) recog- nition algorithm for mildly context-sensitive lan- guages. In Proceedings of the fourth international workshop on parsing technologies (IWPT'95), Prague and Karlovy Vary, Czech Republic, pages 34-47. See also Research Report No 2730 at http: I/www. inria, fr/R2~T/R~-2730.html, INRIA-Rocquencourt, France, Nov. 1995, 22 pages. Pierre Boullier. 1996. Another Facet of LIG Parsing (extended version). In Research Report No P858 at http://www, inria, fr/RRKT/KK-2858.html, INRIA-Rocquencourt, France, Apr. 1996, 22 pages. Bernard Lang. 1991. Towards a uniform formal framework for parsing. In Current Issues in Pars- ing Technology, edited by M. Tomita, Kluwer Aca- demic Publishers, pages 153-171. Bernard Lang. 1994. Recognition can be harder than parsing. In Computational Intelligence, Vol. 10, No. 4, pages 486-494. Yves Schabes, Stuart M. Shieber. 1994. An Alter- native Conception of Tree-Adjoining Derivation. In ACL Computational Linguistics, Vol. 20, No. 1, pages 91-124. K. Vijay-Shanker. 1987. A study of tree adjoining grammars. PhD thesis, University of Pennsylva- nia. K. Vijay-Shanker, David J. Weir. 1993. The Used of Shared Forests in Tree Adjoining Grammar Pars- ing. In Proceedings of the 6th Conference of the European Chapter of the Association for Com- putational Linguistics (EACL'93), Utrecht, The Netherlands, pages 384-393. K. Vijay-Shanker, David J. Weir. 1994. Parsing some constrained grammar formalisms. In A CL Computational Linguistics, Vol. 19, No. 4, pages 591-636. K. Vijay-Shanker, David J. Weir, Owen Rambow. 1995. Parsing D-Tree Grammars. In Proceed- ings of the fourth international workshop on pars- ing technologies (IWPT'95), Prague and Karlovy Vary, Czech Republic, pages 252-259. | 1996 | 12 |
Parsing for Semidirectional Lambek Grammar is NP-Complete Jochen Dfrre Institut ffir maschinelle Sprachverarbeitung University of Stuttgart Abstract We study the computational complexity of the parsing problem of a variant of Lambek Categorial Grammar that we call semidirectional. In semidirectional Lambek calculus SD[ there is an additional non- directional abstraction rule allowing the formula abstracted over to appear any- where in the premise sequent's left-hand side, thus permitting non-peripheral ex- traction. SD[ grammars are able to gen- erate each context-free language and more than that. We show that the parsing prob- lem for semidireetional Lambek Grammar is NP-complete by a reduction of the 3- Partition problem. Key words: computational complexity, Lambek Categorial Grammar 1 Introduction Categorial Grammar (CG) and in particular Lambek Categorial Grammar (LCG) have their well-known benefits for the formal treatment of natural language syntax and semantics. The most outstanding of these benefits is probably the fact that the specific way, how the complete grammar is encoded, namely in terms of 'combinatory potentials' of its words, gives us at the same time recipes for the construction of meanings, once the words have been combined with others to form larger linguistic entities. Although both frameworks are equivalent in weak generative capacity -- both derive exactly the context-free lan- guages --, LCG is superior to CG in that it can cope in a natural way with extraction and unbounded de- pendency phenomena. For instance, no special cate- gory assignments need to be stipulated to handle a relative clause containing a trace, because it is an- alyzed, via hypothetical reasoning, like a traceless clause with the trace being the hypothesis to be dis- charged when combined with the relative pronoun. Figure 1 illustrates this proof-logical behaviour. No- tice that this natural-deduction-style proof in the type logic corresponds very closely to the phrase- structure tree one would like to adopt in an analysis with traces. We thus can derive Bill misses ~ as an s from the hypothesis that there is a "phantom" np in the place of the trace. Discharging the hypoth- esis, indicated by index 1, results in Bill misses being analyzed as an s/np from zero hypotheses. Ob- serve, however, that such a bottom-up synthesis of a new unsaturated type is only required, if that type is to be consumed (as the antecedent of an impli- cation) by another type. Otherwise there would be a simpler proof without this abstraction. In our ex- ample the relative pronoun has such a complex type triggering an extraction. A drawback of the pure Lambek Calculus !_ is that it only allows for so-called 'peripheral extraction', i.e., in our example the trace should better be initial or final in the relative clause. This inflexibility of Lambek Calculus is one of the reasons why many researchers study richer systems today. For instance, the recent work by Moortgat (Moortgat 94) gives a systematic in-depth study of mixed Lambek systems, which integrate the systems L, NL, NLP, and LP. These ingredient systems are obtained by varying the Lambek calculus along two dimensions: adding the permutation rule (P) and/or dropping the assumption that the type combinator (which forms the sequences the systems talk about) is associative (N for non-associative). Taken for themselves these variants of I_ are of lit- tle use in linguistic descriptions. 
But in Moortgat's mixed system all the different resource management modes of the different systems are left intact in the combination and can be exploited in different parts of the grammar. The relative pronoun which would, for instance, receive category (np\np)/(np --o s) with --o being implication in LP, 1 i.e., it requires 1The Lambek calculus with permutation I_P is also called the "nondirectional Lambek calculus" (Ben- them 88). In it the leftward and rightward implication 95 (the book) which (np\np)/(s/np) misses e (n;\8)/n; Bill ~ ~ np np\s 8 I s/npl np\np Figure 1: Extraction as resource-conscious hypothetical reasoning as an argument "an s lacking an np somewhere" .2. The present paper studies the computational com- plexity of a variant of the Lambek Calculus that lies between / and tP, the Semidirectional Lambek Cal- culus SDk. 3 Since tP derivability is known to be NP- complete, it is interesting to study restrictions on the use of the I_P operator -o. A restriction that leaves its proposed linguistic applications intact is to admit a type B -o A only as the argument type in func- tional applications, but never as the functor. Stated prove-theoretically for Gentzen-style systems, this amounts to disallowing the left rule for -o. Surpris- ingly, the resulting system SD[. can be stated with- out the need for structural rules, i.e., as a monolithic system with just one structural connective, because the ability of the abstracted-over formula to permute can be directly encoded in the right rule for --o. 4 Note that our purpose for studying SDI_ is not that it might be in any sense better suited for a theory of grammar (except perhaps, because of its simplicity), but rather, because it exhibits a core of logical be- haviour that any richer system also needs to include, at least if it should allow for non-peripheral extrac- tion. The sources of complexity uncovered here are thus a forteriori present in all these richer systems as well. collapse. 2Morrill (Morrill 94) achieves the same effect with a permutation modality /k apphed to the np gap: (s/Anp) SThis name was coined by Esther K6nig-Baumer, who employs a variant of this calculus in her LexGram system (KSnig 95) for practical grammar development. 4It should be pointed out that the resource manage- ment in this calculus is very closely related to the han- dhng and interaction of local valency and unbounded dependencies in HPSG. The latter being handled with set-valued features SLASH, QUE and KEL essentially emu- lates the permutation potential of abstracted categories in semidirectional Lambek Grammar. A more detailed analysis of the relation between HPSG and SD[ is given in (KSnig 95). 2 Semidirectional Lambek Grammar 2.1 Lambek calculus The semidirectional Lambek calculus (henceforth SDL) is a variant of J. Lambek's original (Lam- bek 58) calculus of syntactic types. We start by defining the Lambek calculus and extend it to ob- tain SDL. Formulae (also called "syntactic types") are built from a set of propositional variables (or "primitive types") B = {bl, b2,...} and the three binary con- nectives • , \,/, called product, left implication, and right implication. We use generally capital letters A, B, C,... to denote formulae and capitals towards the end of the alphabet T, U, V, ... to denote sequences of formulae. The concatenation of sequences U and V is denoted by (U, V). The (usual) formal framework of these logics is a Gentzen-style sequent calculus. 
Sequents are pairs (U, A), written as U =~ A, where A is a type and U is a sequence of types. 5 The claim embodied by se- quent U =~ A can be read as "formula A is derivable from the structured database U". Figure 2 shows Lambek's original calculus t. First of all, since we don't need products to obtain our results and since they only complicate matters, we eliminate products from consideration in the se- quel. In Semidirectional Lambek Calculus we add as ad- ditional connective the [_P implication --% but equip it only with a right rule. U, B, V :=~ A (-o R) if T = (U, Y) nonempty. T :~ B --o A 5In contrast to Linear Logic (Girard 87) the order of types in U is essential, since the structural rule of permutation is not assumed to hold. Moreover, the fact that only a single formula may appear on the right of ~, make the Lambek calculus an intuitionistic fragment of the multiplicative fragment of non-commutative propo- sitional Linear Logic. 96 (Ax) T~B U,A,V=~C U, A/B, T, V =~ C (/L) U,B ~A U ::~ A/B (/1~) if U nonempty T ::v B U,A, V =v C U, T, B\A, V =~ C (\L) B,U~A U =~ B\A (\R) if U nonempty U,A,B, V =~ C (.L) U, AoB, V =~ C UsA V~B (.R) U,V =~ A.B T~A U,A,V=¢,C (Cut) U, T, V =~ U Figure 2: Lambek calculus L Let us define the polarity of a subformula of a se- quent A1, • •., Am ::~ A as follows: A has positive po- larity, each of Ai have negative polarity and if B/C or C\B has polarity p, then B also has polarity p and C has the opposite polarity of p in the sequent. A consequence of only allowing the (-o R) rule, which is easily proved by induction, is that in any derivable sequent --o may only appear in positive polarity. Hence, -o may not occur in the (cut) for- mula A of a (Cut) application and any subformula B -o A which occurs somewhere in the prove must also occur in the final sequent. When we assume the final sequent's RHS to be primitive (or --o-less), then the (-o R) rule will be used exactly once for each (positively) occuring -o-subformula. In other words, (-o R) may only do what it is supposed to do: ex- traction, and we can directly read off the category assignment which extractions there will be. We can show Cut Elimination for this calculus by a straight-forward adaptation of the Cut elimination proof for L. We omit the proof for reasons of space. Proposition 1 (Cut Elimination) Each SDL-derivable sequent has a cut-free proof. The cut-free system enjoys, as usual for Lambek-like logics, the Subformula Property: in any proof only subformulae of the goal sequent may appear. In our considerations below we will make heavy use of the well-known count invariant for Lambek sys- tems (Benthem 88), which is an expression of the resource-consciousness of these logics. Define #b(A) (the b-count of A), a function counting positive and negative occurrences of primitive type b in an arbi- 97 trary type A, to be if A= b if A primitive and A ~ b #b(A)= #b(B)-#b(C)ifA=B/CorA=V\B or A=C-o B [.#b(B) + #b(C) ifA = B. C The invariant now states that for any primitive b, the b-count of the RHS and the LHS of any derivable sequent are the same. By noticing that this invariant is true for (Ax) and is preserved by the rules, we immediately can state: Proposition 2 (Count Invariant) If I-sb L U ==~ A, then #b(U) = #b(A) fo~ any b ~ t~. Let us in parallel to SDL consider the fragment of it in which (/R) and (\R) are disallowed. We call this fragment SDL-. Remarkable about this fragment is that any positive occurrence of an implication must be --o and any negative one must be / or \. 
2.2 Lambek Grammar Definition 3 We define a Lambek grammar to be a quadruple (E, ~r, bs, l) consisting of the finite alpha- bet of terminals E, the set jr of all Lambek formulae generated from some set of propositional variables which includes the distinguished variable s, and the lezical map l : ~, --* 2 7 which maps each terminal to a finite subset off. We extend the lexical map l to nonempty strings of terminals by setting l(wlw2...w~) := l(wl) × l(w~) x ... x l(w,) for wlw2...wn E ~+. The language generated by a Lambek grammar G = (~,~',bs,l) is defined as the set of all strings wlw~...wn E ~+ for which there exists a sequence x==~x x==~x B~, B2, C~, C2, c n+l, b n+l => y (*) B~, B2, C~, C2, c n, b n ~ c --o (b --o y) A2, B[, B2, C~, C2, c n, b n =* x n--1 A 1 , A2, B~, B2, C~, C2, c, b =v x A~ -1, A2, B~', B2, C~, C2 =~ c -0 (b -0 x) A?, A2, B~, B2, C{ ~, C2 ==> x Figure 3: Proof of A~, A2, B~, B2, C~, C2 =~ z 2x(-on) (]L) 2x(--on) (/L) of types U E l(wlw2...wn) and k k U ~ bs. We denote this language by L(G). An SDL-grammar is defined exactly like a Lambek grammar, except that kSD k replaces kl_. Given a grammar G and a string w = WlW2... wn, the parsing (or recognition) problem asks the ques- tion, whether w is in L(G). It is not immediately obvious, how the generative capacity of SDL-grammars relate to Lambek gram- mars or nondirectional Lambek grammars (based on calculus LP). Whereas Lambek grammars gener- ate exactly the context-free languages (modulo the missing empty word) (Pentus 93), the latter gen- erate all permutation closures of context-free lan- guages (Benthem 88). This excludes many context- free or even regular languages, but includes some context-sensitive ones, e.g., the permutation closure of a n b n c n . Concerning SD[, it is straightforward to show that all context-free languages can be generated by SDL- grammars• Proposition 4 Every context-free language is gen- erated by some SDL-grammar. Proof. We can use a the standard transformation of an arbitrary cfr. grammar G = (N, T, P, S) to a categorial grammar G'. Since -o does not appear in G' each SDl_-proof of a lexical assignment must be also an I_-proof, i.e. exactly the same strings are judged grammatical by SDL as are judged by L. D Note that since the {(Ax), (/L), (\L)} subset of I_ already accounts for the cfr. languages, this obser- vation extends to SDL-. Moreover, some languages which are not context-free can also be generated. Example. Consider the following grammar G for the language anbnc n. We use primitive types B = {b, c, x, y, z} and define the lexical map for E = 98 {a, b, c} as follows: l(a) := { x/(c ---o (b -o x)), xl(c ---o (b -o y)) } = )41 = A2 ----CI = C2 The distinguished primitive type is x• To simplify the argumentation, we abbreviate types as indicated above• Now, observe that a sequent U =~ x, where U is the image of some string over E, only then may have bal- anced primitive counts, if U contains exactly one oc- currence of each of A2, B2 and C2 (accounting for the one supernumerary x and balanced y and z counts) and for some number n >_ 0, n occurrences of each of A1, B1, and C1 (because, resource-oriented speak- ing, each Bi and Ci "consume" a b and c, resp., and each Ai "provides" a pair b, c). Hence, only strings containing the same number of a's, b's and c's may be produced. 
Furthermore, due to the Subformula Property we know that in a cut-free proof of U ~ x, the mMn formula in abstractions (right rules) may only be either c -o (b --o X) or b -o X, where X E {x,y}, since all other implication types have primitive antecedents. Hence, the LHS of any se- quent in the proof must be a subsequence of U, with some additional b types and c types interspersed. But then it is easy to show that U can only be of the form Anl, A2, B~, B2, C~, C2, since any / connective in U needs to be introduced via (/L). It remains to be shown, that there is actually a proof for such a sequent• It is given in Figure 3. The sequent marked with * is easily seen to be deriv- able without abstractions. A remarkable point about SDL's ability to cover this language is that neither L nor LP can generate it. Hence, this example substantiates the claim made in (Moortgat 94) that the inferential capacity of mixed Lambek systems may be greater than the sum of its component parts. Moreover, the attentive reader will have noticed that our encoding also extends to languages having more groups of n symbols, i.e., to languages of the form n n n al a2 ... a k • Finally, we note in passing that for this grammar the rules (/R) and (\R) are irrelevant, i.e. that it is at the same time an SOL- grammar. 3 NP-Completeness of the Parsing Problem We show that the Parsing Problem for SDL- grammars is NP-complete by a reduction of the 3-Partition Problem to it. 6 This well-known NP- complete problem is cited in (GareyJohnson 79) as follows. Instance: Set ,4 of 3m elements, a bound N E Z +, and a size s(a) E Z + for each a E `4 such that ~ < s(a) < ~- and ~o~ s(a) = mN. Question: Can `4 be partitioned into m disjoint sets `41,`42,...,Am such that, for 1 < i < m, ~ae.a s(a) = N (note that each `4i must 'therefore contain exactly 3 elements from `4)? Comment: NP-complete in the strong sense. Here is our reduction. Let F = (`4, m,N,s) be a given 3-Partition instance. For notational conve- nience we abbreviate (...((A/BI)/B~)/...)/Bn by A/B~ •...• B2 • B1 and similarly B, -o (... (B1 --o A)...) by Bn •... • B2 • B1 --o A, but note that this is just an abbreviation in the product-free fragment. Moreover the notation A k stands for AoAo ...oA k t~mes We then define the SDL-grammar Gr = (~, ~, bs, l) as follows: p, := {v, wl,..., warn} 5 t" := all formulae over primitive types m b B = {a,d}UUi=,{ i,c,:} bs :--= a • for l<i<3rn-l: l(wi) := UJ.<./<m d/d • bj • c: (~') 6A similar reduction has been used in (LincolnWin- kler 94) to show that derivability in the multiplicative fragment of propositional Linear Logic with only the con- nectives --o and @ (equivalently Lambek calculus with permutation LP) is NP-complete. 99 The word we are interested in is v wl w2...w3m. We do not care about other words that might be generated by Gr. Our claim now is that a given 3-Partition problem F is solvable if and only if v wl ... w3m is in L(Gr). We consider each direction in turn. Lemma 5 (Soundness) If a 3-Partition problem F = (A,m,N,s) has a solution, then vwl...w3m is in/(Gr). Proof. We have to show, when given a solution to F, how to choose a type sequence U ~ l(vwl...wzm) and construct an SDL proof for U ==~ a. Suppose `4 = {al,a2,...,a3m}. From a given solution (set of triples) A1,`4~,... ,-Am we can compute in poly- nomial time a mapping k that sends the index of an element to the index of its solution triple, i.e., k(i) = j iff ai e `4j. 
To obtain the required sequence U, we simply choose for the wi terminals the type • cS(a3"~) • c ~("~) (resp. d/bk(3m) k(3m) for W3m). did • bk(i) k(i) Hence the complete sequent to solve is: N d) a/(b 3 •b 3 •...•b3m ac N •c N •...•c m -o did • bko) • %(1) cS(a3,.-1) (*) did • bk(3m-1) • k(am-1) dlb • cS(a3") / k(3m) k(zm) Let a/Bo, B1,...B3m ~ a be a shorthand for (*), and let X stand for the sequence of primitive types c~(,,~,.) c~(,~.,,-~) c~(,~,) bk(3m), k(3m),bk(3m-l), k(3,~_l),...bko), k(1)" Using rule (/L) only, we can obviously prove B1, . . . B3m , X ::~ d. Now, applying (--o R) 3m + N m times we can obtain B1,...B3m =~ B0, since there are in total, for each i, 3 bi and N ci in X. As final step we have BI,...B3m ~ B0 a ~ a a/Bo, BI,... B3m ~ a (/L) which completes the proof. [] Lemma 6 (Completeness) Let F = (.4, m, N, s) be an arbitrary 3-Partition problem and Gr the cor- responding SDL-grammar as defined above. Then F has a solution, if v wl... w3m is in L(Gr). Proof. Let v wl... W3m 6 L(Gr) and N d), B1,. • • Bsm ~ a a/(b? .....em -o be a witnessing derivable sequent, i.e., for 1 < i < 3m, Bi E l(wi). Now, since the counts of this se- quent must be balanced, the sequence B1,...B3m must contain for each 1 _< j < m exactly 3 bj and exactly N cj as subformulae. Therefore we can read off the solution to F from this sequent by including in Aj (for 1 < j < m) those three ai for which Bi has an occurrence of bj, say these are aj(1), aj(2) and aj(3). We verify, again via balancedness of the prim- itive counts, that s(aj(1)) ÷ s(aj(2)) + s(aj(3)) = N holds, because these are the numbers of positive and negative occurrences of cj in the sequent. This com- pletes the proof. [] The reduction above proves NP-hardness of the pars- ing problem. We need strong NP-completeness of 3-Partition here, since our reduction uses a unary encoding. Moreover, the parsing problem also lies within NP, since for a given grammar G proofs are linearly bound by the length of the string and hence, we can simply guess a proof and check it in polyno- mial time. Therefore we can state the following: Theorem 7 The parsing problem for SDI_ is NP- complete. Finally, we observe that for this reduction the rules (/R) and (\R) are again irrelevant and that we can extend this result to SDI_-. 4 Conclusion We have defined a variant of Lambek's original cal- culus of types that allows abstracted-over categories to freely permute. Grammars based on SOl- can generate any context-free language and more than that. The parsing problem for SD[, however, we have shown to be NP-complete. This result indi- cates that efficient parsing for grammars that al- low for large numbers of unbounded dependencies from within one node may be problematic, even in the categorial framework. Note that the fact, that this problematic case doesn't show up in the correct analysis of normal NL sentences, doesn't mean that a parser wouldn't have to try it, unless some arbi- trary bound to that number is assumed. For practi- cal grammar engineering one can devise the motto avoid accumulation of unbounded dependencies by whatever means. On the theoretical side we think that this result for S01 is also of some importance, since SDI_ exhibits a core of logical behaviour that any (Lambek-based) logic must have which accounts for non-peripheral extraction by some form of permutation. And hence, this result increases our understanding of the nec- essary computational properties of such richer sys- tems. 
To our knowledge, the question of whether the Lambek calculus itself, or its associated parsing problem, is NP-hard is still open.

References

J. van Benthem. The Lambek Calculus. In R. T. Oehrle et al. (Eds.), Categorial Grammars and Natural Language Structures, pp. 35-68. Reidel, 1988.

M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco, Cal., 1979.

J.-Y. Girard. Linear Logic. Theoretical Computer Science, 50(1):1-102, 1987.

E. König. LexGram - a practical categorial grammar formalism. In Proceedings of the Workshop on Computational Logic for Natural Language Processing. A Joint COMPULOGNET/ELSNET/EAGLES Workshop, Edinburgh, Scotland, April 1995.

J. Lambek. The Mathematics of Sentence Structure. American Mathematical Monthly, 65(3):154-170, 1958.

P. Lincoln and T. Winkler. Constant-Only Multiplicative Linear Logic is NP-Complete. Theoretical Computer Science, 135(1):155-169, Dec. 1994.

M. Moortgat. Residuation in Mixed Lambek Systems. In M. Moortgat (Ed.), Lambek Calculus: Multimodal and Polymorphic Extensions, DYANA-2 deliverable R1.1.B. ESPRIT, Basic Research Project 6852, Sept. 1994.

G. Morrill. Type Logical Grammar: Categorial Logic of Signs. Kluwer, 1994.

M. Pentus. Lambek grammars are context free. In Proceedings of Logic in Computer Science, Montreal, 1993. | 1996 | 13 |
Computing Optimal Descriptions for Optimality Theory Grammars with Context-Free Position Structures Bruce Tesar The Rutgers Center for Cognitive Science / The Linguistics Department Rutgers University Piscataway, NJ 08855 USA tesar@ruccs, rutgers, edu Abstract This paper describes an algorithm for computing optimal structural descriptions for Optimality Theory grammars with context-free position structures. This algorithm extends Tesar's dynamic pro- gramming approach (Tesar, 1994) (Tesar, 1995@ to computing optimal structural descriptions from regular to context-free structures. The generalization to context- free structures creates several complica- tions, all of which are overcome without compromising the core dynamic program- ming approach. The resulting algorithm has a time complexity cubic in the length of the input, and is applicable to gram- mars with universal constraints that ex- hibit context-free locality. 1 Computing Optimal Descriptions in Optimality Theory In Optimality Theory (Prince and Smolensky, 1993), grammaticality is defined in terms of optimization. For any given linguistic input, the grammatical structural description of that input is the descrip- tion, selected from a set of candidate descriptions for that input, that best satisfies a ranked set of uni- versal constraints. The universal constraints often conflict: satisfying one constraint may only be pos- sible at the expense of violating another one. These conflicts are resolved by ranking the universal con- straints in a strict dominance hierarchy: one viola- tion of a given constraint is strictly worse than any number of violations of a lower-ranked constraint. When comparing two descriptions, the one which better satisfies the ranked constraints has higher Harmony. Cross-linguistic variation is accounted for by differences in the ranking of the same constraints. The term linguistic input should here be under- stood as something like an underlying form. In phonology, an input might be a string of segmental material; in syntax, it might be a verb's argument structure, along with the arguments. For exposi- tional purposes, this paper will assume linguistic in- puts to be ordered strings of segments. A candidate structural description for an input is a full linguis- tic description containing that input, and indicating what the (pronounced) surface realization is. An im- portant property of Optimality Theory (OT) gram- mars is that they do not accept or reject inputs; every possible input is assigned a description by the grammar. The formal definition of Optimality Theory posits a function, Gen, which maps an input to a large (of- ten infinite) set of candidate structural descriptions, all of which are evaluated in parallel by the universal constraints. An OT grammar does not itself specify an algorithm, it simply assigns a grammatical struc- tural description to each input. However, one can ask the computational question of whether efficient algorithms exist to compute the description assigned to a linguistic input by a grammar. The most apparent computational challenge is posed by the allowance of faithfulness violations: the surface form of a structural description may not be identical with the input. Structural positions not filled with input segments constitute overpars- ing (epenthesis). Input segments not parsed into structural positions do not appear in the surface pro- nunciation, and constitute underparsing (deletion). 
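Strict domination amounts to a lexicographic comparison of violation profiles. A minimal Python sketch of that comparison (the function name and the list-of-counts representation are choices made here for illustration, not part of the formal definition):

def more_harmonic(viol_a, viol_b):
    # viol_a, viol_b: violation counts listed in ranking order,
    # highest-ranked constraint first.
    for a, b in zip(viol_a, viol_b):
        if a != b:
            return a < b    # decided by the highest-ranked constraint on which they differ
    return False            # identical profiles: neither is strictly more harmonic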
To the extent that underparsing and overparsing are avoided, the description is said to be faithful to the input. Crucial to Optimality Theory are faithful- ness constraints, which are violated by underparsing and overparsing. The faithfulness constraints ensure that a grammar will only tolerate deviations of the surface form from the input form which are neces- sary to satisfy structural constraints dominating the faithfulness constraints. Computing an optimal description means consid- ering a space of candidate descriptions that include structures with a variety of faithfulness violations, and evaluating those candidates with respect to a ranking in which structural and faithfulness con- straints may be interleaved. This is parsing in the generic sense: a structural description is being as- 101 signed to an input. It is, however, distinct from what is traditionally thought of as parsing in com- putationM linguistics. Traditional parsing attempts to construct a grammatical description with a sur- face form matching the given input string exactly; if a description cannot be fit exactly, the input string is rejected as ungrammatical. Traditional parsing can be thought of as enforcing faithfulness absolutely, with no faithfulness violations are allowed. Partly for this reason, traditional parsing is usually under- stood as mapping a surface form to a description. In the computation of optimal descriptions considered here, a candidate that is fully faithful to the input may be tossed aside by the grammar in favor of a less faithful description better satisfying other (dom- inant) constraints. Computing an optimal descrip- tion in Optimality Theory is more naturally thought of as mapping an underlying form to a description, perhaps as part of the process of language produc- tion. Tesar (Tesar, 1994) (Tesar, 1995a) has devel- oped algorithms for computing optimal descriptions, based upon dynamic programming. The details laid out in (Tesar, 1995a) focused on the case where the set of structures underlying the Gen function are formally regular. In this paper, Tesar's basic ap- proach is adopted, and extended to grammars with a Gen function employing fully context-free struc- tures. Using such context-free structures introduces some complications not apparent with the regular case. This paper demonstrates that the complica- tions can be dealt with, and that the dynamic pro- gramming case may be fully extended to grammars with context-free structures. 2 Context-Free Position Structure Grammars Tesar (Tesar, 1995a) formalizes Gen as a set of matchings between an ordered string of input seg- ments and the terminals of each of a set of position structures. The set of possible position structures is defined by a formal grammar, the position struc- ture grammar. A position structure has as terminals structural positions. In a valid structural descrip- tion, each structural position may be filled with at most one input segment, and each input segment may be parsed into at most one position. The linear order of the input must be preserved in all candidate structural descriptions. This paper considers Optimality Theory gram- mars where the position structure grammar is context-free; that is, the space of position structures can be described by a formal context-free grammar. As an illustration, consider the grammar in Exam- ples 1 and 2 (this illustration is not intended to rep- resent any plausible natural language theory, but does use the "peak/margin" terminology sometimes employed in syllable theories). 
The set of inputs is {C,V} +. The candidate descriptions of an input consist of a sequence of pieces, each of which has a peak (p) surrounded by one or more pairs of margin positions (m). These structures exhibit prototypi- cal context-free behavior, in that margin positions to the left of a peak are balanced with margin po- sitions to the right. 'e' is the empty string, and 'S' the start symbol. Example 1 The Position Structure Grammar S :=~ Fie F =~ YIYF Y ~ P I MFM M ::~ m P =:~ p Example 2 The Constraints -(m/V) Do not parse V into a margin position -(p/C) Do not parse C into a peak position PARSE Input segments must be parsed FILL m A margin position must be filled FILL p A peak position must be filled The first two constraints are structurM, and man- date that V not be parsed into a margin position, and that C not be parsed into a peak position. The other three constraints are faithfulness constraints. The two structural constraints are satisfied by de- scriptions with each V in a peak position surrounded by matched C's in margin positions: CCVCC, V, CVCCCVCC, etc. If the input string permits such an analysis, it will be given this completely faithful description, with no resulting constraint violations (ensuring that it will be optimal with respect to any ranking). Consider the constraint hierarchy in Example 3. Example 3 A Constraint Hierarchy {-(m/V),-(p/C), PARSE} ~> {FILL p} > {FILL m} This ranking ensures that in optimal descriptions, a V will only be parsed as a peak, while a C will only be parsed as a margin. Further, all input segments will be parsed, and unfilled positions will be included only as necessary to produce a sequence of balanced structures. For example, the input /VC/ receives the description 1 shown in Example 4. Example 4 The Optimal Description for/VC/ S(F(Y(M(C),P(V),M(C)))) The surface string for this description is CVC: the first C was "epenthesized" to balance with the one following the peak V. This candidate is optimal be- cause it only violates FILL m, the lowest-ranked con- straint. Tesar identifies locality as a sufficient condition on the universal constraints for the success of his l In this paper, tree structures will be denoted with parentheses: a parent node X with child nodes Y and Z is denoted X(Y,Z). 102 approach. For formally regular position structure grammars, he defines a local constraint as one which can be evaluated strictly on the basis of two consec- utive positions (and any input segments filling those positions) in the linear position structure. That idea can be extended to the context-free case as follows. A local constraint is one which can be evaluated strictly on the basis of the information contained within a local region. A local region of a description is either of the following: • a non4erminal and the child non-terminals that it immediately dominates; • a non-terminal which dominates a terminal symbol (position), along with the terminal and the input segment (if present) filling the termi- nal position. It is important to keep clear the role of the posi- tion structure grammar. It does not define the set of grammatical structures, it defines the Space of can- didate structures. Thus, the computation of descrip- tions addressed in this paper should be distinguished from robust, or error-correcting, parsing (Anderson and Backhouse, 1981, for example). 
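To make Examples 1-4 concrete, here is a small illustrative Python sketch that scores two candidate descriptions of the input /VC/ under the ranking of Example 3. The flat encoding of candidates is an assumption made for brevity here, not the tree representation used in the paper:

# A candidate is (positions, unparsed): positions is a list of (pos, seg) with
# pos in {'m', 'p'} and seg in {'C', 'V', None} (None = unfilled position);
# unparsed lists the input segments left unparsed.
CONSTRAINTS = [                                                      # ranking of Example 3, highest first
    lambda ps, up: sum(1 for p, s in ps if p == 'm' and s == 'V'),   # do not parse V into a margin
    lambda ps, up: sum(1 for p, s in ps if p == 'p' and s == 'C'),   # do not parse C into a peak
    lambda ps, up: len(up),                                          # PARSE
    lambda ps, up: sum(1 for p, s in ps if p == 'p' and s is None),  # FILL p
    lambda ps, up: sum(1 for p, s in ps if p == 'm' and s is None),  # FILL m
]

def violations(cand):
    ps, up = cand
    return [c(ps, up) for c in CONSTRAINTS]

epenthesize = ([('m', None), ('p', 'V'), ('m', 'C')], [])  # Example 4: surface CVC
delete_c    = ([('p', 'V')], ['C'])                        # parse only the V, delete C

print(violations(epenthesize))  # [0, 0, 0, 0, 1]  (only FILL m is violated)
print(violations(delete_c))     # [0, 0, 1, 0, 0]  (violates the dominant PARSE)

Comparing the two profiles with the strict-domination comparison sketched earlier, the first candidate wins, as in Example 4: its only violation is of the lowest-ranked constraint, FILL m.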
There, the in- put string is mapped to the grammatical structure that is 'closest'; if the input completely matches a structure generated by the grammar, that structure is automatically selected. In the OT case presented here, the full grammar is the entire OT system, of which the position structure grammar is only a part. Error-correcting parsing uses optimization only with respect to the faithfulness of pre-defined grammati- cal structures to the input. OT uses optimization to define grammaticality. 3 The Dynamic Programming Table The Dynamic Programming (DP) Table is here a three-dimensional, pyramid-shaped data structure. It resembles the tables used for context-free chart parsing (Kay, 1980) and maximum likelihood com- putation for stochastic context-free grammars (Lari and Young, 1990) (Charniak, 1993). Each cell of the table contains a partial description (a part of a structural description), and the Harmony of that partial description. A partial description is much like an edge in chart parsing, covering a contigu- ous substring of the input. A cell is identified by three indices, and denoted with square brackets (e.g., [X,a,c]). The first index identifying the cell (X) indicates the cell category of the cell. The other two indices (a and c) indicate the contiguous substring of the input string covered by the partial description contained in the cell (input segments ia through ic). In chart parsing, the set of cell categories is pre- cisely the set of non-terminals in the grammar, and thus a cell contains a subtree with a root non- terminal corresponding to the cell category, and with leaves that constitute precisely the input substring covered by the cell. In the algorithm presented here, the set of cell categories are the non-terminals of the position structure grammar, along with a category for each left-aligned substring of the right hand side of each position grammar rule. Example 5 gives the set of cell categories for the position structure gram- mar in Example 1. Example 5 The Set of Cell Categories S, F, Y, M, P, MF The last category in Example 5, MF, comes from the rule Y =:~ MFM of Example 1, which has more than two non-terminals on the right hand side. Each such category corresponds to an incomplete edge in normal chart parsing; having a table cell for each such category eliminates the need for a separate data structure containing edges. The cell [MF,a,c] may contain an ordered pair of subtrees, the first with root M covering input [a,b], and the second with root F covering input [b+l,c]. The DP Table is perhaps best envisioned as a set of layers, one for each category. A layer is a set of all cells in the table indexed by a particular cell category. Example 6 A Layer of the Dynamic Programming Table for Category M (input i1"i3) [U,l,3] [M,1,2] [M,2,3] [M,I,1] [M,2,2] [M,3,3] I il i2 i3 For each substring length, there is a collection of rows, one for each category, which will collectively be referred to as a level. The first level contains the cells which only cover one input segment; the num- ber of cells in this level will he the number of input segments multiplied by the number of cell categories. Level two contains cells which cover input substrings of length two, and so on. The top level contains one cell for each category. One other useful partition of the DP table is into blocks. A block is a set of all cells covering a particular input subsequence. A block has one cell for each cell category. 
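The cell categories of Example 5 can be computed mechanically from the position structure grammar: the non-terminals themselves, plus the left-aligned proper prefixes of length at least two of any right-hand side longer than two non-terminals, such as MF from Y => M F M (this reading matches Example 5; rules with only two non-terminals on the right contribute no extra category). A small Python sketch, with the grammar of Example 1 written out by hand; the epsilon alternative for S is omitted since it contributes no category:

GRAMMAR = {
    'S': [['F']],
    'F': [['Y'], ['Y', 'F']],
    'Y': [['P'], ['M', 'F', 'M']],
    'M': [['m']],
    'P': [['p']],
}

def cell_categories(grammar):
    cats = set(grammar)                          # one layer per non-terminal
    for rhss in grammar.values():
        for rhs in rhss:
            if all(x in grammar for x in rhs):   # rule over non-terminals
                for k in range(2, len(rhs)):     # left-aligned proper prefixes
                    cats.add(''.join(rhs[:k]))
    return cats

print(sorted(cell_categories(GRAMMAR)))          # ['F', 'M', 'MF', 'P', 'S', 'Y']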
A cell of the DP Table is filled by comparing the results of several operations, each of which try to fill a cell. The operation producing the partial descrip- tion with the highest Harmony actually fills the cell. The operations themselves are discussed in Section 4. The algorithm presented in Section 6 fills the ta- ble cells level by level: first, all the cells covering only one input segment are filled, then the cells cov- ering two consecutive segments are filled, and so forth. When the table has been completely filled, cell [S,1,J] will contain the optimal description of the input, and its Harmony. The table may also be filled in a more left-to-right manner, bottom-up, in the spirit of CKY. First, the cells covering only segment il, and then i2, are filled. Then, the cells 103 covering the first two segments are filled, using the entries in the cells covering each of il and is. The cells of the next diagonal are then filled. 4 The Operations Set The Operations Set contains the operations used to fill DP Table cells. The algorithm proceeds by con- sidering all of the operations that could be used to fill a cell, and selecting the one generating the partial description with the highest Harmony to actually fill the cell. There are three main types of opera- tions, corresponding to underparsing, parsing, and overparsing actions. These actions are analogous to the three primitive actions of sequence comparison (Sankoff and Kruskal, 1983): deletion, correspon- dence, and insertion. The discussion that follows makes the assumption that the right hand side of every production is either a string of non-terminals or a single terminal. Each parsing operation generates a new element of struc- ture, and so is associated with a position structure grammar production. The first type of parsing op- eration involves productions which generate a single terminal (e.g., P:=~p). Because we are assuming that an input segment may only be parsed into at most one position, and that a position may have at most one input segment parsed into it, this parsing oper- ation may only fill a cell which covers exactly one input segment, in our example, cell [P,I,1] could be filled by an operation parsing il into a p position, giving the partial description P(p filled with il). The other kinds of parsing operations are matched to position grammar productions in which a parent non-terminal generates child non-terminals. One of these kinds of operations fills the cell for a cate- gory by combining cell entries for two factor cat- egories, in order, so that the substrings covered by each of them combine (concatenatively, with no over- lap) to form the input substring covered by the cell being filled. For rule Y =~ MFM, there will be an operation of this type combining entries in [M,a,b] and [F,b+l,c], creating the concatenated structure s [M,a,b]+[F,b+l,c], to fill [MF,a,c]. The final type of parsing operation fills a cell for a cate- gory which is a single non-terminal on the left hand side of a production, by combining two entries which jointly form the entire right hand side of the pro- duction. This operation would combining entries in [MF,a,c] and [M,c÷l,d], creating the structure Y([MF,a,c],[M,c+l,d]), to fill [Y,a,d]. Each of these operations involves filling a cell for a target cate- gory by using the entries in the cells for two factor categories. 
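The parsing operations that build a target category from two factor categories must try every split point of the covered substring. A sketch of that step in Python, where a cell entry is taken here to be a pair (violation vector, partial description), the table is keyed by (category, start, end), and combine is an assumed helper that attaches the new structure and adds its local marks (none of these representations are fixed by the paper):

def binary_parse_op(table, target, left_cat, right_cat, a, c):
    # Try to fill cell (target, a, c) from (left_cat, a, b) and (right_cat, b+1, c)
    # for every split point b, keeping the most harmonic result.
    best = table.get((target, a, c))
    for b in range(a, c):
        left = table.get((left_cat, a, b))
        right = table.get((right_cat, b + 1, c))
        if left is None or right is None:
            continue
        cand = combine(target, left, right)      # assumed helper: new structure plus its local marks
        if best is None or more_harmonic(cand[0], best[0]):
            best = cand
    return best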
The resulting Harmony of the partial description created by a parsing operation will be the combina- 2This partial description is not a single tree, but an ordered pair of trees. In general, such concatenated structures will be ordered lists of trees. tion of the marks assessed each of the partial descrip- tions for the factor categories, plus any additional marks incurred as a result of the structure added by the production itself. This is true because the con- straints must be local: any new constraint violations are determinable on the basis of the cell category of the factor partial descriptions, and not any other internal details of those partial descriptions. All possible ways in which the factor categories, taken in order, may combine to cover the substring, must be considered. Because the factor categories must be contiguous and in order, this amounts to considering each of the ways in which the substring can be split into two pieces. This is reflected in the parsing operation descriptions given in Section 6.2. Underparsing operations are not matched with po- sition grammar productions. A DP Table cell which covers only one input segment may be filled by an underparsing operation which marks the input seg- ment as underparsed. In general, any partial de- scription covering any substring of the input may be extended to cover an adjacent input segment by adding that additional segment marked as under- parsed. Thus, a cell covering a given substring of length greater than one may be filled in two mirror- image ways via underparsing: by taking a partial description which covers all but the leftmost input segment and adding that segment as underparsed, and by taking a partial description which covers all but the rightmost input segment and adding that segment as underparsed. Overparsing operations are discussed in Section 5. 5 The Overparsing Operations Overparsing operations consume no input; they only add new unfilled structure. Thus, a block of cells (the set of cells each covering the same input sub- string) is interdependent with respect to overparsing operations, meaning that an overparsing operation trying to fill one cell in the block is adding structure to a partial description from a different cell in the same block. The first consequence of this is that the overparsing operations must be considered after the underparsing and parsing operations for that block. Otherwise, the cells would be empty, and the over- parsing operations would have nothing to add on to. The second consequence is that overparsing oper- ations may need to be considered more than once, because the result of one overparsing operation (if it fills a cell) could be the source for another overpars- ing operation. Thus, more than one pass through the overparsing operations for a block may be necessary. In the description of the algorithm given in Section 6.3, each Repeat-Until loop considers the overpars- ing operations for a block of cells. The number of loop iterations is the number of passes through the overparsing operations for the block. The loop iter- ations stop when none of the overparsing operations 104 is able to fill a cell (each proposed partial description is less harmonic than the partial description already in the cell). In principle, an unbounded number of overpars- ing operations could apply, and in fact descriptions with arbitrary numbers of unfilled positions are con- tained in the output space of Gen (as formally de- fined). 
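The per-block repeat-until passes just described can be written down directly: a pass stops changing as soon as no overparsing operation improves any cell of the block. A Python sketch, with the same assumed cell representation as above; overparse_ops is taken to map a cell category to its applicable operations:

def overparsing_passes(table, block, overparse_ops):
    # block = the cells (one per category) covering one input substring.
    changed = True
    while changed:
        changed = False
        for cell in block:
            for op in overparse_ops[cell[0]]:    # operations indexed by cell category
                cand = op(table, cell)           # None if a needed source entry is missing
                cur = table.get(cell)
                if cand is not None and (cur is None or more_harmonic(cand[0], cur[0])):
                    table[cell] = cand
                    changed = True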
The algorithm does not have to explicitly consider arbitrary amounts of overparsing, however. A necessary property of the faithfulness constraints, given constraint locality, is that a partial description cannot have overparsed structures repeatedly added to it until the resulting partial description falls into the same cell category as the original prior to over- parsing, and be more Harmonic. Such a sequence of overparsing operations can be considered a overpars- ing cycle. Thus, the faithfulness constraints must ban overparsing cycles. This is not solely a computa- tional requirement, but is necessary for the grammar to be well-defined: overparsing cycles must be har- monically suboptimal, otherwise arbitrary amounts of overparsing will be permitted in optimal descrip- tions. In particular, the constraints should prevent overparsing from adding an entire overparsed non- terminal more than once to the same partial descrip- tion while passing through the overparsing opera- tions. In Example 2, the constraints FILL m and FILL p effectively ban overparsing cycles: no mat- ter where these constraints are ranked, a description containing an overparsing cycle will be less harmonic (due to additional FILL violations) than the same description with the cycle removed. Given that the universal constraints meet this cri- terion, the overparsing operations may be repeatedly considered for a given level until none of them in- crease the Harmony of the entries in any of the cells. Because each overparsing operation maps a partial description in one cell category to one for another cell category, a partial description cannot undergo more consecutive overparsing operations than there are cell categories without repeating at least one cell category, thereby creating a cycle. Thus, the num- ber of cell categories places a constant bound on the number of passes made through the overparsing op- erations for a block. A single non-terminal may dominate an entire subtree in which none of the syllable positions at the leaves of the tree are filled. Thus, the optimal "unfilled structure" for each non-terminal, and in fact each cell category, must be determined, for use by the overparsing operations. The optimal over- parsing structure for category X is denoted with IX,0], and such an entity is referred to as a base overparsing structure. A set of such structures must be computed, one for each category, before filling input-dependent DP table cells. Because these val- ues are not dependent upon the input, base overpars- ing structures may be computed and stored in ad- vance. Computing them is just like computing other cell entries, except that only overparsing operations are considered. First, consider (once) the overpars- ing operations for each non-terminal X which has a production rule permitting it to dominate a terminal x: each tries to set IX,0] to contain the corresponding partial description with the terminal x left unfilled. Next consider the other overparsing operations for each cell, choosing the most Harmonic of those op- erations' partial descriptions and the prior value of IX,0]. 
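Because only overparsing operations are involved, the base structures [X,0] can be computed once, before any input is seen. A simplified Python sketch over the grammar dictionary from the earlier sketch; harmony bookkeeping is omitted here, whereas the real computation keeps a candidate only if it is more harmonic than the current entry:

def base_overparsing_structures(grammar):
    # base[X] approximates the cell [X, 0]: a completely unfilled structure of
    # category X, built bottom-up from unfilled terminal positions.
    base = {}
    changed = True
    while changed:
        changed = False
        for lhs, rhss in grammar.items():
            for rhs in rhss:
                if len(rhs) == 1 and rhs[0] not in grammar:      # X -> terminal position
                    cand = (lhs, 'unfilled')
                elif all(x in base for x in rhs):                # X -> X1 ... Xn, children known
                    cand = (lhs, tuple(base[x] for x in rhs))
                else:
                    continue
                if lhs not in base:          # full version: replace if more harmonic
                    base[lhs] = cand
                    changed = True
    return base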
6 The Dynamic Programming Algorithm 6.1 Notation maxH{} returns the argument with maximum Har- mony (i~) denotes input segment i~ underparsed X t is a non-terminal x t is a terminal + denotes concatenation 6.2 The Operations Underparsing Operations for [X t,a,a]: create (i~/+[X*,0] Underparsing Operations for IX t,a,c]: create (ia)+[X~,a+l,c] create [Xt,a,e-1]+(ia) Parsing operations for [X t,a,a]: for each production X t ::~ x k create Xt(x k filled with ia) Parsing operations for [X*,a,c], where c>a and all X are cell categories: for each production X t =~ XkX m for b = a+l to c-1 create X* ([Xk,a,b],[X'~,b+ 1,c]) for each production X u :=~ X/:xmxn... where X t = XkX'~: for b=a+l to c-1 create [Xk,a,b]+[X'~,b+l,c] Overparsing operations for [X t,0]: for each production X t =~ x k create Xt(x k unfilled) for each production X t =~ XkX m create xt ([Xk,0],[Xm,0]) for each production X ~ ~ XkXmXn... where X t -- xkxm: create [Xk,0]+[Xm,0] Overparsing operations for [X t,a,a]: same as for [X*,a,c] Overparsing operations for [X t,a,c]: for each production X t ~ X k create X t ([X k ,a,c]) 105 for each production X t ::V xkx "~ create Xt ([Xk,0],[X'~,a,c]) create X~ ([Xk,a,c],[X'~,0]) for each production X u :=~ XkXmX~... where X t = XkX'~: create [Xk,a,c]+[Xm,0] create [Xk,0]+[Xm,a,c] 6.3 The Main Algorithm /* create the base overparsing structures */ Repeat For each X t, Set [Xt,0] to maxH{[Xt,0], overparsing ops for [Xt,0]} Until no IX t,0] has changed during a pass /* fill the cells covering only a single segment */ For a = 1 to J For each X t, Set [Xt,a,a] to maxH{underparsing ops for [Xt,a,a]} For each X t, Set [Xt,a,a] to maxH{[Xt,a,a], parsing ops for [Xt,a,a]} Repeat For each X t, Set [Xt,a,a] to maxH{[Xt,a,a], overparsing ops for [Xt,a,a]} Until no [X t,a,a] has changed during a pass /* fill the rest of the cells */ For d=l to (J-l) For a=l to (J-d) For each X t, Set [Xt,a,a+d] to maxH{underparsing ops for [Xt,a,a+d]} For each X ~, Set [Xt,a,a+d] maxH{[Xt,a,a+d], parsing ops for [Xt,a,a+d]} Repeat For each X t, Set [Xt,a,a+d] to maxH{[Xt,a,a+d], overparsing ops for [Xt,a,a+d]} Until no [Xt,a,a+d] has changed during a pass Return [S,1,J] as the optimal description 6.4 Complexity Each block of cells for an input subsequence is pro- cessed in time linear in the length of the subse- quence. This is a consequence of the fact that in general parsing operations filling such a cell must consider all ways of dividing the input subsequence into two pieces. The number of overparsing passes through the block is bounded from above by the number of cell categories, due to the fact that over- parsing cycles are suboptimal. Thus, the number of passes is bounded by a constant, for any fixed position structure grammar. The number of such blocks is the number of distinct, contiguous input subsequences (equivalently, the number of cells in a layer), which is on the order of the square of the length of the input. If N is the length of the input, the algorithm has computational complexity O(N3). 7 Discussion 7.1 Locality That locality helps processing should he no great surprise to computationalists; the computational significance of locality is widely appreciated. Fur- ther, locality is often considered a desirable property of principles in linguistics, independent of computa- tional concerns. Nevertheless, locality is a sufficient but not necessary restriction for the applicability of this algorithm. The locality restriction is really a special case of a more general sufficient condition. 
The general condition is a kind of "Markov" prop- erty. This property requires that, for any substring of the input for which partial descriptions are con- structed, the set of possible partial descriptions for that substring may be partitioned into a finite set of classes, such that the consequences in terms of constraint violations for the addition of structure to a partial description may he determined entirely by the identity of the class to which that partial de- scription belongs. The special case of strict locality is easy to understand with respect to context-free structures, because it states that the only informa- tion needed about a subtree to relate it to the rest of the tree is the identity of the root non-terminal, so that the (necessarily finite) set of non-terminals provides the relevant set of classes. 7.2 Underparsing and Derivational Redundancy The treatment of the underparsing operations given above creates the opportunity for the same par- tial description to be arrived at through several dif- ferent paths. For example, suppose the input is ia...ibicid...ie , and there is a constituent in [X,a,b] and a constituent [Y,d,e]. Further suppose the input segment ic is to be marked underparsed, so that the final description [S,a,e] contains [X,a,b] (i~) [Y,d,e]. That description could be arrived at either by com- bining [X,a,b] and (ic) to fill [X,a,c], and then com- bine [X,a,c] and [Y,d,e], or it could be arrived at by combining (i~) and [Y,d,e] to fill [Y,c,e], and then combine [X,a,b] and [Y,c,e]. The potential confu- sion stems from the fact that an underparsed seg- ment is part of the description, but is not a proper constituent of the tree. This problem can be avoided in several ways. An obvious one is to only permit underparsings to be added to partial descriptions on the right side. One exception would then have to be made to permit in- put segments prior to any parsed input segments to be underparsed (i.e., if the first input segment is un- derparsed, it has to be attached to the left side of some constituent because it is to the left of every- thing in the description). 106 8 Conclusions The results presented here demonstrate that the basic cubic time complexity results for processing context-free structures are preserved when Optimal- ity Theory grammars are used. If Gen can be speci- fied as matching input segments to structures gener- ated by a context-free position structure grammar, and the constraints are local with respect to those structures, then the algorithm presented here may be applied directly to compute optimal descriptions. 9 Acknowledgments I would like to thank Paul Smolensky for his valu- able contributions and support. I would also like to thank David I-Iaussler, Clayton Lewis, Mark Liber- man, Jim Martin, and Alan Prince for useful dis- cussions, and three anonymous reviewers for helpful comments. This work was supported in part by an NSF Graduate Fellowship to the author, and NSF grant IRI-9213894 to Paul Smolensky and Geraldine Legendre. Bruce Tesar. 1994. Parsing in Optimality Theory: A dynamic programming approach. Technical Re- port CU-CS-714-94, April 1994. Department of Computer Science, University of Colorado, Boul- der. Bruce Tesar. 1995a. Computing optimal forms in Optimality Theory: Basic syllabification. Tech- nical Report CU-CS-763-95, February 1995. De- partment of Computer Science, University of Col- orado, Boulder. Bruce Tesar. 1995b. Computational Optimality The- ory. Unpublished Ph.D. Dissertation. 
Department of Computer Science, University of Colorado, Boulder. June 1995.

A. J. Viterbi. 1967. Error bounds for convolution codes and an asymptotically optimal decoding algorithm. IEEE Trans. on Information Theory 13:260-269.

References

S. O. Anderson and R. C. Backhouse. 1981. Locally least-cost error recovery in Earley's algorithm. ACM Transactions on Programming Languages and Systems 3:318-347.

Eugene Charniak. 1993. Statistical language learning. Cambridge, MA: MIT Press.

Martin Kay. 1980. Algorithmic schemata and data structures in syntactic processing. CSL-80-12, October 1980.

K. Lari and S. J. Young. 1990. The estimation of stochastic context-free grammars using the inside-outside algorithm. Computer Speech and Language 4:35-36.

Harry R. Lewis and Christos H. Papadimitriou. 1981. Elements of the theory of computation. Englewood Cliffs, New Jersey: Prentice-Hall, Inc.

Alan Prince and Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Technical Report CU-CS-696-93, Department of Computer Science, University of Colorado at Boulder, and Technical Report TR-2, Rutgers Center for Cognitive Science, Rutgers University, New Brunswick, NJ. March. To appear in the Linguistic Inquiry Monograph Series, Cambridge, MA: MIT Press.

David Sankoff and Joseph Kruskal. 1983. Time warps, string edits, and macromolecules: The theory and practice of sequence comparison. Reading, MA: Addison-Wesley. | 1996 | 14 |
Directed Replacement Lauri Karttunen Rank Xerox Research Centre Grenoble 6, chemin de Maupertuis F-38240 MEYLAN~ FRANCE lauri, karttunen@xerox, fr Abstract This paper introduces to the finite-state calculus a family of directed replace op- erators. In contrast to the simple re- place expression, UPPER -> LOWER, defined in Karttunen (1995), the new directed ver- sion, UPPER ©-> LOWER, yields an unam- biguous transducer if the lower language consists of a single string. It transduces the input string from left to right, mak- ing only the longest possible replacement at each point. A new type of replacement expression, UPPER @-> PREFIX ... SUFFIX, yields a transducer that inserts text around strings that are instances of UPPER. The symbol ... denotes the matching part of the input which itself remains unchanged. PREFIX and SUFFIX are regular expressions describ- ing the insertions. Expressions of the type UPPER @-> PI~EFIX •.. SUFFIX may be used to compose a de- terministic parser for a "local grammar" in the sense of Gross (1989). Other useful ap- plications of directed replacement include tokenization and filtering of text streams. 1 Introduction Transducers compiled from simple replace expres- sions UPPER -> LOWER (Karttunen 1995, Kempe and Karttunen 1996) are generally nondeterministic in the sense that they may yield multiple results even if the lower language consists of a single string. For example, let us consider the transducer in Figure 1, representing a b I b I b a I a b a-> x. 1 1The regular expression formalism and other nota- tional cdnventions used in the paper are explained in the Appendix at the end. a:x b:O b:x a:x Figure 1: a b I b I b a I a b a-> x . The four paths with "aba" on the upper side are: <0 a 0 b:x 2 a 0>, <0 a 0 b:x 2 a:0 0>, <0 a:x 1 b:0 2 a 0>, and <0 a:x 1 b:0 2 a:0 0>. The application of this transducer to the input "aba" produces four alternate results, "axa", "ax", "xa", and "x", as shown in Figure 1, since there are four paths in the network that contain "aba" on the upper side with different strings on the lower side. This nondeterminism arises in two ways. First of all, a replacement can start at any point. Thus we get different results for the "aba" depending on whether we start at the beginning of the string or in the middle at the "b". Secondly, there may be alter- native replacements with the same starting point. In the beginning of "aba", we can replace either "ab" or "aba". Starting in the middle, we can replace ei- ther "b" or "ba". The underlining in Figure 2 shows aba aba aba aba a X a a X X a X Figure 2: Four factorizations of "aba". the four alternate factorizations of the input string, that is, the four alternate ways to partition the string "aba" with respect to the upper language of the re- placement expression. The corresponding paths in the transducer are listed in Figure 1. For many applications, it is useful to define an- 108 other version of replacement that produces a unique outcome whenever the lower language of the rela- tion consists of a single string. To limit the number of alternative results to one in such cases, we must impose a unique factorization on every input. The desired effect can be obtained by constrain- ing the directionality and the length of the replace- ment. Directionality means that the replacement sites in the input string are selected starting from the left or from the right, not allowing any overlaps. 
The length constraint forces us always to choose the longest or the shortest replacement whenever there are multiple candidate strings starting at a given lo- cation. We use the term directed replacement to describe a replacement relation that is constrained by directionality and length of match. (See the end of Section 2 for a discussion about the choice of the term.) With these two kinds of constraints we can define four types of directed replacement, listed in Figure 3. longest shortest mat ch mat ch left-to-right ~-> @> right-to-left ->~ >@ Figure 3: Directed replacement operators For reasons of space, we discuss here only the left- to-right, longest-match version. The other cases are similar. The effect of the directionality and length con- straints is that some possible replacements are ig- nored. For example, a b I b I b a [ a b a @-> x maps "aba" uniquely into "x", Figure 4. a:x b:O Figure 4: a b [ b [ b a [ a b a @-> x. The single path with "aba" on the upper side is: <0 a:x I b:O 2 a:O 0>. Because we must start from the left and have to choose the longest match, "aba" must be replaced, ignoring the possible replacements for "b", "ba", and "ab". The ©-> operator allows only the last factorization of "aba" in Figure 2. Left-to-right, longest-match replacement can be thought of as a pr.ocedure that rewrites an input string sequentially from left to right. It copies the in- put until it finds an instance of UPPER. At that point it selects the longest matching substring, which is rewritten as LOWER, and proceeds from the end of that substring without considering any other alter- natives. Figure 5 illustrates the idea. Scan Scan Scan . . . . ~ r . . . . I r - -- --~" i Copy ' Replace I Copy ' Replace' ~[ ~I Copy ~ .~ ' f Longest Longest Match Match Figure 5: Left-to-right, longest-match replacement It is not obvious at the outset that the operation can in fact be encoded as a finite-state transducer for arbitrary regular patterns. Although a unique substring is selected for replacement at each point, in general the transduction is not unambiguous because LOWER is not required to be a single string; it can be any regular language. The idea of treating phonological rewrite rules in this way was the starting point of Kaplan and Kay (1994). Their notion of obligatory rewrite rule in- corporates a directionality constraint. They observe (p. 358), however, that this constraint does not by itself guarantee a single output. Kaplan and Kay suggest that additional restrictions, such as longest- match, could be imposed to further constrain rule application. 2 We consider this issue in more detail. The crucial observation is that the two con- straints, left-to-right and longest-match, force a unique factorization on the input string thus making the transduction unambiguous if the L01gER language consists of a single string. In effect, the input string is unambiguously parsed with respect to the UPPER language. This property turns out to be important for a number of applications. Thus it is useful to pro- vide a replacement operator that implements these constraints directly. The definition of the UPPER @-> LOWER relation is presented in the next section. Section 3 introduces a novel type of replace expression for constructing transducers that unambiguously recognize and mark 2The tentative formulation of the longest-match con- straint in (Kaplan and Kay, 1994, p. 358) is too weak. It does not cover all the cases. 109 instances of a regular language without actually re- placing them. 
Section 4 identifies some useful appli- cations of the new replacement expressions. 2 Directed Replacement We define directed replacement by means of a com- position of regular relations. As in Kaplan and Kay (1994), Karttunen (1995), and other previous works on related topics, the intermediate levels of the com- position introduce auxiliary symbols to express and enforce constraints on the replacement relation. Fig- ure 6 shows the component relations and how they are composed with the input. Input string .o. Initial match .0. Left-to-right constraint .0o Longest-match constraint .0o Replacement by a caret that are instances of the upper language. The initial caret is replaced by a <, and a closing > is inserted to mark the end of the match. We permit carets to appear freely while matching. No carets are permitted outside the matched substrings and the ignored internal carets are eliminated. In this case, there are four possible outcomes, shown in Figure 8, but only two of them are allowed under the constraint that there can be no carets outside the brackets. ALLOWED " a" b a " a'b a <a b> a < a b a> NOT ALLOWED a "b a " a'b a a <b>a -a<b a> Figure 8: Left-to-right constraint. No caret outside a bracketed region. Figure 6: Composition of directed replacement If the four relations on the bottom of Figure 6 are composed in advance, as our compiler does, the ap- plication of the replacement to an input string takes place in one step without any intervening levels and with no auxiliary symbols. But it helps to under- stand the logic to see where the auxiliary marks would be in the hypothetical intermediate results. Let us consider the caseofa b [ b I b a [ a b a ~-> x applying to the string "aba" and see in de- tail how the mapping implemented by the transducer in Figure 4 is composed from the four component re- lations. We use three auxiliary symbols, caret ('), left bracket (<) and right bracket (>), assuming here that they do not occur in any input. The first step, shown in Figure 7, composes the input string with a transducer that inserts a caret, in the beginning of every substring that belongs to the upper language. a b a a " b a Figure 7: Initial match. Each caret marks the be- ginning of a substring that matches "ab", "b", "ba", or ~aba". Note that only one " is inserted even if there are several candidate strings starting at the same loca- tion. In the left-to-right step, we enclose in angle brack- ets all the substrings starting at a location marked In effect, no starting location for a replacement can be skipped over except in the context of an- other replacement starting further left in the input string. (Roche and Schabes (1995) introduce a sim- ilar technique for imposing the left-to-right order on the transduction.) Note that the four alternatives in Figure 8 represent the four factorizations in Figure 2. The longest-match constraint is the identity rela- tion on a certain set of strings. It forbids any re- placement that starts at the same location as an- other, longer replacement. In the case at hand, it means that the internal > is disallowed in the context < a b > a. Because "aba" is in the upper language, there is a longer, and therefore preferred, < a b a > alternative at the same starting location, Figure 9. ALLOWED NOT ALLOWED <a b a> <a b > a Figure 9: Longest match constraint. No upper lan- guage string with an initial < and a nonfinal > in the middle. 
In the final replacement step, the bracketed re- gions of the input string, in the case at hand, just < a b a > , are replaced by the strings of the lower language, yielding "x" as the result for our example. Note that longest match constraint ignores any internal brackets. For example, the bracketing < a 110 > < a > is not allowed if the upper language con- tains "aa" as well as "a". Similarly, the left-to-right constraint ignores any internal carets. As the first step towards a formal definition of UPPER ©-> LOWER it is useful to make the notion of "ignoring internal brackets" more precise. Figure 10 contains the auxiliary definitions. For the details of the formalism (briefly explained in the Appendix), please consult Karttunen (1995), Kempe and Kart- tunen (1996). 3 UPPER' = UPPER/[Y, ^] - [?* 7''] UPPER'' = UPPER/[7,<IT'>] - [?* [7,<[7,>]'] Figure 10: Versions of UPPER that freely allow non- final diacritics. The precise definition of the UPPER ~-> LOWER re- lation is given in Figure 11. It is a composition of many auxiliary relations. We label the major com- ponents in accordance with the outline in Figure 6. The formulation of the longest-match constraint is based on a suggestion by Ronald M. Kaplan (p.c.). Initial match "$[ Y," 1 7'< 17'> "I .0. [..] -> 7" II _ UPPER °0° Left to right ['$[7,"] [7,':7,< UPPER' 0:7,>]'1, "$[7,':] ,O, 7,- -> [] .Oo Longest match "$[7,< [UPPER'' ~ $[7,>']']] ,O. Replacement Z< "$[Z>] Y,> -> LOWER ; Figure 11: Definition of UPPER @-> LOWER The logic of ~-> replacement could be encoded in many other ways, for example, by using the three pairs of auxiliary brackets, <i, >i, <c, >c, and <a, >a, introduced in Kaplan and Kay (1994). We take here a more minimalist approach. One reason is that we prefer to think of the simple unconditional (uncontexted) replacement as the basic case, as in Karttunen (1995). Without the additional complex- ities introduced by contexts, the directionality and 3UPPER' is the same language as UPPER except that carets may appear freely in all nonfinal positions. Simi- larly, UPPER'' accepts any nonfinal brackets. 111 length-of-match constraints can be encoded with fewer diacritics. (We believe that the conditional case can also be handled in a simpler way than in Kaplan and Kay (1994).) The number of auxiliary markers is an important consideration for some of the applications discussed below. In a phonological or morphological rewrite rule, the center part of the rule is typically very small: a modification, deletion or insertion of a single seg- ment. On the other hand, in our text processing ap- plications, the upper language may involve a large network representing, for example, a lexicon of mul- tiword tokens. Practical experience shows that the presence of many auxiliary diacritics makes it diffi- cult or impossible to compute the left-to-right and longest-match constraints in such cases. The size of intermediate states of the computation becomes a critical issue, while it is irrelevant for simple phono- logical rules. We will return to this issue in the dis- cussion of tokenizing transducers in Section 4. The transducers derived from the definition in Figure 11 have the property that they unambigu- ously parse the input string into a sequence of sub- strings that are either copied to the output un- changed or replaced by some other strings. How- ever they do not fall neatly into any standard class of transducers discussed in the literature (Eilenberg 1974, Schiitzenberger 1977, Berstel 1979). 
If the LOWER language consists of a single string, then the relation encoded by the transducer is in Berstel's terms a rational function, and the network is an unambigous transducer, even though it may con- tain states with outgoing transitions to two or more destinations for the same input symbol. An unam- biguous transducer may also be sequentiable, in • which case it can be turned into an equivalent se- quential transducer (Mohri, 1994), which can in turn be minimized. A transducer is sequential just in case there are no states with more than one transi- tion for the same input symbol. Roche and Sehabes (1995) call such transducers deterministic. Our replacement transducers in general are not unambiguous because we allow LOWER to be any reg- ular language. It may well turn out that, in all cases that are of practical interest, the lower language is in fact a singleton, or at least some finite set, but it is not so by definition. Even if the replacement trans- ducer is unambiguous, it may well be unsequentiable if UPPER is an infinite language. For example, the simple transducer for a+ b ~-> x in Figure 12 can- not be sequentialized. It has to replace any string of "a"s by "x" or copy it to the output unchanged de- pending on whether the string eventually terminates at "b'. It is obviously impossible for any finite-state b:O Figure 13, a simple parallel replacement of the two auxiliary brackets that mark the selected regions. Because the placement of < and > is strictly con- trolled, they do not occur anywhere else. Insertion 7,< -> PREFIX, 7.> -> SUFFIX ; Figure 12: a+ b ~-> x. This transducer is unam- biguous but cannot be sequentialized. device to accumulate an unbounded amount of de- layed output. On the other hand, the transducer in Figure 4 is sequentiable because there the choice between a and a:x just depends on the next input symbol. Because none of the classical terms fits exactly, we have chosen a novel term, directed transduction, to describe a relation induced by the definition in Figure 11. It is meant to suggest that the mapping from the input into the output strings is guided by the directionality and length-of-match constraints. Depending on the characteristics of the UPPER and LOWER languages, the resulting transducers may be unambiguous and even sequential, but that is not guaranteed in the general case. 3 Insertion The effect of the left-to-right and longest-match con- straint is to factor any input string uniquely with respect to the upper language of the replace expres- sion, to parse it into a sequence of substrings that either belong or do not belong to the language. In- stead of replacing the instances of the upper lan- guage in the input by other strings, we can also take advantage of the unique factorization in other ways. For example, we may insert a string before and after each substring that is an instance of the language in question simply to mark it as such. To implement this idea, we introduce the special symbol ... on the right-hand side of the replacement expression to mark the place around which the in- sertions are to be made. Thus we allow replace- ment expressions of the form UPPER ~-> PREFIX •.. SUFFIX. The corresponding transducer locates the instances of UPPER in the input string under the left-to-right, longest-match regimen just described. But instead of replacing the matched strings, the transducer just copies them, inserting the specified prefix and suffix. For the sake of generality, we allow PREFIX and SUFFIX to denote any regular language. 
The definition of UPPER ~-> PREFIX ... SUFFIX is just as in Figure 11 except that the Replacement expression is replaced by the Insertion formula in Figure 13: Insertion expression in the definition of UPPER ~-> PREFIX ... SUFFIX. With the ... expressions we can construct trans- ducers that mark maximal instances of a regular language. For example, let us assume that noun phrases consist of an optional determiner, (d), any number of adjectives, a*, and one or more nouns, n+. The expression (d) a* a+ ~-> 7,[ ... %3 com- piles into a transducer that inserts brackets around maximal instances of the noun phrase pattern. For example, it maps "damlvaan" into "[dann] v [aan] ", as shown in Figure 14. d a n n v aan [dana] v [aan] Figure 14: Application of (d) a* n+ ©-> ~,[...Y,] to "d a.tlI'tv aa.L-rl" Although the input string "dannvaan" contains many other instances of the noun phrase pattern, "n", "an", "nn", etc., the left-to-right and longest- match constraints pick out just the two maximal ones. The transducer is displayed in Figure 15. Note that ? here matches symbols, such as v, that are not included in the alphabet of the network. Figure 15: (d) a* n+ e-> ~,[...~,]. The one path with "dannvaan" on the upper side is: <00: [ 7 d 3 a3n4n40:] 5v00:[7a3a3a40:] 5>. 112 4 Applications The directed replacement operators have many use- ful applications. We describe some of them. Al- though the same results could often be achieved by using lex and yacc, sed, awk, perl, and other Unix utilities, there is an advantage in using finite- state transducers for these tasks because they can then be smoothly integrated with other finite-state processes, such as morphological analysis by lexi- cal transducers (Karttunen et al 1992, Karttunen 1994) and rule-based part-of-speech disambiguation (Chanod and Tapanainen 1995, Roche and Schabes 1995). 4.1 Tokenization A tokenizer is a device that segments an input string into a sequence of tokens. The insertion of end-of- token marks can be accomplished by a finite-state transducer that is compiled from tokenization rules. The tokenization rules may be of several types. For example, [WHITE_SPACE+ ~-> SPACE] is a normal- izing transducer that reduces any sequence of tabs, spaces, and newlines to a single space. [LETTER+ ~-> ... END_0F_TOKEN] inserts a special mark, e.g. a newtine, at the end of a letter sequence. Although a space generally counts as a token boundary, it can also be part of a multiword to- ken, as in expressions like "at least", "head over heels", "in spite of", etc. Thus the rule that intro- duces the END_0F_TOKEN symbol needs to combine the LETTER+ pattern with a list of multiword tokens which may include spaces, periods and other delim- iters. Figure 16 outlines the construction of a simple tokenizing transducer for English. WHITEY,_SPACE+ ©-> SPACE .O. [ LETTER+ I a t ~, 1 e a s t I h e a d Y. o v e r Y. h e e 1 s I i n Y, s p i t e Z o f ] ©-> ... ENDY,_OF~,_TOKEN ,O. SPACE-> [] If .#. ] ENDY,_OFY,_TOKEN _ ; Figure 16: A simple tokenizer The tokenizer in Figure 16 is composed of three transducers. The first reduces strings of whitespace characters to a single space. The second transducer inserts an END_0F_TOKEN mark after simple words and the, listed multiword expressions. The third re- moves the spaces that are not part of some multi- word token. The percent sign here means that the following blank is to be taken literally, that is, parsed as a symbol. 
Without the left-to-right, longest-match con- straints, the tokenizing transducer would not pro- duce deterministic output. Note that it must intro- duce an END_0F_TOKEN mark after a sequence of let- ters just in case the word is not part of some longer multiword token. This problem is complicated by the fact that the list of multiword tokens may con- tain overlapping expressions. A tokenizer for French, for example, needs to recognize "de plus" (more- over), "en plus" (more), "en plus de" (in addition to), and "de plus en plus" (more and more) as sin- gle tokens. Thus there is a token boundary after "de plus" in de plus on ne le fai~ plus (moreover one doesn't do it anymore) but not in on le ]:air de plus en plus (one does it more and more) where "de plus en plus" is a single token. If the list of multiword tokens contains hundreds of expressions, it may require a lot of time and space to compile the tokenizer even if the final result is not too large. The number of auxiliary symbols used to encode the constraints has a critical effect on the ef- ficiency of that computation. We first observed this phenomenon in the course of building a tokenizer for the British National Corpus according to the specifi- cations of the BNC Users Guide (Leech, 1995), which lists around 300 multiword tokens and 260 foreign phrases. With the current definition of the directed replacement we have now been able to compute sim- ilar tokenizers for several other languages (French, Spanish, Italian, Portuguese, Dutch, German). 4.2 Filtering Some text processing applications involve a prelimi- nary stage in which the input stream is divided into regions that are passed on to the calling process and regions that are ignored. For example, in processing an SGML-coded document, we may wish to delete all the material that appears or does not appear in a region bounded by certain SGML tags, say <A> and </A>. Both types of filters can easily be constructed us- ing the directed replace operator. A negative filter that deletes all the material between the two SGML codes, including the codes themselves, is expressed as in Figure 17. "<A>" -$["<A>"I"</A>"] "</A>" ~-> [] ; Figure 17: A negative filter A positive filter that excludes everything else can be expressed as in Figure 18. 113 "$"</A> .... <A>" ©-> "<A>" .O. "</A>" "$"<A>" @-> "</A>" ; dann v a a n [NP d a n n ] [VP v [NP a a n ] ] Figure 18: A positive filter Figure 21: Application of an NP-VP parser The positive filter is composed of two transducers. The first reduces to <A> any string that ends with it and does not contain the </A> tag. The second transducer does a similar transduction on strings that begin with </A>. Figure 12 illustrates the effect of the positive filter. <B>one</B><A>two</A><C>three</C><A>f our</A> <A> two </A> <A>four</A> By means of this simple "bottom-up" technique, it is possible to compile finite-state transducers that approximate a context-free parser up to a chosen depth of embedding. Of course, the left-to-right, longest-match regimen implies that some possible analyses are ignored. To produce all possible parses, we may introduce the ... notation to the simple re- place expressions in Karttunen (1995). 5 Extensions Figure 19: Application of a positive filter The idea of filtering by finite-state transduction of course does not depend on SGML codes. It can be applied to texts where the interesting and unin- teresting regions are defined by any kind of regular pattern. 4.3 Marking As we observed in section 3, by using the ... 
symbol on the lower side of the replacement expression, we can construct transducers that mark instances of a regular language without changing the text in any other way. Such transducers have a wide range of applications. They can be used to locate all kinds of expressions that can be described by a regular pat- tern, such as proper names, dates, addresses, social security and phone numbers, and the like. Such a marking transducer can be viewed as a deterministic parser for a "local grammar" in the sense of Gross (1989), Roche (1993), Silberztein (1993) and others. By composing two or more marking transduc- ers, we can also construct a single transducer that builds nested syntactic structures, up to any desired depth. To make the construction simpler, we can start by defining auxiliary symbols for the basic reg- ular patterns. For example, we may define NP as [(d) a* n+J. With that abbreviatory convention, a composition of a simple NP and VP spotter can be defined as in Figure 20. NP @-> ~[NP ... ~,] .0. v Y.[NP NP Y,] @-> ~,[VP ... Y,] ; Figure 20: Composition of an NP and a VP spotter Figure 21 shows the effect of applying this com- posite transducer to the string "dannvaan". The definition of the left-to-right, longest-match re- placement can easily be modified for the three other directed replace operators mentioned in Figure 3. Another extension, already implemented, is a di- rected version of parallel replacement (Kempe and Karttunen 1996), which allows any number of re- placements to be done simultaneously without in- terfering with each other. Figure 22 is an example of a directed parallel replacement. It yields a trans- ducer that maps a string of "£'s into a single "b" and a string of "b"s into a single '%'. a+ @-> b, b+ ~-> a ; Figure 22: Directed, parallel replacement The definition of directed parallel replacement re- quires no additions to the techniques already pre- sented. In the near future we also plan to allow direc- tional and length-of-match constraints in the more complicated case of conditional context-constrained replacement. 6 Acknowledgements I would like to thank Ronald M. Kaplan, Martin Kay, Andr4 Kempe, John Maxwell, and Annie Za- enen for helpful discussions at the beginning of the project, as well as Paula Newman and Kenneth 1%. Beesley for editorial advice on the first draft of the paper. The work on tokenizers and phrasal analyz- ers by Anne Schiller and Gregory Grefenstette re- vealed the need for a more efficient implementation of the idea. The final version of the paper has bene- fited from detailed comments by l%onald M. Kaplan and two anonymous reviewers, who convinced me to discard the ill-chosen original title ("Deterministic Replacement") in favor of the present one. 114 7 Appendix: Notational conventions The regular expression formalism used in this paper is essentially the same as in Kaplan and Kay (1994), in Karttunen (1995), and in Kempe and Karttunen (1996). Upper-case strings, such as UPPER, represent regular languages, and lower-case letters, such as x, represent symbols. We recognize two types of sym- bols: unary symbols (a, b, c, etc) and symbol pairs (a:x, b:0, etc. ). A symbol pair a:x may be thought of as the crossproduct of a and x, the minimal relation con- sisting of a (the upper symbol) and x (the lower symbol). To make the notation less cumbersome, we systematically ignore the distinction between the language A and the identity relation that maps every string of A into itself. Consequently, we also write a:a as just a. 
Three special symbols are used in regular expres- sions: 0 (zero) represents the empty string (often de- noted by c); ? stands for any symbol in the known alphabet and its extensions; in replacement expres- sions, .#. marks the start (left context) or the end (right context) of a string. The percent sign, Y,, is used as an escape character. It allows letters that have a special meaning in the calculus to be used as ordinary symbols. Thus Z[ denotes the literal square bracket as opposed to [, which has a special meaning as a grouping symbol; %0 is the ordinary zero symbol. The following simple expressions appear freqently in the formulas: [] the empty string language, ?* the universal ("sigma star") language. The regular expression operators used in the pa- per are: * zero or more (Kleene star), + one or more (Kleene plus), - not (complement), $ contains, / ignore, I or (union), t~ and (intersection), - minus (relative complement), .x. crossproduct, .o. com- position, -> simple replace. In the transducer diagrams (Figures 1, 4, etc.), the nonfinal states are represented by single circles, final states by double circles. State 0 is the initial state. The symbol ? represents any symbols that are not explicitly present in the network. Transitions that differ only with respect to the label are collapsed into a single multiply labelled arc. References Jean Berstel. 1979. Transductions and Context-Free Languages. B.G. Teubner, Stuttgart, Germany. Jean-Pierre Chanod and Pasi Tapanainen. 1995. Tagging French--comparing a statistical and a constraint-based mode]. In The Proceedings of the Seventh Conference of the European Chapter of the Association for Computational Linguistics, Dublin, Ireland. Samuel Eilenberg. 1974. Automata, Languages, and Machines. Academic Press. Maurice Gross. 1989. The Use of Finite Automata in the Lexical Representation of Natural Lan- guage. In Lecture Notes in Computer Science, pages 34-50, Springer-Verlag, Berlin, Germany. Ronald M. Kaplan and Martin Kay. 1994. Regular Models of Phonological Rule Systems. Computa- tional Linguistics, 20:3, pages 331-378. Lauri Karttunen, Kimmo Koskenniemi, and Ronald M. Kaplan. 1987. A Compiler for Two-level Phonological Rules. In Report No. CSLL87-108. Center for the Study of Language and Informa- tion, Stanford University. Palo Alto, California. Lauri Karttunen. 1994. Constructing Lexical Trans- ducers. In The Proceedings of the Fifteenth Inter- national Conference on Computational Linguis- tics. Coling 94, I, pages 406-411, Kyoto, Japan. Lauri Karttunen. 1995. The Replace Operator. In The Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. ACL- 94, pages 16-23, Boston, Massachusetts. Andr~ Kempe and Lauri Karttunen. 1996. Parallel Replacement in the Finite-State Calculus. In The Proceedings of the Sixteenth International Con- ference on Computational Linguistics. Coling 96. Copenhagen, Denmark. Geoffrey Leech. 1995. User's Guide to the British National Corpus. Lancaster University. Mehryar Mohri. 1994. On Some Applications of Finite-State Automata Theory to Natural Lan- guage Processing. Technical Report 94-22. L'In- stitute Gaspard Monge. Universit~ de Marne-la- ValiSe. Noisy Le Grand. Emmanuel Roche. 1993. Analyse syntaxique trans- formationelle du franfais par transducteurs el lexique-grammaire. Doctoral dissertation, Univer- sit~ Paris 7. Emmanuel Roche and Yves Schabes. 1995. Deter- ministic Part-of-Speech Tagging. Computational Linguistics, 21:2, pages 227-53. Marcel Paul Schiitzenberger. 
1977. Sur une variante des fonctions séquentielles. Theoretical Computer Science, 4, pages 47-57.
Max Silberztein. 1993. Dictionnaires Electroniques et Analyse Lexicale du Français -- Le Système INTEX. Masson, Paris, France.
Synchronous Models of Language Owen Rambow CoGenTex, Inc. 840 Hanshaw Road, Suite 11 Ithaca, NY 14850-1589 owen@cogentex, com Giorgio Satta Dipartimento di Elettronica ed Informatica Universit~ di Padova via Gradenigo, 6/A 1-35131 Padova, Italy satta@dei, unipd, it Abstract In synchronous rewriting, the productions of two rewriting systems are paired and applied synchronously in the derivation of a pair of strings. We present a new syn- chronous rewriting system and argue that it can handle certain phenomena that are not covered by existing synchronous sys- tems. We also prove some interesting for- mal/computational properties of our sys- tem. 1 Introduction Much of theoretical linguistics can be formulated in a very natural manner as stating correspondences (translations) between layers of representation; for example, related interface layers LF and PF in GB and Minimalism (Chomsky, 1993), semantic and syntactic information in HPSG (Pollard and Sag, 1994), or the different structures such as c-structure and f-structure in LFG (Bresnan and Kaplan, 1982). Similarly, many problems in natural language pro- cessing, in particular parsing and generation, can be expressed as transductions, which are calculations of such correspondences. There is therefore a great need for formal models of corresponding levels of representation, and for corresponding algorithms for transduction. Several different transduction systems have been used in the past by the computational and theoret- ical linguistics communities. These systems have been borrowed from translation theory, a subfield of formal language theory, or have been originally (and sometimes redundantly) developed. Finite state transducers (for an overview, see, e.g., (Aho and Ullman, 1972)) provide translations between regular languages. These devices have been pop- ular in computational morphology and computa- tional phonology since the early eighties (Kosken- niemi, 1983; Kaplan and Kay, 1994), and more re- cently in parsing as well (see, e.g., (Gross, 1989; Pereira, 1991; Roche, 1993)). Pushdown transduc- ers and syntax directed translation schemata (SDTS) (Aho and Ullman, 1969) translate between context- free languages and are therefore more powerful than finite state transducers. Pushdown transducers are a standard model for parsing, and have also been used (usually implicitly) in speech understanding. Recently, variants of SDTS have been proposed as models for simultaneously bracketing parallel cor- pora (Wu, 1995). Synchronization of tree adjoin- ing grammars (TAGs) (Shieber and Schabes, 1990; Shieber, 1994) are even more powerful than the pre- vious formalisms, and have been applied in machine translation (Abeill6, Schabes, and Joshi, 1990; Egedi and Palmer, 1994; Harbusch and Poller, 1994; Pri- gent, 1994), natural language generation (Shieber and Schabes, 1991), and theoretical syntax (Abeilld, 1994). The common underlying idea in all of these formalisms is to combine two generative devices through a pairing of their productions (or, in the case of the corresponding automata, of their tran- sitions) in such a way that right-hand side nonter- minal symbols in the paired productions are linked. The processes of derivation proceed synchronously in the two devices by applying the paired grammar rules only to linked nonterminals introduced previ- ously in the derivation. The fact that the above sys- tems all reflect the same translation technique has not always been recognized in the computational lin- guistics literature. 
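As an informal illustration of this shared idea, the following Python sketch rewrites two sentential forms in lockstep from paired context-free productions with linked nonterminals; the grammar fragment anticipates the English-French SDTS example discussed in section 2.3 (Figure 5), and the rule names and link bookkeeping are our own simplifications rather than part of any of the cited systems.

```python
import itertools

fresh = itertools.count(1)

# Paired productions with linked right-hand-side nonterminals (SDTS style).
# Plain strings are terminals; ("NP", "x") is a nonterminal occurrence whose
# second field names a rule-internal link shared by both right-hand sides.
RULES = {
    "likes/plait":  ("S",  [("NP", "x"), "likes", ("NP", "y")],
                           [("NP", "y"), "plait", "a", ("NP", "x")]),
    "John/Jean":    ("NP", ["John"], ["Jean"]),
    "white-house":  ("NP", ["the", "white", ("N", "z")],
                           ["la", ("N", "z"), "blanche"]),
    "house/maison": ("N",  ["house"], ["maison"]),
}

def apply_rule(name, link, left, right):
    """Synchronously rewrite the two linked occurrences carrying `link`."""
    lhs, l_rhs, r_rhs = RULES[name]
    # Give the rule-internal links globally fresh identifiers.
    ids = {}
    for item in l_rhs + r_rhs:
        if isinstance(item, tuple) and item[1] not in ids:
            ids[item[1]] = next(fresh)
    def instantiate(rhs):
        return [(x[0], ids[x[1]]) if isinstance(x, tuple) else x for x in rhs]
    def expand(form, rhs):
        i = form.index((lhs, link))
        return form[:i] + instantiate(rhs) + form[i + 1:]
    return expand(left, l_rhs), expand(right, r_rhs)

# Derivation of "John likes the white house" / "la maison blanche plait a Jean":
left, right = [("S", 0)], [("S", 0)]
left, right = apply_rule("likes/plait", 0, left, right)
left, right = apply_rule("John/Jean", 1, left, right)   # rewrites the linked NP pair
left, right = apply_rule("white-house", 2, left, right)
left, right = apply_rule("house/maison", 3, left, right)
print(" ".join(left))    # John likes the white house
print(" ".join(right))   # la maison blanche plait a Jean
```

The essential point is visible in `apply_rule`: a paired rule may only rewrite two occurrences that carry the same link, which is exactly the synchronization discipline shared by the systems listed above.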
Following (Shieber and Schabes, 1990) we will refer to the general approach as syn- chronous rewriting. While synchronous systems are becoming more and more popular, surprisingly little is known about the formal characteristics of these systems (with the exception of the finite-state de- vices). In this paper, we argue that existing synchronous systems cannot handle, in a computationally attrac- 116 tive way, a standard problem in syntax/semantics translation, namely quantifier scoping. We propose a new system that provides a synchronization be- tween two unordered vector grammars with domi- nance links (UVG-DL) (Rainbow, 1994). The type of synchronization is closely based on a previously proposed model, which we will call "local" synchro- nization. We argue that this synchronous system can deal with quantifier scoping in the desired way. The proposed system has the weak language preservation property, that is, the defined synchronization mech- anism does not alter the weak generative capacity of the formalism being synchronized. Furthermore, the tree-to-forest translation problem for our system can be solved in polynomial time; that is, given a derivation tree obtained according to one of the syn- chronized grammars, we can construct the forest of all the translated derivation trees in the other gram- mar, using a polynomial amount of time. The structure of this paper is as follows. In Sec- tion 2, we introduce quantifier raising and review two types of synchronization and mention some new formal results. We introduce our new synchronous system in Section 3, and present our formal results and outline the proof techniques in Section 4. 2 Types of Synchronization 2.1 Quantifier Raising We start by presenting an example which is based on transfer between a syntactic representation and a "semantic" representation of the scoping of quan- tified NPs. It is generally assumed that in English (and many other languages), quantified arguments of a verb can (in appropriate contexts) take scope in any possible order, and that this generalization extends to cases of embedded clauses (May, 1985). 1 For example, sentence (1) can have four possible in- terpretations (of the six possible orderings of the quantifiers, two pairs are logically equivalent), two of which are shown in (2). (1) Every man thinks some official said some Nor- wegian arrived (2) a. Vx, x a man, 3y, y an official, 3z, z a Nor- wegian, x thinks y said z arrived b. 3z, z a Norwegian, 3y, y an official, Vx, x a man, x thinks y said z arrived ~We explicitly exclude from our analysis cases of quantified NPs embedded in NPs, and do not, of course, propose to develop a serious linguistic theory of quanti- fier scoping. We give a simplified syntactic representation for (1) in Figure 1, and a simplified semantic represen- tation for (2b) in Figure 2. S every man VP thinks S some official VP said S some Norwegian arrived Figure 1: Syntactic representation for (1) F exists z, F z a Norwegian exists y, F y an official for all x, F x a man think T F X say T F ' Y arrive T I g Figure 2: Semantic representation for (2b) 2.2 Non-Local Synchronization We will first discuss a type of synchronization pro- posed by (Shieber and Schabes, 1990), based on TAG. We will refer to this system as non-local syn- chronous TAG (nISynchTAG). 
The synchronization is non-local in the sense that once links are intro- duced during a derivation by a synchronized pair of grammar rules, they need not continue to impinge on the nodes that introduced them: the links may be re- assigned to a newly introduced nonterminal when an original node is rewritten. We will refer to this mecl/- anism as link inheritance. To illustrate, we will give as an example an analysis of the quantifier-raising example introduced above, extending in a natural manner an example given by Shieber and Schabes. The elementary structures are shown in Figure 3 (we only give one NP -- the others are similar). The nominal arguments in the syntax are associated with 117 NP F F t { t every man for all x, F x lam~ Figure 3: Elementary structures in nlSynchTAG pairs of trees in the semantics, and are linked to two nodes, the quantifier and the variable. The deriva- tion proceeds as illustrated in Figure 4, finally yield- ing the two structures in Figure 1 and Figure 2. Note that some of the links originating with the NP nodes are inherited during the derivation. By changing the order in which we add the nominal arguments at the end of the derivation, we can obtain all quantifier scopes in the semantics. The problem with non-local synchronization is that the weak language preservation property does not hold. (Shieber, 1994) shows that not all nlSynchTAG left-projection languages can be gen- erated by TAGs. As a new result, in (Rambow and Satta, 1996) we show that the recognition of some fixed left-projection languages of a nlSynchTAG is NP-complete. Our reduction crucially relies on link inheritance. This makes nlSynchTAG unattractive for applications in theoretical or computational lin- guistics. 2.3 Local Synchronous Systems In contrast with non-local synchronization, in local synchronization there is no inheritance of synchro- nization links. This is enforced by requiring that the links establish a bijection between nonterminals in the two synchronously derived sentential forms, that is, each nonterminal must be involved in exactly one link. In this way, once a nonterminal is rewrit- ten through the application of a pair of rules to two NP ~ arrive T ( Figure 4: Non-local derivation in nlSynchTAG linked nonterminals, no additional link remains to be transferred to the newly introduced nonterminals. As a consequence of this, the derivation structures in the left and right grammars are always isomorphic (up to ordering and labeling of nodes). The canonical example of local synchronization is SDTS (Aho and Ullman, 1969), in which two context-free grammars are synchronized. We give an example of an SDTS and a derivation in Fig- ure 5. The links are indicated as boxed numbers to the right of the nonterminal to which they ap- ply. (Shieber, 1994) defines the tree-rewriting ver- sion of SDTS, which we will call synchronous TAG (SynchTAG), and argues that SynchTAG does not have the formal problems of nlSynchTAG (though 118 Grammar: NPS? likes NP[ NP4~ -+ John NP_~ -~ the white N~ NL~ j --~ house Derivation: (SE], Sg]) ==~(NPE] likes NEE], NP[~] pla~t a NP[~]) :::=~(NP[~] likes the white N~, la N~ blanche plai~ d NP[-;]) (John likes the white house, la maison blanche pla~t d Jean) Figure 5: Sample SDTS and derivation S[~ NPE] pla~t ~ NPF1 NP[4[ -+ Jean NP~ -~ la N~ blanche NIT ] --~ rnaison (Shieber, 1994) studies the translation problem mak- ing the unappealing assumption that each tree in the input grammar is associated with only one output grammar tree). 
However, SynchTAG cannot derive all possible scope orderings, because of the locality restriction. This can be shown by adapting the proof technique in (Becker, Rambow, and Niv, 1992). In the follow- ing section, we will present a synchronous system which has local synchronization's formal advantages, but handles the scoping data. 3 Extended Local Synchronization In this section, we propose a new synchronous sys- tem, which is based on local synchronization of unordered vector grammars with dominance links (UVG-DL) (Rambow, 1994). The presentations will be informal for reasons of space; we refer to (Ram- bow and Satta, 1996) for details. In UVG-DL, sev- eral context-free string rewriting rules are grouped into sets, called vectors. In a derivation, all or no rules from a given instance of a vector must be used. Put differently, all productions from a given vector must be used the same number of times. They can be applied in any order and need not be applied simultaneously or one right after the other. In addi- tion, UVG-DL has dominance links. An occurrence of a nonterminal A in the right-hand side of a rule p can be linked to the left-hand nonterminal of another rule p' in the same vector. This dominance link will act as a constraint on derivations: if p is used in a derivation, then p' must be used subsequently in the subderivation that starts with the occurrence of A introduced by p. A UVG-DL is lexicalized iff at least one production in every vector contains a ter- minal symbol. Henceforth, all UVG-DLs mentioned in this paper will implicitly be assumed to be lex- icalized. The derivation structure of a UVG-DL is just the derivation structure of the same derivation in the underlying context-free grammar (the CFG obtained by forming the union of all vectors). We give an example of a UVG-DL in Figure 6, in which the dotted lines represent the dominance links. A sample derivation is in Figure 7. { for all x, F x xaman '.,....' { exists y, F i Y say T F y an official '.,. ,.' z a Norwegian :. ... Figure 6: A UVG-DL for deriving semantic repre- sentations such as (2) Our proposal for the synchronization of two UVG- DL uses the notion of locality in synchronization, but with respect to entire vectors, not individual productions in these vectors. This approach, as we will see, gives us both the desired empirical coverage and acceptable computational and formal results. We suppose that in each vector v of a UVG-DL there is exactly one privileged element, which we call the synchronous production of v. All other elements of v are referred to as asynchronous productions. In Figures 6 and 7, the synchronous productions are designated by a bold-italic left-hand side symbol. Furthermore, in the right-hand side of each asyn- chronous production of v we identify a single non- terminal nonterminal, called the heir. In a synchronous UVG-DL (SynchUVG-DL), vec- tors from one UVG-DL are synchronized with vec- tors from another UVG-DL. Two vectors are syn- chronized by specifying a bijective synchronization mapping (as in local synchronization) between the non-heir right-hand side occurrences of nonterminals in the productions of the two vectors. A nontermi- nal on which a synchronization link impinges is re- ferred to as a synchronous nonterminal. A sample SynchUVG-DL grammar is shown in Figure 9. 
Informally speaking, during a SynchUVG-DL derivation, the two synchronous productions in a pair of synchronized vectors must be applied at the same time and must rewrite linked occurrences of nonterminals previously introduced. The asyn- chronous productions of the two synchronized gram- 119 mars are not subject to the synchronization require- ment, and they can be applied at any time and in- dependently of the other grammar (but of course subject to the grammar-specific dominance links). Any synchronous links that impinge on a nonter- minal rewritten by an asynchronous production are transferred to the heir of the asynchronous produc- tion. A production may introduce a synchronous nonterminal whose counterpart in the other gram- mar has not yet been introduced. In this case, the link remains "pending". Thus, while in SynchUVG- DL there is link inheritance as in non-local synchro- nization, link inheritance is only possible with those productions that themselves are not subject to the synchronization requirement. The locality of the synchronization becomes clear when we consider a new tree structure which we introduce here, called the vector derivation tree. Consider two synchronized UVG-DLderivations in a SynchUVG-DL. The vector derivation tree for either component derivation is obtained as follows. Each instance of a vector used in the derivation is repre- sented as a single node (which we label with that vector's lexeme). A node representing a vector vl is immediately dominated by the node representing the vector v2 which introduced the synchronization link that the synchronous production of vl rewrites. Unlike the standard derivation tree for UVG-DL, the vector derivation tree clearly shows how the vectors (rather than the component rules of the vectors) were combined during the derivation. The vector derivation tree for the derivation in Figure 7 is shown in Figure 8. F exists z, F z aNor~cgi~ .~-~.~ exists y, F ............ y an official a ~--llx, -F ....... "'"" ........... . . lot , "'". x a man ~ . think T F ..' X say T F I Y arrive T I Z Figure 7: Derivation of (2b) in a UVG-DL It should be clear that the vector derivation trees for two synchronized derivations are isomorphic, re- flecting the fact that our definition of SynchUVG- think every man say exists arrive an official exists a Norwegian Figure 8: Vector derivation tree for derivation of (2b) DL is local with respect to vectors (though not with respect to productions, since the derivation trees of two synchronized UVG-DL derivations need not be isomorphic). The vector derivation tree can be seen as representing an "outline" for the derivation. Such a view is attractive from a linguistic perspective: if each vector represents a lexeme and its projection (where the synchronous production is the basis of the lexical projection that the vector represents), then the vector derivation tree is in fact the depen- dency tree of the sentence (representing direct re- lations between lexemes such as grammatical func- tion). In this respect, the vector derivation tree of UVG-DL is like the derivation tree of tree adjoining grammar and of D-tree grammars (DTG) (Rambow, Vijay-Shanker, and Weir, 1995), which is not sur- prising, since all three formalisms share the same extended domain of locality. 
Furthermore, the vec- tor derivation tree of SynchUVG-DL shares with the the derivation tree of DTG the property that it reflects linguistic dependency uniformly; however, while the definition of DTG was motivated pre- cisely from considerations of dependency, the vector derivation tree is merely a by-product of our defi- nition of SynchUVG-DL, which was motivated from the desire to have a computationally tractable model of synchronization more powerful than SynchTAG.2 We briefly discuss a sample derivation. We start with the two start symbols, which are linked. We then apply an asynchronous production from the se- mantic grammar. In Figure 10 (top) we see how the link is inherited by the heir nonterminal of the applied production. This step is repeated with two more asynchronous productions, yielding Figure 10 (bottom). We now apply productions for the bodies of the clauses, but stop short before the two syn- chronous productions for the arrive clause, yielding Figure 11. We see the asynchronous production of the syntactic arrive vector has not only inherited the link to its heir nonterminal, but has introduced a link 2We do not discuss modifiers in this paper for lack of space. 1 20 S F { every man for all x, F* .: x x a Irmn :..... S-.. i ~" some officiall ~- exists y, F* y y an official '.. ./ ....* Figure 9: SynchUVG-DL grammar for quantifier scope disambiguation F S ~ exists z, F* F s ~ e X i s t s z, F z a Norwegian ~ ........... exists y, F "'"'-.. y an official ~ . . . ": for all x. F* 'i i Figure 10: SynchUVG-DL derivation, steps 1 and 2 of its own. Since the semantic end of the link has not been introduced yet, the links remains "pend- ing" until that time. We then finish the derivation to obtain the two trees in Figure 1 and Figure 2, with no synchronization or dominance links left. 4 Formal results Theorem 1 SynchUVG-DL has the language preservation property. Proof (outline). Let Gs be a SynchUVG-DL, G' and G" its left and right UVG-DL components, re- spectively. We construct a UVG-DL G generating the left-projection language of Gs. G uses all the S F NP VP exists z, F [ ~ z a Norwegian ~ .............. [ thinks S exists y, E.. "".. [ ~ y an off,c,al ~ ...... "..... [ NP VP for all x, F ""., "... / ~ said S think T F / ".. Figure 11: SynchUVG-DL derivation, step 3 nonterminal symbols of G' and G", and some com- pound nonterminals of the form [A, B], A and B nonterminals of G' and G", respectively. G simu- lates Gs derivations by intermixing symbols of G' and symbols of G", and without generating any of the terminal symbols of G". Most important, each pair of linked nonterminals generated by Gs is rep- resented by G using a compound symbol. This en- forces the requirement of simultaneous application of synchronous productions to linked nonterminals. Each vector v of G is constructed from a pair of synchronous vectors (v', v") of Gs as follows. First, all instances of nonterminals in v" are replaced by e. Furthermore, for any instance B of a right-hand side nonterminal of v" linked to a right-hand side non- terminal A of v', B is replaced by E and A by [A, B]. Then the two synchronous productions in v ~ and v" are composed into a single production in v, by com- posing the two left-hand sides in a compound symbol and by concatenating the two right-hand sides. 
Fi- nally, to simulate link inheritance in derivations of Gs, each asynchronous production in v' and v" is transferred to v, either without any change, or by composing with some nonterminal C both its left- hand side and the heir nonterminal in its right-hand side. Note that there are finitely many choices for the last step, and each choice gives a different vector in G, simulating the application of v' and v" to a set of (occurrences of) nonterminals in a particular link configuration in a sentential form of Gs. • We now introduce a representation for sets of derivation trees in a UVG-DL G. A parse tree in G is an ordered tree representing a derivation in G and encoding at each node the production p used to start the corresponding subderivation and the mul- tiset of productions f used in that subderivation. A 121 parse forest in G is a directed acyclic graph which is ordered and bipartite. (We use ideas originally developed in (Lang, 1991) for the context-free case.) Nodes of the graph are of two different types, called and-nodes and or-nodes, respectively, and each di- rected arc connects nodes of different types. A parse forest in G represents a set T of parse trees in G if the following holds. When starting at a root node and walking through the graph, if we follow exactly one of the outgoing arcs at each or-node, and all of the outgoing arcs at each and-node, we obtain a tree in T modulo the removal of the or-nodes. Further- more, every tree in T can be obtained in this way. Lemma 2 Let G be a UVG-DL and let q >__ 1 be a natural number. The parse forest representing the set of all parse trees in G with no more than q vectors can be constructed in an amount of time bounded by a polynomial function of q. • Let Gs be a SynchUVG-DL, G' and G" its left and right UVG-DL components, respectively. For a parse tree T in G', we denote as T(T) the set of all parse trees in G" that are synchronous with T according to Gs. The parse-to-forest translation problem for Gs takes as input a parse tree r in G' and gives as output a parse forest representation for T(T). If Gs is lexicalized, such a parse forest has size bounded by a polynomial function of I T I, despite the fact that the size of T(~) can be exponentially larger than the size of T. In fact, we have a stronger result. Theorem 3 The parse-to-forest translation prob- lem for a lexiealized SynchUVG-DL can be computed in polynomial time. Proof (outline). Let Gs be a SynchUVG-DL with G' and G" its left and right UVG-DL com- ponents, respectively. Let T be a parse tree in G ~ and 7r be the parse forest representing T(T). The construction of 7r consists of two stages. In the first stage, we construct the vector deriva- tion tree 7 associated with T. Let q be the number of nodes of % We also construct a parse forest 7rq representing the set of all parse trees in G" with no more than q vectors. This stage takes polynomial time in the size of % since 3' can be constructed from r in linear time and 7rq can be constructed as in Lemma 2. In the second stage, we remove from 7rq all the parse trees not in 7r. This completes the construc- tion, since the set of parse trees represented by 7r is included in the set of parse trees represented by 7rq. Let nr and F be the root node and the set of all nodes of 7, respectively. For n E F, out(n) denotes the set of all children of n. We call family the set {n~} and any nonempty subset of out(n), n E F. 
The main idea is to associate a set of families ~n to each node n of 7rq, such that the following condition is satis- fied. A family F belongs to ~-n if and only if at least one subderivation in G" represented at n induces a forest of vector derivation trees whose root nodes are all and only the nodes in F. Each ~'n can eas- ily be computed visiting 7rq in a bottom-up fashion. Crucially, we "block" a node of 7rq if we fail in the construction of ~'n. We claim that each set ~'n has size bounded by the number of nodes in % This can be shown using the fact that all derivation trees rep- resented at a node of ~rq employ the same multiset of productions of G". From the above claim, it follows that 7rq can be processed in time polynomial in the size of r. Finally, we obtain 7r simply by removing from 7rq all nodes that have been blocked. • 5 Conclusion We have presented SynchUVG-DL, a synchronous system which has restricted formal power, is com- putationally tractable, and which handles the quantifier-raising data. In addition, SynchUVG-DL can be used for modeling the syntax of languages with syntactic constructions which have been ar- gued to be beyond the formal power of TAG, such as scrambling in German and many other lan- guages (Rainbow, 1994) or wh-movement in Kash- miri (Rambow, Vijay-Shanker, and Weir, 1995). SynchUVG-DL can be used to synchronize a syn- tactic grammar for these languages either with a se- mantic grammar, or with the syntactic grammar of another language for machine translation applica- tions. However, SynchUVG-DL cannot handle the list of cases listed in (Shieber, 1994). These pose a problem for SynchUVG-DL for the same reason that they pose a problem for other local synchronous sys- tems: the (syntactic) dependency structures repre- sented by the two derivations are different. These cases remain an open research issue. Acknowledgments Parts of the present research were done while Ram- bow was supported by the North Atlantic Treaty Or- ganization under a Grant awarded in 1993, while at TALANA, Universit6 Paris 7, and while Satta was visiting the Center for Language and Speech Pro- cessing, Johns Hopkins University, Baltimore, MD. References Abeill6, Anne. 1994. Syntax or semantics? Han- dling nonlocal dependencies with MCTAGs or 122 Synchronous TAGs. Computational Intelligence, 10(4):471-485. Abeilld, Anne, Yves Schabes, and Aravind Joshi. 1990. Using lexicalized TAGs for machine trans- lation. In Proceedings of the 13th International Conference on Computational Linguistics (COL- ING'90), Helsinki. COLING-90. Aho, A. V. and J. D. Ullman. 1969. Syntax di- rected translations and the pushdown assembler. J. Comput. Syst. Sci., 3(1):37-56. Aho, A. V. and J. D. Ullman. 1972. The Theory of Parsing, Translation, and Compiling. Prentice Hall, Englewood Cliffs, NJ. Becket, Tilman, Owen Rambow, and Michael Niv. 1992. The derivational generative power, or, scrambling is beyond LCFRS. Technical Report IRCS-92-38, Institute for Research in Cognitive Science, University of Pennsylvania. Bresnan, J. and R. Kaplan. 1982. Lexical-functional grammar: A formal system for grammatical repre- sentation. In J. Bresnan, editor, The Mental Rep- resentation of Grammatical Relations. MIT Press. Chomsky, Noam. 1993. A minimalist program for linguistic theory. In Kenneth Hale and Samuel J. Keyser, editors, The View from Building 20. MIT Press, Cambridge, Mass., pages 1-52. Egedi, Dana and Martha Palmer. 1994. Constrain- ing lexical selection across languages using TAG. 
In 3 e Colloque International sur les Grammaires d'Arbres Adjoints (TAG+3), Rapport Technique TALANA-RT-94-01. Universit~ Paris 7. Gross, Maurice. 1989. The use of Finite-State Au- tomata in the lexical representation of natural lan- guage. In M. Gross and D. Perrin, editors, Elec- tronic Dictionaries and Automata in Computa- tional Linguistics. Springer. Harbusch, Karin and Peter Poller. 1994. Structural rewriting with synchronous rewriting systems. In 3 ~ Colloque International sur les Grammaires d'Arbres Adjoints (TAG+3), Rapport Technique TALANA-RT-94-01. Universit~ Paris 7. Kaplan, Ronald M. and Martin Kay. 1994. Regular models of phonological rule systems. Computa- tional Linguistics, 20(3):331-378. Koskenniemi, Kimmo. 1983. Two-level morphol- ogy: A general computational model for word- form recognition and production. Technical Re- port 11, Department of General Linguistics, Uni- versity of Helsinki. Lang, B. 1991. Towards a uniform formal frame- work for parsing. In M. Tomita, editor, Current Issues in Parsing technology. Kluwer Academic Publishers, chapter 11, pages 153-171. May, Robert. 1985. Logical Form: Its structure and Derivation. MIT Press, Cambridge, Mass. Pereira, Fernando. 1991. Finite-state approxima- tion of phrase structure grammars. In 29th Meet- ing of the Association for Computational Linguis- tics (ACL'91), Berkeley, California. ACL. Pollard, Carl and Ivan Sag. 1994. Head- Driven Phrase Structure Grammar. University of Chicago Press, Chicago. Prigent, Gilles. 1994. Synchronous tags and ma- chine translation. In 3 e Colloque International sur les Grammaires d'Arbres Adjoints (TAG+3), Rapport Technique TALANA-RT-94-01. Univer- sit~ Paris 7. Rambow, Owen. 1994. Multiset-valued linear index grammars. In 32nd Meeting of the Association for Computational Linguistics (.4 CL '94). ACL. Rambow, Owen and Giorgio Satta. 1996. Syn- chronous models of language. Manuscript under preparation. Rambow, Owen, K. Vijay-Shanker, and David Weir. 1995. D-Tree Grammars. In 33rd Meeting of the Association for Computational Linguistics (.4 CL'95). ACL. Roche, Emmanuel. 1993. Analyse syntaxique transformationelle du fran~ais par transducteur et lexique-grammaire. Ph.D. thesis, Universitd Raris 7, Paris, France. Shieber, Stuart and Yves Schabes. 1990. Syn- chronous tree adjoining grammars. In Proceedings of the 13th International Conference on Compu- tational Linguistics, Helsinki. Shieber, Stuart and Yves Schabes. 1991. Gener- ation and synchronous tree adjoining grammars. Computational Intelligence, 4(7):220-228. Shieber, Stuart B. 1994. Restricting the weak generative capacity of Synchronous Tree Ad- joining Grammar. Computational Intelligence, 10(4):371-385. Wu, Dekai. 1995. An algorithm for simultane- ously bracketing parallel texts by aligning words. In 33rd Meeting of the Association for Computa- tional Linguistics (ACL '95). ACL. 123 | 1996 | 16 |
Coordination as a Direct Process

Augusta Mela
LIPN-CNRS URA 1507
Université de Paris XIII
93 430 Villetaneuse FRANCE
am@ura1507.univ-paris13.fr

Christophe Fouquéré
LIPN-CNRS URA 1507
Université de Paris XIII
93 430 Villetaneuse FRANCE
cf@ura1507.univ-paris13.fr

Abstract

We propose a treatment of coordination based on the concepts of functor, argument and subcategorization. Its formalization comprises two parts which are conceptually independent. On one hand, we have extended feature structure unification to disjunctive and set values in order to check the compatibility and the satisfiability of subcategorization requirements by structured complements. On the other hand, we have considered the conjunction et (and) as the head of the coordinate structure, so that coordinate structures stem simply from the subcategorization specifications of et and the general schemata of head saturation. Both parts have been encoded within HPSG using the same resource, that is, subcategorization and its principle, which we have just extended.

(1) Jean danse la valse et le tango. (Jean dances the waltz and the tango.)
(2) Je sais son âge et qu'elle est venue ici. (I know her age and that she came here.)
(3) Un livre intéressant et que j'aurai du plaisir à lire. (An interesting book and which I will enjoy to read.)
(4) Je demande à Pierre son vélo et à Marie sa canne à pêche. (I ask Peter for his bike and Mary for her fishing rod.)
(5) Pierre vend un vélo et donne une canne à pêche à Marie. (Peter sells a bike and gives a fishing rod to Mary.)

We claim here that the "local combinatory potential" of lexical heads, encoded in the subcategorization feature, explains the previous linguistic facts: conjuncts may be of different categories as well as of more than one constituent; they just have to satisfy the subcategorization constraints.

1 Introduction

Coordination has always been a centre of academic interest, be it in linguistic theory or in computational linguistics. The problem is that the assumption according to which only constituents of the same category (1) may be conjoined is false; indeed, coordinations of different categories (2)-(3) and of more than one constituent (4)-(5) should not be dismissed, though marginal in written texts, and must be accounted for.1

1 This research has been done for the French coordination et (and). We focus here on the coordination of syntagmatic categories (as opposed to lexical categories). More precisely, we account for cases of non-constituent coordination (4) and of Right Node Raising (5), but not for cases of Gapping.

Our approach, which is independent of any framework, is easily and precisely encoded in the formalism of Head Driven Phrase Structure Grammar (HPSG) (Pollard and Sag, 1994), which is based on the notion of head and makes available the feature sharing mechanism we need. The paper is organized as follows. Section 2 gives a brief description of basic data and discusses some constraints and available structures. Section 3 summarizes previous approaches and section 4 is devoted to our approach. The French coordination with et serves throughout the paper as an example.
The constituents may be of the same category (1) as well as of different categories (2)-(3). However, this last case is con- strained as examplified hereafter 2. (2) Je sais son gge et qu'elle est venue ici. (I know her age and that she came here.) (2a) Je sais son £ge et son adresse. (I know her age and her address.) (2b) Je sais qu'elle a 30 ans et qu'elle est venue ici. (I know that she is 30 and that she came here.) (2c) *Je sais £ Marie et qu'elle est venue ici. *(I know to Marie and that she came here.) (2d) 3e demande l'addition et que quelqu'un paie. (I ask for the bill and for someone to pay.) (2e) *]e rends ]'addition et que quelqu'un paie. *(I give back the bill and someone to pay.) In these examples, the coordinate structure acts as the argument of the verb. This verb must subcate- gorize for each constituent of the coordination and this is not the case in example (2c)-(2e). Note that modelizing coordination of different categories as the unification (i.e. underspecification) of the different categories would lead to accept the six examples or wrongly reject (2d) according to the descriptions used 3. Coordination of more than one constituent are of- ten classified as Conjunction Reduction (4), Gap- ping (la-lb) and Right Node Raising (5) (Hudson, 1976). (la) Jean danse la valse et Pierre, le tango. (Jean dances the waltz and Pierre the tango.) (lb) Hier, Jean a dans~ la valse et aujourd'hui, le tango. (Yesterday, Jean danced the waltz and today, the tango.) In the case of Gapping structures, the subject (la) and/or an extracted element (lb) is present in the two sides. The only allowed coordinated structure is [Jean danse la valse] et [Pierre le tango] for (la) and [Hier, Jean a dansd la valse] et [aujourd'hui, le tango] for (lb) as wh-sentences on other parts ([la valse] el [Pierre]or [la valse] el [Pierre le langoj~ are impossible. A contrario, in the case of Conjunction Reduc- tions, wh-sentences as well as cliticization are al- 2The star * marks ungrammatical sentences. 3Apart from ad hoc modelizations. lowed referring to what follows the verb (as for coor- dination of constituents) and treating the arguments simultaneously on the two parts of the coordination: (4a) Je sais k qui demander un v~lo etune canne p~che. (I know who I ask for a bike and for a fishing rod.) (4b) 3e sais ~ qui les demander. (I know who I ask for them.) (4c) Je leur demande un v~lo etune canne ~ p~che. (I ask them for a bike and for a fishing rod.) (4d) Je les leur demande. (I ask them for them.) Let us remark that a comma is inserted between Marie and sa canne ~ p~che in case of extraction before el as in (lb), indicating the two sentences have not necessarily to be analyzed in the same way: (4e) Je demande £ Pierre son v~lo et £ Marie sa canne ~ p~che. (I ask Peter for his bike and Marie for her fishing rod.) (4f) A Pierre, je demande son v~lo et £ Marie, sa canne ~ p~che. (Peter, I ask for a bike and Marie, for a fishing rod.) Two structures are available in case of Conjunc- tion Reductions. One structure corresponds to a co- ordination of sentences with a gap of the verb after el, the other one consists in taking the coordinate parallel sequence of constituents as only one struc- ture. The previous facts argue for the second pos- sibility (see also section 3 for criticism of deletion approach). Last, note that gapping the verb is less compati- ble with head-driven mechanisms (and the comma in (4f) could be such a head mark, see (BEF, 1996) for an analysis of Gapping coordinations). 
It seems then that the structure needed for Conjunction Reduc- tion is some generalization of the standard structure used for coordination of constituents. Our proposal is then focused on this extension. We do not care of Gapping cases as their linguistic properties seem to be different. It remains to integrate Right-Node Raising and to extend these cases to more complicated ones. Sec- tion 4 includes examples of such cases and shows that our proposal can manage them adequately. 3 Previous Approaches There exists a classical way to eschew the question "what can be coordinated ?" if one assumes a dele- tion analysis. Indeed, according to this approach (Chomsky, 1957; Banfield, 1981), only coordination of sentences are basic and other syntagmatic coordi- nations should be considered as coordinations of re- duced sentences, the reduction being performed by deleting repeated elements. This approach comes up 125 against insurmountable obstacles, chiefly with the problem of applying transformation in reverse, in the analysis process (Schachter, 1973). A direct approach has been proposed at once by Sag & al. (Sag et al., 1985) within the framework of Generalized Phrase Structure Grammar (GPSG), by (Pollard and Sag, 1994) within HPSG, and (Bresnan, 1986) within Lexical Functional Grammar (LFG). These approaches have tried to account for coordination of different categories in reducing the constraint from requiring the same category for con- juncts to a weaker constraint of category compat- ibility. Whatever the nature of subcategorization information may be, syntactical in GPSG, hybrid in HPSG, functional in LFG, two categories are com- patible if they subsume a "common denominator", in this case a common partial structure. Technically, the compatibility is checked by com- puting a "generalization" of categories and imposing the generalization comprises all features expected in the given context. For example, the context in (6), that is, the verb ~tre (to be), expects a predicative argument and both categories NP and AP are just predicative categories. (6) I1 est le p~re de Marie et tier de l'~tre. (He is Mary's father and proud of it.) However, this solution cannot be applied gener- ally because all coordinations have not such "natu- ral" intersection (see (2)). So we claim that we have nothing else to do but explicitly enumerate, within the head subcategorization feature, all the structures allowed as complement. 4 Our Approach Our proposition involves three stages. We begin by formulating constraints on coordinate structures, then we define how to build the coordinate struc- tures and we end by specifying how the previous constraints filter through such coordinate structures. 4.1 Constraints on coordinate structures In order to precisely formulate the constraints on co- ordinate structures, we distinguish the role of func- for and that of argument, where functor categories are those that bear unsatisfied subcategorization re- quirements, as it is the case in CategoriM Grammars (Dowty, 1988). Lexical heads (1) are functors in re- lation to the arguments they select and, by compo- sition, any expression that contains an unsaturated functor is a functor (5)-(7). (7) I1 pretend d~tester et refuse ces beaux spots lumineux. (He claims to hate and refuses these beautiful spotlights.) Arguments are the complements selected by the head 4. An argument may often be realized by differ- ent categories. 
For example, the argument required by savoir (to know) may be a NP or a Comple- tive: we say that the requirement is disjunctive and we represent the different alternatives within sub- categorization feature disjunctive values. An argu- ment specification is then a disjunction of categories. When the lexical head requires several complements (to ask somebody something), the requirement is said multiple or n-requirement. To the extent that dis- junction only appears in argument specifications, a n-requirement is a multi-set of simple requirements. The choice of set (or more precisely multiset) rather than list vMue for the feature SUBCAT allows us to account for Je demande ~ Pierre son vdlo as well as Je demande son vdlo ~ Pierre. Gunji (Gunji, 1987) makes the same choice. However our criterion can be formalized in a theory whose order of arguments obeys to an obliqueness hierarchy. Requirement inheritance. A functor may com- pose with another functor or with arguments. In functor-arguments composition, the resulting ex- pression inherits the unsatisfied requirement from the functor when it is not empty. For example, in (5), both conjuncts inherit the unsatisfied require- ment from their heads. Likewise the functor com- position inherits a requirement from the unsatisfied functor ~. In (7), pretend d~tester inherits the unsat- isfied requirement of d~tester, i.e. the requirement of an object. Adjuncts. To account for the continuum which exists from strictly subcategorized complements to adjuncts, we adopt the hypothesis suggested by (Miller, 1991) according to which adjuncts could be accorded the same status as arguments by inte- grating them into the subcategorization requirement through an optional lexical rule. That would enable us to account for coordination of adjuncts of differ- ent categories (3) as well as coordination of more than one constituent with adjuncts (10)-(11) below. Note that we may still have a special feature AD- JUNCT in order to distinguish adjuncts from other complements if necessary. Note also that these lexi- cal rules can be interpreted statically as well as dy- namicMly. In the first case, the extended lexicon is pre-computed and requires no runtime application. 4In this paper, we restrict arguments to complements. In our HPSG encoding, they are treated in the SUBCAT feature. In a Borsley-like manner, we suppose a special feature for the subject. However, our approach can be generalized to subjects. 5In functor composition, functors cannot be both un- saturated: ~" 1l promet de manger d sa m~re des ba- nanes.(* he promises to eat his mother bananas.), cf. the Incomplete Constituent Constraint (Pollard and Sag, 1994). 126 Satisfiability conditions of requirements. We observe here that a coordination of different cat- egories may appear as head complement when the head requirement is disjunctive and a coordination of more than one constituent appears when such a requirement is multiple. Last, functors may conjoin when their subcategorization requirements are com- patible. These observations are synthesized in one coordination criterion. The first observation is summarized in (C1) and illustrated in (2'). (C1) A subcategorization 1-requirement is satis- fied either by one of the disjuncts or by a coordi- nation of disjuncts. (2') Je sais son ~ge/qu'elle est venue ici / son £ge et qu'elle est venue iei. (I know her age/that she came here [ her age and that she came here.) 
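A small procedural sketch of (C1) may help; in it, categories are reduced to atomic labels, so that "matches" is simple membership rather than the category unification used in the grammar proper, and the entry for savoir is our own illustration.

```python
# Sketch of condition (C1): a disjunctive 1-requirement is satisfied either
# by one of the disjuncts or by a coordination of disjuncts.  Category
# labels and the lexical entry are illustrative only.
SAVOIR_OBJECT = {"NP", "Compl"}   # disjunctive 1-requirement of savoir

def satisfies_c1(argument, requirement):
    """`argument` is the set of category labels of the phrase: a singleton
    for a simple complement, several labels for a coordination of unlike
    categories.  Every conjunct must match some disjunct."""
    return all(cat in requirement for cat in argument)

print(satisfies_c1({"NP"}, SAVOIR_OBJECT))           # son age
print(satisfies_c1({"Compl"}, SAVOIR_OBJECT))        # qu'elle est venue ici
print(satisfies_c1({"NP", "Compl"}, SAVOIR_OBJECT))  # their coordination, as in (2')
print(satisfies_c1({"PP", "Compl"}, SAVOIR_OBJECT))  # rejected, cf. (2c)
```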
The second one is illustrated below, where subcategorization n-requirements are satisfied either by:

• a series of n complements which satisfy respectively the n requirements
(8) Je demande à Pierre son vélo et sa canne à pêche. (I ask Peter for his bike and for his fishing rod.)

• a coordination of a series of this kind
(9) Je demande à Pierre son vélo et à Marie d'où elle vient. (I ask Peter for his bike and Mary where she comes from.)

• a coordination may concern sub-series of arguments
(10) Pierre a acheté un livre à Marie et un disque à Pierre pour 100F. (Peter has bought a book for Mary and a CD for Peter for 100F.)

• or sequences of more than one constituent with adjuncts
(11) J'ai vu Pierre hier et Marie lundi. (I have seen Peter yesterday and Mary Monday.)

• or adjuncts of different categories (3).
(3) Un livre intéressant et que j'aurai du plaisir à lire. (An interesting book and which I will enjoy to read.)

All these situations are summarized in (C2):

(C2) A subcategorization n-requirement is satisfied by m arguments, 0 < m ≤ n, either by a sequence of m arguments such that each argument satisfies one and only one element of the requirement, or by a coordination of such sequences. The result has a n - m requirement.

Coordination criterion: satisfying and imposing requirements. As an entity can be both functor and argument (12)-(13), our coordination criterion (necessary condition) is the following one: the conjuncts must satisfy the same simple or multiple subcategorization requirement and impose compatible subcategorization requirements.

4.2 Computing the subcategorization requirements compatibility

We now have to define an extension of the usual unification of structures in order to compute the compatibility of subcategorization requirements. This extension is an internal operation over the subcategorization requirements which accounts for disjunctive and set values. U is the unification of argument specifications, defined from the unification of categories; U+ is its extension to n-requirements.

• Unification of two argument specifications α and β. Let α = s_1 ∨ ... ∨ s_p and β = t_1 ∨ ... ∨ t_q, with categories s_k, t_l. Then α U β = ∨_{k,l} (s_k U t_l) for the pairs k, l such that s_k U t_l exists; α U β is undefined if s_k U t_l exists for no k, l.

• Unification of two n-requirements Φ and Ψ. Let Φ = {α_i | i ∈ [1,n]} and Ψ = {β_i | i ∈ [1,n]} be two n-requirements, where the α_i and β_i are argument specifications. The extended unification U+ of Φ and Ψ is defined if there exists a permutation p on [1,n] such that α_i U β_{p[i]} exists for all i ∈ [1,n]. In this case Φ U+ Ψ = {α_i U β_{p[i]} | i ∈ [1,n]}; otherwise Φ U+ Ψ is undefined.

Note that (C1) and (C2) should be computed simultaneously in order to account for structures such as (9). The notion of partial saturation in (C2) allows us to account for coordination of sub-series of arguments as in (10).

Functors coordination and compatibility of requirements. Functors may be simple (1), composed (7), of different structures (12) or partially saturated (13)-(5).
(12) Je pense offrir et que je recevrai des cadeaux. (I think to offer and that I will receive gifts.)
(13) Je pense recevoir de Jean et offrir à Pierre du caviar de Russie. (I expect to receive from John and to offer to Peter Russian caviar.)

In all cases, when they are conjoined, they share their arguments: there must therefore exist at least one possibility of satisfying them simultaneously.
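A procedural reading of the two operations just defined is sketched below; categories are approximated by atomic labels (so that two categories unify iff they are identical), whereas the grammar itself unifies feature structures, and the lexical encodings at the end are our own.

```python
from itertools import permutations

def unify_cat(s, t):
    return s if s == t else None        # stand-in for category unification

def unify_spec(alpha, beta):
    """U: the disjunction of all pairwise unifications that exist;
    undefined (None) when no pair unifies."""
    result = set()
    for s in alpha:
        for t in beta:
            u = unify_cat(s, t)
            if u is not None:
                result.add(u)
    return result or None

def unify_requirements(phi, psi):
    """U+: defined iff some permutation aligns the two n-requirements so
    that every pairwise U succeeds; returns one such alignment."""
    if len(phi) != len(psi):
        return None
    for p in permutations(range(len(phi))):
        paired = [unify_spec(phi[i], psi[p[i]]) for i in range(len(phi))]
        if all(spec is not None for spec in paired):
            return paired
    return None

# The partially saturated functors conjoined in (13) each still expect an
# NP object, so their requirements unify:
print(unify_requirements([{"NP"}], [{"NP"}]))      # [{'NP'}]
# Functors expecting different, non-unifiable things could not conjoin:
print(unify_requirements([{"NP"}], [{"Compl"}]))   # None
```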
In this case, the unification of their subcategorization requirements succeeds and they are said to be com- patible and the two functors may be conjoined. This unification has to account for disjunctive values. 127 I Two n-requirements are compatible iff their uni- I fication//+ succeeds. I We consider that conjoined functors should have the same valence 6. Note that the unification of two n-requirements is ambiguous because we may have several permutations which lead to success. 4.3 How coordinate structures are built Until now we have just defined constraints on the coordinate structures but we did not mention how these structures are built. We want that a coordi- nate structure inherits features from its conjuncts without necessarily failing in case of conflicting val- ues. The generalization method (Sag et al., 1985) has this objective but overgenerates because the con- flicting values are ignored. In contrast, the use of composite categories (Cooper, 1991) keeps conflict- ing values within the connective "A". Intuitively, if son age (her age) is a NP and qu'elle est venue ici (that she came here) is a Completive, son dge et qu 'elle es~ venue ici (her age and tha~ she came here) is a conjunctive composite category NPACompl. The structuring of categories : composite and tuple of categories. We propose to extend the operation A to complex categories and to use a new connective < ... > in order to define tuple of categories. With these two connectives, a total structuring of categories is possible and all the coor- dinate structures may have a status. For example, the underlined expression in (14) will be represented by the structured category: (pp, [NPACornpl] \ LSubcat PP J/" (14) Je recommande ~ Pierre la lecture et qu'il s'inspire de la Bible. (I recommend to Peter the lecture and that he inspires himself of the Bible.) The extension to complex categories is not uni- form. Coordinate structure features are not neces- sarily composites or tuples of corresponding features from each conjunct. In fact, features which are al- lowed to have conflicting values will be compounded, whereas other features as SUBCAT must unify. This structuring is encoded later within the definition of the lexical entry of et. Lexicalization of the coordination rule. We consider, as in (Paritong, 1992), the conjunction et as the head of the coordinate structure. Con- sequently, coordinate structures no longer have to be postulated in the grammar by a special rule of coordination: they stem simply from the general 6This condition will forbid the conjunction of e.g. verbs with SUBCAT lists of different lengths, but which would have a unification under the alternative interpre- tation, thus avoiding sentences like *John bought and gave the book to Mary, (Miller, 1991). schemata of the head saturation and the subcatego- rization specifications of the conjunction. For sake of simplicity, only binary coordination is treated here. (Paritong, 1992) accounts for multiple coordination as a binary structure where the comma has a simi- lar function as a lexical conjunction. With that one restriction, the tIPSG-like lexical entry of et can be: I Phon \et\ Synsern <[xl,...,IMl>^<llq ..... [Mq>lCat= ['Part <Ca,...,CM>A<C~,...,C~M> Part C1 Part C | |Sub,at I,,,, reart C: 1 ..... r Part elM "] I I I''' [S,,b,~,~ {}] ' ...,t'" J [S~,b~at ¢'~J ' The following LP-constraint on the lexical entry of et ensures the correct order of conjunction and conjuncts: [i] <conj < [i'], where i E [1, M], i' E [1', M']. 
This LP-constraint is the minimum required to distinguish the two parts of the coordinate struc- ture. However, the functor this coordinate struc- ture (partially-)saturates may impose its own LP- constraint (e.g. an obliqueness hierarchy). In such a case, this LP-constraint has to be satisfied si- multaneously by the two sets {[1],...,[M]} and {[lq,..., [Mq}. To represent the inheritance of the complements, here ~M//+ff~, we use a mechanism of argument composition inspired by (I-Iinrichs and Nakazawa, 1994): the conjunction et takes as complements the two conjuncts < C1,...,CM > and < C~,...,C~ > which may remain unsaturated for their comple- ments (]~M and ~4, and the set (I~M/~q-(]?~/. The coordination of m-tuples, as well as the coordination of simple conjuncts (M = 1) stems from the satura- tion of the conjunction eL As noted in 4.1., only the last element of the tuple CM (or C~) can be unsat- urated and be the source of inheritance. Example of resulting HPSG-like anMysis is given in figure 1 for the underlined phrase in (15). (15) Jean conseille k son p~re d'acheter et ~t sa m~re d'utiliser un lave-vaisselle. (Jea~ advises his father to buy and his mother to use a dish washer.) 4.4 How the constraints apply on coordinate structures We have now to define how arguments satisfy dis- junctive and set requirements. Intuitively, if ai is a (possibly disjunctive) argument specification, an argument (possibly composite) satisfies ai iff each element of the composite category matches one dis- junct of ai. Then, if ff is a n-requirement, a tuple (or a coordination of tuples) of categories (possibly composite) satisfies ff iff each element of the tuple (for each tuple) satisfies one and only one argument specification of ft. More formally: 128 Phon \A son p&re d'acheter et& sa rn~re d'utiliser\ ] Synsern<[1],[2]>A<[3],[4]>lOat Part <PP, Oornlal>A<PP, Oornpl> ] I Subcat {NP} J J [Phon \& son p&re\ rPhon \dtaeheter\ ] [Phon \~ sa rn&re\ [Phon \dtutiliser\ ] Part Corn I Part Corn 1 I.Syns,rntlllCattPart PP]] [Sy .... [~]lCat[Subea t {.,~/~}] ] tS~ .... [3]ICattPa,'t PP]] [Sy .... [']lCat[Subcat {.~/~}] ] [Phon\et\ [Part<PP, Compl>^<PP, Compl> ]] Part PP Part Corn I [Part PP ] [Part Cornpl ] NP} I.s',~ .... <tll,t=l>^<t31,t'-l>tCat [S,.,~,=a,~ {m [S,,b,:ot {}] ,t:~} [S,.,b~o,: {_-Y'~'}] ,[31 tS,,b~at {}J ,t"4 tS,,boat {."-P}J, Figure 1: Analysis of d son pdre d'acheter et d sa m~re d'utiliser i) let a = S 1 V... V S p be an argument specifica- tion, and C = A~=I..., Cr be a composite category, then C satisfies ~ iff for each element of the compos- ite category C,there exists one disjunct of e that matches it (iffVr e [1, z],gl E [1,p]/C, US z ex- ists). ii) let • be a n-requirement s.t.: : v...v <,...,,< v...v and E be a coordination of p tuples (if p > 1) or one tuple (if p = 1) of composite categories C k s.t.: =< q,...,c, > ^...^ < > = A,=,. 4 t,r then satisfies ~ iff each specification ai has one and only one realization in each tu- ple of E (iffVk E [1,p], 3 a permutation rrk on [1, n]/Vi E [1, n] C~kti ]k satis- fies '~i). Note that these requirement satisfiability condi- tions allows us to account for examples such as (9). 4.5 A Coding in HPSG We extend here the functor saturation schemata to the coordination case, within the framework of Head Driven Phrase Structure Grammar (Pollard and Sag, 1994). A subcategorization n-requirement is satisfied by m arguments, m < n, either by a sequence of m arguments (m-tuple) or by a coordination of m- tuples. The result has a n - m requirement. 
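Both the compatibility check of 4.2 and the satisfiability conditions i) and ii) above reduce to searching for a permutation that pairs argument specifications with arguments. The sketch below is only an illustration of that search, not the HPSG encoding developed in this section: it assumes a deliberately simplified representation in which simple categories are strings, disjunctive argument specifications are sets of strings, composite categories are frozensets, and unification of simple categories is reduced to identity. The saturation schemata that follow state the same conditions declaratively within HPSG.

```python
from itertools import permutations

def unify_cat(s, t):
    """Unification of two simple categories; reduced here to identity."""
    return s if s == t else None

def unify_spec(alpha, beta):
    """Unification of two disjunctive argument specifications (sets of
    categories); returns the surviving disjunction or None if undefined."""
    result = {s for s in alpha for t in beta if unify_cat(s, t) is not None}
    return result or None

def compatible(phi, psi):
    """Extended unification U+ of two n-requirements (lists of specs):
    defined iff some permutation pairs every alpha_i with a unifiable
    beta_p(i); returns the resulting requirement or None."""
    if len(phi) != len(psi):          # conjoined functors share the same valence
        return None
    for p in permutations(range(len(psi))):
        paired = [unify_spec(a, psi[j]) for a, j in zip(phi, p)]
        if all(x is not None for x in paired):
            return paired             # one witness permutation is enough
    return None

def satisfies_spec(comp, alpha):
    """A (possibly composite) argument, given as a frozenset of categories,
    satisfies a spec iff each of its elements matches some disjunct."""
    return all(any(unify_cat(c, s) is not None for s in alpha) for c in comp)

def satisfies(tuples, phi):
    """A coordination of tuples of composite categories satisfies an
    n-requirement iff, for every tuple, some permutation realizes each
    specification exactly once (condition ii above)."""
    n = len(phi)
    for tup in tuples:
        if len(tup) != n:
            return False
        if not any(all(satisfies_spec(tup[pi[i]], phi[i]) for i in range(n))
                   for pi in permutations(range(n))):
            return False
    return True
```

A real implementation would replace unify_cat with feature-structure unification and would avoid the factorial permutation search for long valence lists; note also that, as observed in 4.2, several permutations may succeed, so the value returned by compatible is in general only one of several possible unifications.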
Saturation schemata 7 - partial (~ # {}) or total (~ = {}) of saturated complements (*' = {}) total (~ = {}) of complements, the last being partially (~' # {}) or totally saturated (~' = {}) [Synsem,Cat[Subcat~U~'] ]] Branches = [B - Yead[Synsem[Cat[Subcat ~ U ~] [B - Comp = ~[Subcat ~'] where E satisfies ~ and: • ¢ = {< s v...vsp >,..., < >} m-requirement, ~ n - m requirement • ~ ----< Cll,...,C 1 > A...A < C[,...,Cqm > coordination of q m-tuples (if q > 1) or one m- tuple (if q = 1) of composite Synsem C/k = A,=I...~ C'~ • • or ~' must be empty Example of resulting analysis is given in figure 2 for the underlined phrase in (15): (15) Jean conseille & son p@re d'acheter et& sa m~re d'utiliser un lave-vaisselle. (Jean advises his father to buy and his mother to use a dish washer.) Note that within a theory as HPSG which inte- grates syntactic and semantic information in a sin- gle representation, a whole range of lexically deter- mined dependencies, e.g. case assignment, govern- ment (of particular prepositions) and role assign- ment, are modeled at the same time via subcat- egorization because the value of subcategorization feature is a complex of syntactic and semantic infor- mation. r~ U ~Z is the set-union of ~ and t9 129 Pho. \conseille & son p~re dlacheter et h sa rn~re dlutiliser ur* lave--vaisselle\] Synserc* [VP] J Pho. \ ..... ill¢ & aon p~re d'acheter et i~ 8a rn~re dtutiliser\] [Phon \un I ...... issel/e\] Synnern IVP[Subcat {NP}] [Sy.$ern [Part NP] J [Phon \conseille\ ] [Phon \b son p~re dtacheter et b sa rn~re dS utiliser\ ] Part V . . . . <PP, Co,.p,>',, t Subcat {NP} J J Figure 2: Analysis of conseille ~ son p~re d'acheter et ~ sa m~re d'utiliser un lave-vaisselle 5 Conclusion This approach based on concept of functor, argu- ment and subcategorization allows us to account for many coordination data. Its formalization comprises two parts which are conceptually independent. On one hand, we have extended the feature structure unification to disjunctive and set values in order to check the compatibility and the satisfiability of sub- categorization requirements by structured comple- ments. On the other hand, we have considered the conjunction et as the head of the coordinate struc- ture, so that coordinate structures stem simply from the subcategorization specifications of et and a gen- eral schemata of the head saturation. Both parts have been encoded within HPSG using the same re- source that is the subcategorization and its principle which we have just extended. It remains to know in which extent our ap- proach can be used for other linguistic phenomena with symetrical sequences of more than one con- stituent (comparative constructions, Mternative con- structions): (16) Paul donne autant de couteaux aux filles que de pi~ces aux garcons. (Paul gives as much knives to the girls as coins to the boys.) References Banfield, A. 1981. Stylistic deletion in coordinate structures. Linguistics Analysis, 7(1):1-32. Bouchard, L., Emirkanian, L., Fouquer4, C. 1996. La coordination ~ trou4e : 4tude et analyse en GPSG et HPSG. In submission. Bresnan, J., Kaplan, R., Peterson, P. 1986. Co- ordination and the Flow of Information Through Phrase Structure. Ms., CSLI, Stanford Univer- sity. Chomsky, N. 1957. Structures syntaxiques. Seuil. 130 Cooper, 1%. P. 1991. Coordination in unification- based grammars. In Proceedings of the ACL, pages 167-172. Dowty, D. 1988. Type raising, functional composi- tion, and non-constituent conjunction. In Catego- rial Grammars and Natural Language Structures. 
Richard T. Oehrle et al., pages 153-197. Gunji, T. 1987. Japanese Phrase Structure Grammar. Dordrecht, Reidel. Hinrichs, E. and T. Nakazawa. 1994. Linearizing AUXs in German Verbal Complexes. In German in Head-Driven Phrase Structure Grammar. J. Nerbonne, K. Netter and C. Pollard, pages 11-37, CSLI Publications. Hudson, R. 1976. Conjunction reduction, gapping and right-node raising. Language, 52(3):535-562. Miller, P. 1991. Clitics and Constituents in Phrase Structure Grammar. Ph.D. thesis, Université libre de Bruxelles, Faculté de Philosophie et Lettres, Institut de Recherches en Intelligence Artificielle (IRIDIA). Paritong, M. 1992. Constituent coordination in HPSG. In KONVENS 92, pages 228-237. Springer Verlag. Pollard, C. and I. A. Sag. 1994. Head-Driven Phrase Structure Grammar. CSLI. Sag, I., G. Gazdar, T. Wasow, and S. Weisler. 1985. Coordination and how to distinguish categories. Natural Language and Linguistic Theory, (3):117-171. Schachter, P. 1973. Conjunction. In The Major Structures of English. Holt, Rinehart and Winston, chapter 6. Steedman, M. 1990. Gapping as constituent coordination. Linguistics and Philosophy, (13):207-263. | 1996 | 17 |
High-Performance Bilingual Text Alignment Using Statistical and Dictionary Information Masahiko Haruno Takefumi Yamazaki NTT Communication Science Labs. 1-2356 Take Yokosuka-Shi Kanagawa 238-03, Japan haruno@nttkb, ntt .jp yamazaki©nttkb, ntt .jp Abstract This paper describes an accurate and robust text alignment system for struc- turally different languages. Among structurally different languages such as Japanese and English, there is a limitation on the amount of word correspondences that can be statistically acquired. The proposed method makes use of two kinds of word correspondences in aligning bilin- gual texts. One is a bilingual dictionary of general use. The other is the word corre- spondences that are statistically acquired in the alignment process. Our method gradually determines sentence pairs (an- chors) that correspond to each other by re- laxing parameters. The method, by com- bining two kinds of word correspondences, achieves adequate word correspondences for complete alignment. As a result, texts of various length and of various genres in structurally different languages can be aligned with high precision. Experimen- tal results show our system outperforms conventional methods for various kinds of Japanese-English texts. 1 Introduction Corpus-based approaches based on bilingual texts are promising for various applications(i.e., lexical knowledge extraction (Kupiec, 1993; Matsumoto et al., 1993; Smadja et al., 1996; Dagan and Church, 1994; Kumano and Hirakawa, 1994; Haruno et al., 1996), machine translation (Brown and others, 1993; Sato and Nagao, 1990; Kaji et al., 1992) and infor- mation retrieval (Sato, 1992)). Most of these works assume voluminous aligned corpora. Many methods have been proposed to align bilin- gual corpora. One of the major approaches is based on the statistics of simple features such as sentence length in words (Brown and others, 1991) or in characters (Gale and Church, 1993). These tech- niques are widely used because they can be imple- mented in an efficient and simple way through dy- namic programing. However, their main targets are rigid translations that are almost literal translations. In addition, the texts being aligned were structurally similar European languages (i.e., English-French, English-German). The simple-feature based approaches don't work in flexible translations for structurally different lan- guages such as Japanese and English, mainly for the following two reasons. One is the difference in the character types of the two languages. Japanese has three types of characters (Hiragana, Katakana, and Kanji), each of which has different amounts of in- formation. In contrast, English has only one type of characters. The other is the grammatical and rhetorical difference of the two languages. First, the systems of functional (closed) words are quite differ- ent from language to language. Japanese has a quite different system of closed words, which greatly influ- ence the length of simple features. Second, due to rhetorical difference, the number of multiple match (i.e., 1-2, 1-3, 2-1 and so on) is more than that among European languages. Thus, it is impossible in gen- eral to apply the simple-feature based methods to Japanese-English translations. One alternative alignment method is the lexicon- based approach that makes use of the word- correspondence knowledge of the two languages. (Church, 1993) employed n-grams shared by two lan- guages. 
His method is also effective for Japanese- English computer manuals both containing lots of the same alphabetic technical terms. However, the method cannot be applied to general transla- tions in structurally different languages. (Kay and Roscheisen, 1993) proposed a relaxation method to iteratively align bilingual texts using the word cor- respondences acquired during the alignment pro- cess. Although the method works well among Euro- pean languages, the method does not work in align- ing structurally different languages. In Japanese- English translations, the method does not capture enough word correspondences to permit alignment. As a result, it can align only some of the two texts. This is mainly because the syntax and rhetoric are 131 greatly differ in the two languages even in literal translations. The number of confident word cor- respondences of words is not enough for complete alignment. Thus, the problem cannot be addressed as long as the method relies only on statistics. Other methods in the lexicon-based approach embed lex- ical knowledge into stochastic models (Wu, 1994; Chen, 1993), but these methods were tested using rigid translations. To tackle the problem, we describe in this paper a text alignment system that uses both statistics and bilingual dictionaries at the same time. Bilingual dictionaries are now widely available on-line due to advances in CD-ROM technologies. For example, English-Spanish, English-French, English-German, English-Japanese, Japanese-French, Japanese-Chinese and other dic- tionaries are now commercially available. It is rea- sonable to make use of these dictionaries in bilingual text alignment. The pros and cons of statistics and online dictionaries are discussed below. They show that statistics and on-line dictionaries are comple- mentary in terms of bilingual text alignment. Statistics Merit Statistics is robust in the sense that it can extract context-dependent usage of words and that it works well even if word segmentation 1 is not correct. Statistics Demerit The amount of word corre- spondences acquired by statistics is not enough for complete alignment. Dictionaries Merit They can contain the infor- mation about words that appear only once in the corpus. Dictionaries Demerit They cannot capture context-dependent keywords in the corpus and are weak against incorrect word segmentation. Entries in the dictionaries differ from author to author and are not always the same as those in the corpus. Our system iteratively aligns sentences by using statistical and on-line dictionary word correspon- dences. The characteristics of the system are as fol- lows. • The system performs well and is robust for var- ious lengths (especially short) and various gen- res of texts. • The system is very economical because it as- sumes only online-dictionaries of general use and doesn't require the labor-intensive con- struction of domain-specific dictionaries. • The system is extendable by registering statis- tically acquired word correspondences into user dictionaries. 1In Japanese, there are no explicit delimiters between words. The first task for alignment is , therefore, to divide the text stream into words. We will treat hereafter Japanese-English transla- tions although the proposed method is language in- dependent. The construction of the paper is as follows. First, Section 2 offers an overview of our alignment system. Section 3 describes the entire alignment algorithm in detail. 
Section 4 reports experimental results for various kinds of Japanese-English texts including newspaper editorials, scientific papers and critiques on economics. The evaluation is performed from two points of view: precision-recall of alignment and word correspondences acquired during alignment. Section 5 concerns related works and Section 6 con- cludes the paper. 2 System Overview Japanese text word seg~=~oa & pos tagging English text Word Correspondences ............................................................... : word anchor correspondence counting & setting ] 1 I AUgnment Result I Figure 1: Overview of the Alignment System Figure 1 overviews our alignment system. The input to the system is a pair of Japanese and En- glish texts, one the translation of the other. First, sentence boundaries are found in both texts using finite state transducers. The texts are then part- of-speech (POS) tagged and separated into origi- nal form words z. Original forms of English words are determined by 80 rules using the POS infor- mation. From the word sequences, we extract only nouns, adjectives, adverbs verbs and unknown words (only in Japanese) because Japanese and English closed words are different and impede text align- ment. These pre-processing operation can be easily implemented with regular expressions. 2We use in this phase the JUMAN morphological analyzing system (Kurohashi et al., 1994) for tagging Japanese texts and Brill's transformation-based tagger (Brill, 1992; Brill, 1994) for tagging English texts (JU- MAN: ftp://ftp.aist-nara.ac.jp/pub/nlp/tools/juman/ Brih ftp://ftp.cs.jhu.edu/pub/brill). We would like to thank all people concerned for providing us with the tools. 132 The initial state of the algorithm is a set of al- ready known anchors (sentence pairs). These are de- termined by article boundaries, section boundaries and paragraph boundaries. In the most general case, initial anchors are only the first and final sentence pairs of both texts as depicted in Figure 2. Pos- sible sentence correspondences are determined from the anchors. Intuitively, the number of possible cor- respondences for a sentence is small near anchors, while large between the anchors. In this phase, the most important point is that each set of possible sentence correspondences should include the correct correspondence. The main task of the system is to find anchors from the possible sentence correspondences by us- ing two kinds of word correspondences: statistical word correspondences and word correspondences as held in a bilingual dictionary 3. By using both cor- respondences, the sentence pair whose correspon- dences exceeds a pre-defined threshold is judged as an anchor. These newly found anchors make word correspondences more precise in the subsequent ses- sion. By repeating this anchor setting process with threshold reduction, sentence correspondences are gradually determined from confident pairs to non- confident pairs. The gradualism of the algorithm makes it robust because anchor-setting errors in the last stage of the algorithm have little effect on over- all performance. The output of the algorithm is the alignment result (a sequence of anchors) and word correspondences as by-products. English English Japanese Japanese Initial State [ Eaglish Figure 2: Alignment Process SAdding to the bilingual dictionary of general use, users can reuse their own dictionaries created in previous s e s s i o n s . 
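The anchor-setting process just described can be summarized as a small driver loop before turning to the details of Section 3. The sketch below is hypothetical rather than the authors' implementation: initial_asm and build_am stand in for the procedures of Sections 3.2.1 and 3.2.2, and the threshold triples in schedule correspond to the gradually relaxed parameters introduced there.

```python
def align(j_sents, e_sents, initial_asm, build_am, schedule):
    """Top-level anchor-setting loop (illustrative skeleton only).

    initial_asm(anchors, n_j, n_e) -> set of candidate sentence pairs (i, j)
    build_am(asm, t_low, i_low, anc) -> dict mapping (i, j) to a support score
    schedule : (t_low, i_low, anc) triples ordered from strict to relaxed
    """
    # in the most general case the initial anchors are the first and last pairs
    anchors = {(0, 0), (len(j_sents) - 1, len(e_sents) - 1)}
    for t_low, i_low, anc in schedule:            # gradual threshold reduction
        asm = initial_asm(anchors, len(j_sents), len(e_sents))
        am = build_am(asm, t_low, i_low, anc)
        for pair, support in am.items():
            if support >= anc:                    # enough corresponding words
                anchors.add(pair)                 # the pair becomes a new anchor
    return sorted(anchors)
```

Each pass scores the candidate pairs permitted by the current anchors and promotes sufficiently supported pairs to new anchors, so later, looser passes operate on a smaller and better constrained search space.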
3 Algorithms 3.1 Statistics Used In this section, we describe the statistics used to decide word correspondences. From many similar- ity metrics applicable to the task, we choose mu- tual information and t-score because the relaxation of parameters can be controlled in a sophisticated manner. Mutual information represents the similar- ity on the occurrence distribution and t-score rep- resents the confidence of the similarity. These two parameters permit more effective relaxation than the single parameter used in conventional methods(Kay and Roscheisen, 1993). Our basic data structure is the alignable sen- tence matrix (ASM) and the anchor matrix (AM). ASM represents possible sentence correspondences and consists of ones and zeros. A one in ASM in- dicates the intersection of the column and row con- stitutes a possible sentence correspondence. On the contrary, AM is introduced to represent how a sen- tence pair is supported by word correspondences. The i-j Element of AM indicates how many times the corresponding words appear in the i-j sentence pair. As alignment proceeds, the number of ones in ASM reduces, while the elements of AM increase. Let pi be a sentence set comprising the ith Japanese sentence and its possible English corre- spondences as depicted in Figure 3. For example, P2 is the set comprising Jsentence2, Esentence2 and Esentencej, which means Jsentence2 has the pos- sibility of aligning with Esentence2 or Esentencej. The pis can be directly derived from ASM. ex P2 P3 Jsentence I © Esentencel Jsentence 2 Esentence2 Jsentence 3 Esentence3 • • , ° • • , • ° • ° , ° ° , ° , , , • • • , PM Jsentence Esentence N Figure 3: Possible Sentence Correspondences We introduce the contingency matrix (Fung and Church, 1994) to evaluate the similarity of word oc- currences. Consider the contingency matrix shown Table 1, between Japanese word wjp n and English word Weng. The contingency matrix shows: (a) the number of pis in which both wjp, and w~ng were found, (b) the number of pis in which just w~.g was found, (c) the number of pis in which just wjp, was 133 found, (d) the number of pis in which neither word was found. Note here that pis overlap each other and w~,~ 9 may be double counted in the contingency matrix. We count each w~,,~ only once, even if it occurs more than twice in pls. ] Wjpn Weng I a b I c d Table 1: Contingency Matrix If Wjpn and weng are good translations of one an- other, a should be large, and b and c should be small. In contrast, if the two are not good translations of each other, a should be small, and b and c should be large. To make this argument more precise, we introduce mutual information: log prob(wjpn, Weng) prob( w p. )prob( won9 ) The probabilities are: a+c a+c prob(wjpn) - a T b + c W d - Y a+b a+b pr ob( w eng ) - a+b+c+d - M a a prob( wjpn , Weng ) -- a+b+c+d- M Unfortunately, mutual information is not reliable when the number of occurrences is small. Many words occur just once which weakens the statistics approach. In order to avoid this, we employ t-score, defined below, where M is the number of Japanese sentences. Insignificant mutual information values are filtered out by thresholding t-score. For exam- ple, t-scores above 1.65 are significant at the p > 0.95 confidence level. t ~ prob(wjpn, Weng) - prob(wjpn)prob(weng) ~/-~prob( wjpn , Weng ) 3.2 Basic Alignment Algorithm Our basic algorithm is an iterative adjustment of the Anchor Matrix (AM) using the Alignable Sentence Matrix (ASM). 
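Given the contingency counts a, b, c and d for a word pair, the two statistics above can be computed directly. The following sketch is an illustration only; the t-score is taken in its usual approximate form, with the variance term prob(w_jpn, w_eng)/M and M = a + b + c + d as above.

```python
import math

def mi_and_tscore(a, b, c, d):
    """Mutual information and t-score for a word pair from its contingency
    counts: a = both words found in the same p_i, b = only the English word,
    c = only the Japanese word, d = neither (so M = a + b + c + d)."""
    M = a + b + c + d
    p_jpn = (a + c) / M
    p_eng = (a + b) / M
    p_joint = a / M
    if p_joint == 0.0:
        return float("-inf"), 0.0      # the pair never co-occurs: no evidence
    mi = math.log(p_joint / (p_jpn * p_eng))
    t = (p_joint - p_jpn * p_eng) / math.sqrt(p_joint / M)
    return mi, t
```

With these two values, a pair is retained as a word correspondence whenever its mutual information and its t-score both exceed the thresholds in force during the current iteration.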
Given an ASM, mutual information and t-score are computed for all word pairs in possi- ble sentence correspondences. A word combination exceeding a predefined threshold is judged as a word correspondence. In order to find new anchors, we combine these statistical word correspondences with the word correspondences in a bilingual dictionary. Each element of AM, which represents a sentence pair, is updated by adding the number of word cor- respondences in the sentence pair. A sentence pair containing more than a predefined number of corre- sponding words is determined to be a new anchor. The detailed algorithm is as follows. 3.2.1 Constructing Initial ASM This step constructs the initial ASM. If the texts contain M and N sentences respectively, the ASM is an M x N matrix. First, we decide a set of an- chors using article boundaries, section boundaries and so on. In the most general case, initial anchors are the first and last sentences of both texts as de- picted in Figure 2. Next, possible sentence corre- spondences are generated. Intuitively, true corre- spondences are close to the diagonal linking the two anchors. We construct the initial ASM using such a function that pairs sentences near the middle of the two anchors with as many as O(~/~) (L is the number of sentences existing between two anchors) sentences in the other text because the maximum deviation can be stochastically modeled as O(~rL) (Kay and Roscheisen, 1993). The initial ASM has little effect on the alignment performance so long as it contains all correct sentence correspondences. 3.2.2 Constructing AM This step constructs an AM when given an ASM and a bilingual dictionary. Let thigh, tlow, Ihigh and Izow be two thresholds for t-score and two thresholds for mutual information, respectively. Let ANC be the minimal number of corresponding words for a sentence pair to be judged as an anchor. First, mutual information and t-score are com- puted for all word pairs appearing in a possible sen- tence correspondence in ASM. We use hereafter the word correspondences whose mutual information ex- ceeds Itow and whose t-score exceeds ttow. For all possible sentence correspondences Jsentencei and Esentencej (any pair in ASM), the following op- erations are applied in order. 1. If the following three conditions hold, add 3 to the i-j element of AM. (1) Jsentencei and Esentencej contain a bilingual dictionary word correspondence (wjpn and w,ng). (2) w~na does not occur in any other English sentence that is a possible translation of Jsentencei. (3) Jsentencei and Esentencej do not cross any sentence pair that has more than ANC word correspondences. 2. If the following three conditions hold, add 3 to the i-j element of AM. (1) Jsentencei and Esentencej contain a stochastic word corre- spondence (wjpn and w~na) that has mutual information Ihig h and whose t-score exceeds thigh. (2) w~g does not occur in any other English sentence that is a possible translation of Jsentencei. (3) Jsentencei and Esentencej do not cross any sentence pair that has more than ANC word correspondences. 3. If the following three conditions hold, add 1 to the i-j element of AM. (1) Jsentencei and Esentencej contain a stochastic word corre- spondence (wjp~ and we~g) that has mutual 134 information Itoto and whose t-score exceeds ttow. (2) w~na does not occur in any other English sentence that is a possible translation of Jsentencei. (3) Jsentencei and Esentencej does not cross any sentence pair that has more than ANC word correspondences. 
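The three operations can be read as a single pass over the candidate pairs in ASM and the content words they share. The sketch below is an approximation for illustration, not the original system: the data shapes are hypothetical (word lists per sentence, a set of dictionary pairs, a dictionary of statistical scores), and conditions (2) and (3) of each operation are reduced to two small helper predicates.

```python
def build_am(asm, j_words, e_words, dict_pairs, stat_pairs,
             i_high, t_high, i_low, t_low, anc):
    """One pass of the AM construction of 3.2.2 (illustrative sketch only).

    asm        : set of candidate sentence pairs (i, j)
    j_words[i] : content words of the i-th Japanese sentence
    e_words[j] : content words of the j-th English sentence
    dict_pairs : set of (w_jpn, w_eng) pairs from the bilingual dictionary
    stat_pairs : dict mapping (w_jpn, w_eng) to (mutual_information, t_score)
    """
    am = {pair: 0 for pair in asm}

    def unique_candidate(we, i, j):
        # condition (2): w_eng occurs in no other candidate translation of J_i
        return all(we not in e_words[j2] for (i2, j2) in asm
                   if i2 == i and j2 != j)

    def crosses_strong_pair(i, j):
        # condition (3), simplified: (i, j) must not cross a pair already
        # supported by at least anc corresponding words during this pass
        return any((i - i2) * (j - j2) < 0
                   for (i2, j2), score in am.items() if score >= anc)

    for (i, j) in sorted(asm):
        for wj in j_words[i]:
            for we in e_words[j]:
                if not unique_candidate(we, i, j) or crosses_strong_pair(i, j):
                    continue
                if (wj, we) in dict_pairs:
                    am[(i, j)] += 3                       # operation 1
                elif (wj, we) in stat_pairs:
                    mi, t = stat_pairs[(wj, we)]
                    if mi >= i_high and t >= t_high:
                        am[(i, j)] += 3                   # operation 2
                    elif mi >= i_low and t >= t_low:
                        am[(i, j)] += 1                   # operation 3
    return am
```

Because the dictionary test is tried before the statistical tests, a shared word supported by both sources is credited only once per occurrence, which is one possible reading of the operations being "applied in order".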
The first operation deals with word correspon- dences in the bilingual dictionary. The second op- eration deals with stochastic word correspondences which are highly confident and in many cases involve domain specific keywords. These word correspon- dences are given the value of 3. The third operation is introduced because the number of highly confi- dent corresponding words are too small to align all sentences. Although word correspondences acquired by this step are sometimes false translations of each other, they play a crucial role mainly in the final iterations phase. They are given one point. 3.2.3 Adjusting ASM This step adjusts ASM using the AM constructed by the above operations. The sentence pairs that have at least ANC word correspondences are deter- mined to be new anchors. By using the new set of anchors, a new ASM is constructed using the same method as used for initial ASM construction. Our algorithm implements a kind of relaxation by gradually reducing flow, Izow and ANC, which en- ables us to find confident sentence correspondences first. As a result, our method is more robust than dynamic programing techniques against the shortage of word-correspondence knowledge. 4 Experimental Results In this section, we report the result of experiments on aligning sentences in bilingual texts and on sta- tistically acquired word correspondences. The texts for the experiment varied in length and genres as summarized in Table 2. Texts 1 and 2 are editorials taken from 'Yomiuri Shinbun' and its English ver- sion 'Daily Yomiuri'. This data was distributed elec- trically via a WWW server 4. The first two texts clar- ify the systems's performance on shorter texts. Text 3 is an essay on economics taken from a quarterly publication of The International House of Japan. Text 4 is a scientific survey on brain science taken from 'Scientific American' and its Japanese version 'Nikkei Science '5. Jpn and Eng in Table2 represent the number of sentences in the Japanese and English texts respectively. The remaining table entries show 4The Yomiuri data can be obtained from www.yomiuri.co.jp. We would like to thank Yomiuri Shinbun Co. for permitting us to use the data. ~We obtained the data from paper version of the mag- azine by using OCR. We would like to thank Nikkei Sci- ence Co. for permitting us to use the data. categories of matches by manual alignment and in- dicate the difficulty of the task. Our evaluation focuses on much smaller texts than those used in other study(Brown and others, 1993; Gale and Church, 1993; Wu, 1994; Fung, 1995; Kay and Roscheisen, 1993) because our main targets are well-separated articles. However, our method will work on larger and noisy sets too, by using word anchors rather than using sentence boundaries as segment boundaries. In such a case, the method constructing initial ASM needs to be modified. We briefly report here the computation time of our method. Let us consider Text 4 as an exam- ple. After 15 seconds for full preprocessing, the first iteration took 25 seconds with tto~ = 1.55 and Izow = 1.8. The rest of the algorithm took 20 sec- onds in all. This experiment was performed on a SPARC Station 20 Model tIS21. From the result, we may safely say that our method can be applied to voluminous corpora. 4.1 Sentence Alignment Table 3 shows the performance on sentence align- ments for the texts in Table 2. Combined, Statis- tics and Dictionary represent the methods using both statistics and dictionary, only statistics and only dictionary, respectively. 
Both Combined and Dictionary use a CD-ROM version of a Japanese- English dictionary containing 40 thousands entries. Statistics repeats the iteration by using statistical corresponding words only. This is identical to Kay's method (Kay and Roscheisen, 1993) except for the statistics used. Dictionary performs the iteration of the algorithm by using corresponding words of the bilingual dictionary. This delineates the cover- age of the dictionary. The parameter setting used for each method was the optimum as determined by empirical tests. In Table 3, PRECISION delineates how many of the aligned pairs are correct and RECALL delineates how many of the manual alignments we included in systems output. Unlike conventional sentence- chunk based evaluations, our result is measured on the sentence-sentence basis. Let us consider a 3-1 matching. Although conventional evaluations can make only one error from the chunk, three errors may arise by our evaluation. Note that our evalua- tion is more strict than the conventional one, espe- cially for difficult texts, because they contain more complex matches. For Text 1 and Text 2, both the combined method and the dictionary method perform much better than the statistical method. This is ob- viously because statistics cannot capture word- correspondences in the case of short texts. Text 3 is easy to align in terms of both the com- plexity of the alignment and the vocabularies used. All methods performed well on this text. For Text 4, Combined and Statistics perform 135 1 Root out guns at all costs 26 28 24 2 0 0 2 Economy ]acing last hurdle 36 41 25 7 2 0 3 Pacific Asia in the Post-Cold-War World 134 124 114 0 10 0 4 Visualizing the Mind 225 214 186 6 15 1 Table 2: Test Texts II Combined Text PRECISION I RECALL 1 96.4% 96.3% 2 95.3% 93.1% 3 96.5% 97.1% 4 91.6% 93.8% Statistics PRECISION RECALL 65.0% 48.5% 61.3% 49.6% 87.3% 85.1% 82.2% 79.3% Dictionary PRECISION RECALL 89.3% 88.9% 87.2% 75.1% 86.3% 88.2% 74.3% 63.8% Table 3: Result of Sentence Alignment much better than Dictionary. The reason for this is that Text 4 concerns brain science and the bilingual dictionaries of general use did not contain domain specific keywords. On the other hand, the combined and statistical methods well capture the keywords as described in the next section. Note here that Combined performs better than Statistics in the case of longer texts, too. There is clearly a limitation in the amount of word correspondences that can be captured by statistics. In summary, the performance of Combined is better than either Statistics or Dictionary for all texts, regardless of text length and the domain. correspondences were not used. Although these word correspondences are very ef- fective for sentence alignment task, they are unsat- isfactory when regarded as a bilingual dictionary. For example, ' 7 7 Y ~' ~ ~ ~n.MR I ' in Japanese is the translation of 'functional MRI'. In Table 4, the correspondence of these compound nouns was cap- tured only in their constituent level. (Haruno et al., 1996) proposes an efficient n-gram based method to extract bilingual collocations from sentence aligned bilingual corpora. 5 Related Work 4.2 Word Correspondence In this section, we will demonstrate how well the pro- posed method captured domain specific word corre- spondences by using Text 4 as an example. Table 4 shows the word correspondences that have high mu- tual information. These are typical keywords con- cerning the non-invasive approach to human brain analysis. 
For example, NMR, MEG, PET, CT, MRI and functional MRI are devices for measuring brain activity from outside the head. These technical terms are the subjects of the text and are essential for alignment. However, none of them have their own entry in the bilingual dictionary, which would strongly obstruct the dictionary method. It is interesting to note that the correct Japanese translation of 'MEG' is ' ~{i~i~]'. The Japanese mor- phological analyzer we used does not contain an en- try for ' ~i~i[~' and split it into a sequence of three characters ' ~',' ~' and ' []'. Our system skillfully combined ' ~i' and ' []' with 'MEG', as a result of statistical acquisition. These word correspondences greatly improved the performance for Text 4. Thus, the statistical method well captures the domain spe- cific keywords that are not included in general-use bilingual dictionaries. The dictionary method would yield false alignments if statistically acquired word Sentence alignment between Japanese and English was first explored by Sato and Murao (Murao, 1991). They found (character or word) length-based ap- proaches were not appropriate due to the structural difference of the two languages. They devised a dynamic programming method based on the num- ber of corresponding words in a hand-crafted bilin- gual dictionary. Although some results were promis- ing, the method's performance strongly depended on the domain of the texts and the dictionary entries. (Utsuro et al., 1994) introduced a statistical post- processing step to tackle the problem. He first ap- plied Sato's method and extracted statistical word correspondences from the result of the first path. Sato's method was then reiterated using both the ac- quired word correspondences and the hand-crafted dictionary. His method involves the following two problems. First, unless the hand-crafted dictionary contains domain specific key words, the first path yields false alignment, which in turn leads to false statistical correspondences. Because it is impossible in general to cover key words in all domains, it is inevitable that statistics and hand-crafted bilingual dictionaries must be used at the same time. 136 [ English Mutual InFormation I Japanese ~)T.,t.~4"- NMB. PET ~5 N5 N5 recordin~ rea~ recordin~ 3.68 3.51 neuron 3.51 film 3.51 ~lucose 3.51 incrense 3.~1 MEG 3.51 resolution 3.43 electrical 3.43 group 3.39 3.39 electrical 3.39 ~:enerate 3.32 provide 3.33 MEG 3.33 noun 3.17 NMB. 3.17 functional 3.17 equipment 3.17 organ compound water radioactive PET spatial such metabolism verb scientist wnter water mappin| take university thousht compound label task radioactivity visual noun si|nal present I) 7"/L,~Z 4 .& time ~xY dan~6~e a.ut oradiogrsphy ability CT auditory mental MRI CT ,b MR ! 3.15 3.10 3.10 3.10 3.10 :}.10 3.10 3.06 3.04 2.9E 2.98 2.98 2.92 2.92 2.92 2.90 2,82 2,82 2,82 2.77 2.77 2.77 2.77 2.72 2.69 2.69 2.67 2.63 2.63 2.19 2.05 1.8 Table 4: Statistically Acquired Keywords The proposed method involves iterative alignment which simultaneously uses both statistics and a bilingual dictionary. Second, their score function is not reliable espe- cially when the number of corresponding words con- tained in corresponding sentences is small. Their method selects a matching type (such as 1-1, 1-2 and 2-1) according to the number of word correspon- dences per contents word. However, in many cases, there are a few word translations in a set of corre- sponding sentences. Thus, it is essential to decide sentence alignment on the sentence-sentence basis. 
Our iterative approach decides sentence alignment level by level by counting the word correspondences between a Japanese and an English sentence. (Fung and Church, 1994; Fung, 1995) proposed methods to find Chinese-English word correspon- dences without aligning parallel texts. Their mo- tivation is that structurally different languages such as Chinese-English and Japanese-English are diffi- cult to align in general. Their methods bypassed aligning sentences and directly acquired word cor- respondences. Although their approaches are ro- bust for noisy corpora and do not require any in- formation source, aligned sentences are necessary for higher level applications such as well-grained translation template acquisition (Matsumoto et as., 1993; Smadja et al., 1996; Haruno et al., 1996) and example-based translation (Sato and Nagao, 1990). Our method performs accurate alignment for such use by combining the detailed word correspon- dences: statistically acquired word correspondences and those from a bilingual dictionary of general use. (Church, 1993) proposed char_align that makes use of n-grams shared by two languages. This kind of matching techniques will be helpful in our dictionary-based approach in the following situation: Entries of a bilingual dictionary do not completely match the word in the corpus but partially do. By using the matching technique, we can make the most of the information compiled in bilingual dictionaries. 6 Conclusion We have described a text alignment method for structurally different languages. Our iterative method uses two kinds of word correspondences at the same time: word correspondences acquired by statistics and those of a bilingual dictionary. By combining these two types of word correspondences, the method covers both domain specific keywords not included in the dictionary and the infrequent words not detected by statistics. As a result, our method outperforms conventional methods for texts of different lengths and different domains. Acknowledgement We would like to thank Pascale Fung and Takehito Ut- suro for helpful comments and discussions. References Eric Brill. 1992. A simple rule-based part of speech tagger. In Proc. Third Con/erence on Apolied Natural Language Processing, pages 152-155. Eric Brill. 1994. Some advances in transformation-based part of speech tagging. In Proc. 1Pth AAAI, pages 722-727. P F Brown et al. 1991. Aligning sentences in parallel corpora. In the 29th Annual Meeting of ACL, pages 169-176. P F Brown et al. 1993. The mathematics of statisti- cal machine translation. Computational Linguistics, 19(2):263-311, June. 137 S F Chen. 1993. Aligning sentences in bilingual corpora using lexical information. In the 31st Annual Meeting of ACL, pages 9-16. K W Church. 1993. Char_align: A program for align- ing parallel texts at the character level. In the 31st Annual Meeting of ACL, pages 1-8. Ido Dagan and Ken Church. 1994. Termight: identifying and translating technical terminology. In Proc. Fourth Conference on Apolied Natural Language Processing, pages 34-40. Pascale Fung and K W Church. 1994. K-vec: A new approach for aligning parallel texts. In Proc. 15th COLING, pages 1096-1102. Pascale Fung. 1995. A pattern matching method for finding noun and proper nouns translations from noisy parallel corpora. In Proc. 33rd ACL, pages 236-243. W A Gale and K W Church. 1993. A program for align- ing sentences in bilingual corpora. Computational Linguistics, 19(1):75-102, March. Masahiko Haruno, Satoru Ikehara, and Takefumi Ya- mazaki. 1996. 
Learning Bilingual Collocations by Word-Level Sorting. In Proc. 16th COLING. Hiroyuki Kaji, Yuuko Kida, and Yasutsugu Morimoto. 1992. Learning translation templates from bilingual text. In Proc. 14th COLING, pages 672-678. Martin Kay and Martin Roscheisen. 1993. Text-translation alignment. Computational Linguistics, 19(1):121-142, March. Akira Kumano and Hideki Hirakawa. 1994. Building an MT dictionary from parallel texts based on linguistic and statistical information. In Proc. 15th COLING, pages 76-81. Julian Kupiec. 1993. An algorithm for finding noun phrase correspondences in bilingual corpora. In the 31st Annual Meeting of ACL, pages 17-22. Sadao Kurohashi, Toshihisa Nakamura, Yuji Matsumoto, and Makoto Nagao. 1994. Improvements of Japanese morphological analyzer JUMAN. In Proc. International Workshop on Sharable Natural Language Resources, pages 22-28. Yuji Matsumoto, Hiroyuki Ishimoto, and Takehito Utsuro. 1993. Structural matching of parallel texts. In the 31st Annual Meeting of ACL, pages 23-30. H. Murao. 1991. Studies on bilingual text alignment. Bachelor Thesis, Kyoto University (in Japanese). Satoshi Sato and Makoto Nagao. 1990. Toward memory-based translation. In Proc. 13th COLING, pages 247-252. Satoshi Sato. 1992. CTM: an example-based translation aid system. In Proc. 14th COLING, pages 1259-1263. Frank Smadja, Kathleen McKeown, and Vasileios Hatzivassiloglou. 1996. Translating collocations for bilingual lexicons: A statistical approach. Computational Linguistics, 22(1):1-38, March. Takehito Utsuro, Hiroshi Ikeda, Masaya Yamane, Yuji Matsumoto, and Makoto Nagao. 1994. Bilingual text matching using bilingual dictionary and statistics. In Proc. 15th COLING, pages 1076-1082. Dekai Wu. 1994. Aligning a parallel English-Chinese corpus statistically with lexical criteria. In the 32nd Annual Meeting of ACL, pages 80-87. | 1996 | 18 |
An Iterative Algorithm to Build Chinese Language Models Xiaoqiang Luo Center for Language and Speech Processing The Johns Hopkins University 3400 N. Charles St. Baltimore, MD21218, USA xiao@j hu. edu Salim Roukos IBM T. J. Watson Research Center Yorktown Heights, NY 10598, USA roukos©wat son. ibm. com Abstract ° • We present an iterative procedure to build a Chinese language model (LM). We seg- ment Chinese text into words based on a word-based Chinese language model. How- ever, the construction of a Chinese LM it- self requires word boundaries. To get out of the chicken-and-egg problem, we propose an iterative procedure that alternates two operations: segmenting text into words and building an LM. Starting with an initial segmented corpus and an LM based upon it, we use a Viterbi-liek algorithm to seg- ment another set of data. Then, we build an LM based on the second set and use the resulting LM to segment again the first cor- pus. The alternating procedure provides a self-organized way for the segmenter to de- tect automatically unseen words and cor- rect segmentation errors. Our prelimi- nary experiment shows that the alternat- ing procedure not only improves the accu- racy of our segmentation, but discovers un- seen words surprisingly well. The resulting word-based LM has a perplexity of 188 for a general Chinese corpus. 1 Introduction In statistical speech recognition(Bahl et al., 1983), it is necessary to build a language model(LM) for as- signing probabilities to hypothesized sentences. The LM is usually built by collecting statistics of words over a large set of text data. While doing so is straightforward for English, it is not trivial to collect statistics for Chinese words since word boundaries are not marked in written Chinese text. Chinese is a morphosyllabic language (DeFrancis, 1984) in that almost all Chinese characters represent a single syllable and most Chinese characters are also mor- phemes. Since a word can be multi-syllabic, it is gen- erally non-trivial to segment a Chinese sentence into words(Wu and Tseng, 1993). Since segmentation is a fundamental problem in Chinese information pro- cessing, there is a large literature to deal with the problem. Recent work includes (Sproat et al., 1994) and (Wang et al., 1992). In this paper, we adopt a statistical approach to segment Chinese text based on an LM because of its autonomous nature and its capability to handle unseen words. As far as speech recognition is concerned, what is needed is a model to assign a probability to a string of characters. One may argue that we could bypass the segmentation problem by building a character- based LM. However, we have a strong belief that a word-based LM would be better than a character- based 1 one. In addition to speech recognition, the use of word based models would have value in infor- mation retrieval and other language processing ap- plications. If word boundaries are given, all established tech- niques can be exploited to construct an LM (Jelinek et al., 1992) just as is done for English. Therefore, segmentation is a key issue in building the Chinese LM. In this paper, we propose a segmentation al- gorithm based on an LM. Since building an LM it- self needs word boundaries, this is a chicken-and-egg problem. To get out of this, we propose an iterative procedure that alternates between the segmentation of Chinese text and the construction of the LM. 
Our preliminary experiments show that the iterative pro- cedure is able to improve the segmentation accuracy and more importantly, it can detect unseen words automatically. In section 2, the Viterbi-like segmentation algo- rithm based on a LM is described. Then in sec- tion section:iter-proc we discuss the alternating pro- cedure of segmentation and building Chinese LMs. We test the segmentation algorithm and the alter- nating procedure and the results are reported in sec- I A character-based trigram model has a perplexity of 46 per character or 462 per word (a Chinese word has an average length of 2 characters), while a word-based trigram model has a perplexity 188 on the same set of data. While the comparison would be fairer using a 5- gram character model, that the word model would have a lower perplexity as long as the coverage is high. 139 tion 4. Finally, the work is summarized in section 5. 2 segmentation based on LM In this section, we assume there is a word-based Chi- nese LM at our disposal so that we are able to com- pute the probability of a sentence (with word bound- aries). We use a Viterbi-like segmentation algorithm based on the LM to segment texts. Denote a sentence S by C1C~.. "C,,-1Cn, where each Ci (1 < i < n } is a Chinese character. To seg- ment a sentence into words is to group these char- acters into words, i.e. S = C:C2...C,-:C, (1) = (c:...c,,,)(c,,,+:...c,,,) (2) • . . (3) = w:w2...w,, (4) where xk is the index of the last character in k ~h word wk, i,e wk = Cxk_l+:'"Cxk(k = 1,2,-.-,m), and of course, z0 = 0, z,~ = n. Note that a segmentation of the sentence S can be uniquely represented by an integer sequence z:,.- -, zrn, so we will denote a segmentation by its corresponding integer sequence thereafter. Let G(S) = {(=:... : <_ <_... _< _< (5) be the set of all possible segmentations of sentence S. Suppose a word-based LM is given, then for a segmentation g(S) -" (z:...xm) e G(S), we can assign a score to g(S) by L(g(S)) = logPg(w:'"Wm) (6) m = ~--~logPa(wi[hi) (7) /=1 where w i = C=~_,+:...C~(j = 1,2,-..,m), and hi is understood as the history words w:...wi-t. In this paper the trigram model(Jelinek et al., 1992) is used and therefore hi = wi-2wi-: Among all possible segmentations, we pick the one g* with the highest score as our result. That is, g* = arg g~Ga~S) L(g(S)) (8) = arg max logPg(wl...wm) (9) gea(S) Note the score depends on segmentation g and this is emphasized by the subscript in (9). The optimal segmentation g* can be obtained by dynamic pro- gramming. With a slight abuse of notation, let L(k) be the max accumulated score for the first k charac- ters. L(k) is defined for k = 1, 2,..., n with L(1) = 0 and L(g*) = L(n). Given {L(i) : 1 < i < k-l}, L(k) can be computed recursively as follows: L(k)-- max [L(i)-t-logP(Ci+:...C~]hi)] (10) :<i_<k-: where hi is the history words ended with the i th character Ci. At the end of the recursion, we need to trace back to find the segmentation points. There- fore, it's necessary to record the segmentation points in (10). Let p(k) be the index of the last character in the preceding word. Then V(k) = arg :<sm.<~x :[L(i ) + log P(Ci+:... Ck ]hi)] (11) that is, Cp(k)+: "" • Ck comprises the last word of the optimal segmentation up to the k 'h character. A typical example of a six-character sentence is shown in table 1. Since p(6) = 4, we know the last word in the optimal segmentation is C5C6. Since p(4) = 3, the second last word is C4. So on and so forth. 
The optimal segmentation for this sentence is (61)(C2C3)(C4)(65C6) • Table 1: A segmentation example chars I C: C2 C3 C4 C5 C6 k I 1 2 3 4 5 6 p(k) 0 1 1 3 3 4 The searches in (10) and (11) are in general time- consuming. Since long words are very rare in Chi- nese(94% words are with three or less characters (Wu and Tseng, 1993)), it won't hurt at all to limit the search space in (10) and (11) by putting an up- per bound(say, 10) to the length of the exploring word, i.e, impose the constraint i >_ ma¢l, k - d in (10) and (11), where d is the upper bound of Chinese word length. This will speed the dynamic program- ming significantly for long sentences. It is worth of pointing out that the algorithm in (10) and (11) could pick an unseen word(i.e, a word not included in the vocabulary on which the LM is built on) in the optimal segmentation provided LM assigns proper probabilities to unseen words. This is the beauty of the algorithm that it is able to handle unseen words automatically. 3 Iterative procedure to build LM In the previous section, we assumed there exists a Chinese word LM at our disposal. However, this is not true in reality. In this section, we discuss an it- erative procedure that builds LM and automatically appends the unseen words to the current vocabulary. The procedure first splits the data into two parts, set T1 and T2. We start from an initial segmenta- tion of the set T1. This can be done, for instance, by a simple greedy algorithm described in (Sproat et al., 1994). With the segmented T1, we construct a LMi on it. Then we segment the set T2 by using the LMi and the algorithm described in section 2. At the same time, we keep a counter for each unseen word in optimal segmentations and increment the counter whenever its associated word appears in an 140 optimal segmentation. This gives us a measure to tell whether an unseen word is an accidental charac- ter string or a real word not included in our vocab- ulary. The higher a counter is, the more likely it is a word. After segmenting the set T2, we add to our vocabulary all unseen words with its counter greater than a threshold e. Then we use the augmented vocabulary and construct another LMi+I using the segmented T2. The pattern is clear now: LMi+I is used to segment the set T1 again and the vocabulary is further augmented. To be more precise, the procedure can be written in pseudo code as follows. Step 0: Initially segment the set T1. Construct an LM LMo with an initial vocabu- lary V0. set i=1. Step 1: Let j=i mod 2; For each sentence S in the set Tj, do 1.1 segment it using LMi-1. 1.2 for each unseen word in the optimal seg- mentation, increment its counter by the number of times it appears in the optimal segmentation. Step 2: Let A=the set of unseen words with counter greater than e. set Vi = ~-1 U A. Construct another LMi using the segmented set and the vocabulary ~. Step 3: i--i+l and goto step 1. Unseen words, most of which are proper nouns, pose a serious problem to Chinese text segmenta- tion. In (Sproat et al., 1994) a class based model was proposed to identify personal names. In (Wang et al., 1992), a title driven method was used to identify personal names. The iterative procedure proposed here provides a self-organized way to detect unseen words, including proper nouns. The advantage is that it needs little human intervention. The proce- dure provides a chance for us to correct segmenting errors. 
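Before the experiments, the recursion of equations (10) and (11), together with the word-length bound d, can be made concrete. The following sketch is an illustration rather than the original implementation: the language model is abstracted as a log_prob callable, and the history handed to it is simply the best word sequence found so far for the prefix, which matches the Viterbi-like reading of (10).

```python
import math

def segment(sentence, log_prob, max_word_len=10):
    """Viterbi-like segmentation of eqs. (10)-(11) (illustrative sketch).

    sentence     : the character string C_1 ... C_n
    log_prob(word, history) -> log P(word | history); history is the list of
                   previously chosen words on the best path
    max_word_len : upper bound d on word length used to prune the search
    """
    n = len(sentence)
    L = [0.0] + [-math.inf] * n       # L[k]: best score over the first k characters
    p = [0] * (n + 1)                 # p[k]: start index of the last word (eq. 11)
    words_at = {0: []}                # best word sequence ending at each position

    for k in range(1, n + 1):
        for i in range(max(0, k - max_word_len), k):   # limit word length to d
            if L[i] == -math.inf:
                continue
            word = sentence[i:k]
            score = L[i] + log_prob(word, words_at[i])  # eq. (10)
            if score > L[k]:
                L[k], p[k] = score, i
        words_at[k] = words_at[p[k]] + [sentence[p[k]:k]]

    # trace back through the recorded segmentation points
    words, k = [], n
    while k > 0:
        words.append(sentence[p[k]:k])
        k = p[k]
    return list(reversed(words))
```

Tracing back through p reproduces the p(k) row of Table 1, giving (C1)(C2C3)(C4)(C5C6) for the six-character example; the iterative procedure of Steps 0-3 then simply calls such a segmenter inside its outer loop while the vocabulary and LM are updated.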
4 Experiments and Evaluation 4.1 Segmentation Accuracy Our first attempt is to see how accurate the segmen- tation algorithm proposed in section 2 is. To this end, we split the whole data set ~ into two parts, half for building LMs and half reserved for testing. The trigram model used in this experiment is the stan- dard deleted interpolation model described in (Je- linek et al., 1992) with a vocabulary of 20K words. Since we lack an objective criterion to measure the accuracy of a segmentation system, we ask three ~The corpus has about 5 million characters and is coarsely pre-segmented. native speakers to segment manually 100 sentences picked randomly from the test set and compare them with segmentations by machine. The result is summed in table 2, where ORG stands for the orig- inal segmentation, P1, P2 and P3 for three human subjects, and TRI and UNI stand for the segmen- tations generated by trigram LM and unigram LM respectively. The number reported here is the arith- metic average of recall and precision, as was used in n_~ (Sproat et al., 1994), i.e., 1/2(~-~ + n2), where nc is the number of common words in both segmenta- tions, nl and n2 are the number of words in each of the segmentations. Table 2: Segmentation Accuracy ORG P1 P2 ORG P1 85.9 P2 79.1 90.9 P3 87.4 85.7 82.2 P3 TRI 94.2 85.3 80.1 85.6 UNI 91.2 87.4 82.2 85.7 We can make a few remarks about the result in table 2. First of all, it is interesting to note that the agreement of segmentations among human subjects is roughly at the same level of that be- tween human subjects and machine. This confirms what reported in (Sproat et al., 1994). The major disagreement for human subjects comes from com- pound words, phrases and suffices. Since we don't give any specific instructions to human subjects, one of them tends to group consistently phrases as words because he was implicitly using seman- tics as his segmentation criterion. For example, he segments thesentence 3 dao4 jial li2 chil dun4 fan4(see table 3) as two words dao4 j±al l±2(go home) and chil dun4 :fem4(have a meal) because the two "words" are clearly two semantic units. The other two subjects and machine segment it as dao4 / jial li2/ chil/ dtm4 / fern4. Chinese has very limited morphology (Spencer, 1991) in that most grammatical concepts are con- veyed by separate words and not by morphological processes. The limited morphology includes some ending morphemes to represent tenses of verbs, and this is another source of disagreement. For exam- ple, for the partial sentence zuo4 were2 le, where le functions as labeling the verb zuo4 wa.u2 as "per- fect" tense, some subjects tend to segment it as two words zuo4 ~an2/ le while the other treat it as one single word. Second, the agreement of each of the subjects with either the original, trigram, or unigram segmenta- tion is quite high (see columns 2, 6, and 7 in Table 2) and appears to be specific to the subject. 3Here we use Pin Yin followed by its tone to represent a character. 141 Third, it seems puzzling that the trigram LM agrees with the original segmentation better than a unigram model, but gives a worse result when com- pared with manual segmentations. However, since the LMs are trained using the presegmented data, the trigram model tends to keep the original segmen- tation because it takes the preceding two words into account while the unigram model is less restricted to deviate from the original segmentation. 
In other words, if trained with "cleanly" segmented data, a trigram model is more likely to produce a better seg- mentation since it tends to preserve the nature of training data. 4.2 Experiment of the iterative procedure In addition to the 5 million characters of segmented text, we had unsegmented data from various sources reaching about 13 million characters. We applied our iterative algorithm to that corpus. Table 4 shows the figure of merit of the resulting segmentation of the 100 sentence test set described earlier. After one iteration, the agreement with the original segmentation decreased by 3 percentage points, while the agreement with the human segmen- tation increased by less than one percentage point. We ran our computation intensive procedure for one iteration only. The results indicate that the impact on segmentation accuracy would be small. However, the new unsegmented corpus is a good source of au- tomatically discovered words. A 20 examples picked randomly from about 1500 unseen words are shown in Table 5. 16 of them are reasonably good words and are listed with their translated meanings. The problematic words are marked with "?". 4.3 Perplexity of the language model After each segmentation, an interpolated trigram model is built, and an independent test set with 2.5 million characters is segmented and then used to measure the quality of the model. We got a per- plexity 188 for a vocabulary of 80K words, and the alternating procedure has little impact on the per- plexity. This can be explained by the fact that the change of segmentation is very little ( which is re- flected in table reftab:accuracy-iter ) and the addi- tion of unseen words(1.5K) to the vocabulary is also too little to affect the overall perplexity. The merit of the alternating procedure is probably its ability to detect unseen words. 5 Conclusion In this paper, we present an iterative procedure to build Chinese language model(LM). We segment Chinese text into words based on a word-based Chi- nese language model. However, the construction of a Chinese LM itself requires word boundaries. To get out of the chicken-egg problem, we propose an iterative procedure that alternates two operations: segmenting text into words and building an LM. Starting with an initial segmented corpus and an LM based upon it, we use Viterbi-like algorithm to segment another set of data. Then we build an LM based on the second set and use the LM to seg- ment again the first corpus. The alternating proce- dure provides a self-organized way for the segmenter to detect automatically unseen words and correct segmentation errors. Our preliminary experiment shows that the alternating procedure not only im- proves the accuracy of our segmentation, but dis- covers unseen words surprisingly well. We get a per- plexity 188 for a general Chinese corpus with 2.5 million characters 4 6 Acknowledgment The first author would like to thank various mem- bers of the Human Language technologies Depart- ment at the IBM T.J Watson center for their en- couragement and helpful advice. Special thanks go to Dr. Martin Franz for providing continuous help in using the IBM language model tools. The authors would also thank the comments and insight of two anonymous reviewers which help improve the final draft. References Richard Sproat, Chilin Shih, William Gale and Nancy Chang. 1994. A stochastic finite-state word segmentation algorithm for Chinese. In Pro- ceedings of A GL 'Y~ , pages 66-73 Zimin Wu and Gwyneth Tseng 1993. 
5 Conclusion

In this paper, we present an iterative procedure for building a Chinese language model (LM). We segment Chinese text into words based on a word-based Chinese language model. However, the construction of a Chinese LM itself requires word boundaries. To get out of this chicken-and-egg problem, we propose an iterative procedure that alternates two operations: segmenting text into words and building an LM. Starting with an initial segmented corpus and an LM based upon it, we use a Viterbi-like algorithm to segment another set of data. Then we build an LM based on the second set and use that LM to re-segment the first corpus. The alternating procedure provides a self-organized way for the segmenter to automatically detect unseen words and correct segmentation errors. Our preliminary experiment shows that the alternating procedure not only improves the accuracy of our segmentation, but also discovers unseen words surprisingly well. We obtain a perplexity of 188 for a general Chinese corpus of 2.5 million characters (unfortunately, we could not find a report of Chinese perplexity for comparison in the published literature concerning Mandarin speech recognition).

6 Acknowledgments

The first author would like to thank various members of the Human Language Technologies Department at the IBM T. J. Watson Research Center for their encouragement and helpful advice. Special thanks go to Dr. Martin Franz for providing continuous help in using the IBM language model tools. The authors would also like to thank the two anonymous reviewers for their comments and insight, which helped improve the final draft.

References

Richard Sproat, Chilin Shih, William Gale and Nancy Chang. 1994. A stochastic finite-state word segmentation algorithm for Chinese. In Proceedings of ACL '94, pages 66-73.

Zimin Wu and Gwyneth Tseng. 1993. Chinese text segmentation for text retrieval: Achievements and problems. Journal of the American Society for Information Science, 44(9):532-542.

John DeFrancis. 1984. The Chinese Language. University of Hawaii Press, Honolulu.

Frederick Jelinek, Robert L. Mercer and Salim Roukos. 1992. Principles of lexical language modeling for speech recognition. In S. Furui and M. M. Sondhi, editors, Advances in Speech Signal Processing, pages 651-699. Marcel Dekker Inc.

L. R. Bahl, Fred Jelinek and R. L. Mercer. 1983. A maximum likelihood approach to continuous speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 5(2):179-190.

Liang-Jyh Wang, Wei-Chuan Li, and Chao-Huang Chang. 1992. Recognizing unregistered names for Mandarin word identification. In Proceedings of COLING-92, pages 1239-1243.

Andrew Spencer. 1991. Morphological Theory: An Introduction to Word Structure in Generative Grammar, pages 38-39. Basil Blackwell, Oxford, UK and Cambridge, Mass., USA.

Table 3: Segmentation of phrases

Chinese:  dao4 jia1 li2 chi1 dun4 fan4
Meaning:  go home, eat a meal

Table 4: Segmentation accuracy after one iteration

        TR0     TR1
ORG     .920    .890
P1      .863    .877
P2      .817    .832
P3      .850    .849

Table 5: Examples of unseen words

PinYin                    Meaning
kui2 er2                  last name of former US vice president
he2 shi4 lu4 yin1 dai4    cassette of audio tape
shou2 dao3                (abbr) protect (the) island
ren4 zhong4               first name or part of a phrase
ji4 jian3                 (abbr) discipline monitoring
zi4 hai4                  ?
shuang1 bao3              double guarantee
ji4 dong1                 (abbr) Eastern He Bei province
zi3 jiao1                 purple glue
xiao1 long2               personal name
shi2 li4                  ?
bo4 hai3 du4              ?
shan1 shang1 ban4         (abbr) commercial oriented
liu4 hai4                 six (types of) harms
sa4 he4 le4               translated name
kuai4 xun4                fast news
cheng4 jing3              train cop
huang2 du2                yellow poison
ba3 lian2                 ?
he2 dao3                  a (biological) jargon

| 1996 | 19 |
A Model-Theoretic Framework for Theories of Syntax James Rogers Institute for Research in Cognitive Science University of Pennsylvania Suite 400C, 3401 Walnut Street Philadelphia, PA 19104 j rogers©linc, cis. upenn, edu Abstract A natural next step in the evolution of constraint-based grammar formalisms from rewriting formalisms is to abstract fully away from the details of the grammar mechanism--to express syntactic theories purely in terms of the properties of the class of structures they license. By fo- cusing on the structural properties of lan- guages rather than on mechanisms for gen- erating or checking structures that exhibit those properties, this model-theoretic ap- proach can offer simpler and significantly clearer expression of theories and can po- tentially provide a uniform formalization, allowing disparate theories to be compared on the basis of those properties. We dis- cuss L2,p, a monadic second-order logical framework for such an approach to syn- tax that has the distinctive virtue of be- ing superficially expressive--supporting di- rect statement of most linguistically sig- nificant syntactic properties--but having well-defined strong generative capacity-- languages are definable in L2K,p iff they are strongly context-free. We draw examples from the realms of GPSG and GB. 1 Introduction Generative grammar and formal language theory share a common origin in a procedural notion of grammars: the grammar formalism provides a gen- eral mechanism for recognizing or generating lan- guages while the grammar itself specializes that mechanism for a specific language. At least ini- tially there was hope that this relationship would be informative for linguistics, that by character- izing the natural languages in terms of language- theoretic complexity one would gain insight into the structural regularities of those languages. More- over, the fact that language-theoretic complexity classes have dual automata-theoretic characteriza- tions offered the prospect that such results might provide abstract models of the human language fac- ulty, thereby not just identifying these regularities, but actually accounting for them. Over time, the two disciplines have gradually be- come estranged, principally due to a realization that the structural properties of languages that charac- terize natural languages may well not be those that can be distinguished by existing language-theoretic complexity classes. Thus the insights offered by for- mal language theory might actually be misleading in guiding theories of syntax. As a result, the em- phasis in generative grammar has turned from for- malisms with restricted generative capacity to those that support more natural expression of the observed regularities of languages. While a variety of dis- tinct approaches have developed, most of them can be characterized as constrain~ based--the formalism (or formal framework) provides a class of structures and a means of precisely stating constraints on their form, the linguistic theory is then expressed as a sys- tem of constraints (or principles) that characterize the class of well-formed analyses of the strings in the language. 
1 As the study of the formal properties of classes of structures defined in such a way falls within domain of Model Theory, it's not surprising that treatments of the meaning of these systems of constraints are typically couched in terms of formal logic (Kasper and Rounds, 1986; Moshier and Rounds, 1987; Kasper and Rounds, 1990; Gazdar et al., 1988; John- son, 1988; Smolka, 1989; Dawar and Vijay-Shanker, 1990; Carpenter, 1992; Keller, 1993; Rogers and Vijay-Shanker, 1994). While this provides a model-theoretic interpre- tation of the systems of constraints produced by these formalisms, those systems are typi- cally built by derivational processes that employ extra-logical mechanisms to combine constraints. More recently, it has become clear that in many cases these mechanisms can be replaced with or- dinary logical operations. (See, for instance: 1This notion of constraint-based includes not only the obvious formalisms, but the formal framework of GB as well. 10 Johnson (1989), Stabler, Jr. (1992), Cornell (1992), Blackburn, Gardent, and Meyer-Viol (1993), Blackburn and Meyer-Viol (1994), Keller (1993), Rogers (1994), Kracht (1995), and, anticipating all of these, Johnson and Postal (1980).) This ap- proach abandons the notions of grammar mecha- nism and derivation in favor of defining languages as classes of more or less ordinary mathematical struc- tures axiomatized by sets of more or less ordinary logical formulae. A grammatical theory expressed within such a framework is just the set of logical con- sequences of those axioms. This step completes the detachment of generative grammar from its proce- dural roots. Grammars, in this approach, are purely declarative definitions of a class of structures, com- pletely independent of mechanisms to generate or check them. While it is unlikely that every theory of syntax with an explicit derivational component can be captured in this way, ~ for those that can the logical re-interpretation frequently offers a simpli- fied statement of the theory and clarifies its conse- quences. But the accompanying loss of language-theoretic complexity results is unfortunate. While such results may not be useful in guiding syntactic theory, they are not irrelevant. The nature of language-theoretic complexity hierarchies is to classify languages on the basis of their structural properties. The languages in a class, for instance, will typically exhibit cer- tain closure properties (e.g., pumping lemmas) and the classes themselves admit normal forms (e.g., rep- resentation theorems). While the linguistic signifi- cance of individual results of this sort is open to de- bate, they at least loosely parallel typical linguistic concerns: closure properties state regularities that are exhibited by the languages in a class, normal forms express generalizations about their structure. So while these may not be the right results, they are not entirely the wrong kind of results. More- over, since these classifications are based on struc- tural properties and the structural properties of nat- ural language can be studied more or less directly, there is a reasonable expectation of finding empiri- cal evidence falsifying a hypothesis about language- theoretic complexity of natural languages if such ev- idence exists. Finally, the fact that these complexity classes have automata-theoretic characterizations means that re- sults concerning the complexity of natural languages will have implications for the nature of the human language faculty. 
These automata-theoretic charac- terizations determine, along one axis, the types of resources required to generate or recognize the lan- 2Whether there are theories that cannot be captured, at least without explicitly encoding the derivations, is an open question of considerable theoretical interest, as is the question of what empirical consequences such an essential dynamic character might have. 11 guages in a class. The regular languages, for in- stance, can be characterized by finite-state (string) automata--these languages can be processed using a fixed amount of memory. The context-sensitive languages, on the other had, can be characterized by linear-bounded automata--they can be processed using an amount of memory proportional to the length of the input. The context-free languages are probably best characterized by finite-state tree automata--these correspond to recognition by a col- lection of processes, each with a fixed amount of memory, where the number of processes is linear in the length of the input and all communication be- tween processes is completed at the time they are spawned. As a result, while these results do not necessarily offer abstract models of the human lan- guage faculty (since the complexity results do not claim to characterize the human languages, just to classify them), they do offer lower bounds on cer- tain abstract properties of that faculty. In this way, generative grammar in concert with formal language theory offers insight into a deep aspect of human cognition--syntactic processing--on the basis of ob- servable behavior--the structural properties of hu- man languages. In this paper we discuss an approach to defining theories of syntax based on L 2 (Rogers, 1994), a K,P monadic second-order language that has well-defined generative capacity: sets of finite trees are defin- able within L 2 iff they are strongly context-free K,P in a particular sense. While originally introduced as a means of establishing language-theoretic com- plexity results for constraint-based theories, this lan- guage has much to recommend it as a general frame- work for theories of syntax in its own right. Be- ing a monadic second-order language it can capture the (pure) modal languages of much of the exist- ing model-theoretic syntax literature directly; hav- ing a signature based on the traditional linguistic relations of domination, immediate domination, lin- ear precedence, etc. it can express most linguistic principles transparently; and having a clear charac- terization in terms of generative capacity, it serves to re-establish the close connection between genera- tive grammar and formal language theory that was lost in the move away from phrase-structure gram- mars. Thus, with this framework we get both the advantages of the model-theoretic approach with re- spect to naturalness and clarity in expressing linguis- tic principles and the advantages of the grammar- based approach with respect to language-theoretic complexity results. We look, in particular, at the definitions of a single aspect of each of GPSG and GB. The first of these, Feature Specification Defaults in GPSG, are widely assumed to have an inherently dynamic character. In addition to being purely declarative, our reformal- ization is considerably simplified wrt the definition in Gasdar et al. (1985), 3 and does not share its mis- leading dynamic flavor. 4 We offer this as an example of how re-interpretations of this sort can inform the original theory. 
In the second example we sketch a definition of chains in GB. This, again, captures a presumably dynamic aspect of the original theory in a static way. Here, though, the main significance of the definition is that it forms a component of a full- scale treatment of a GB theory of English S- and D-Structure within L 2 This full definition estab- K,P" lishes that the theory we capture licenses a strongly context-free language. More importantly, by exam- ining the limitations of this definition of chains, and in particular the way it fails for examples of non- context-free constructions, we develop a character- ization of the context-free languages that is quite natural in the realm of GB. This suggests that the apparent mismatch between formal language theory and natural languages may well have more to do with the unnaturalness of the traditional diagnostics than a lack of relevance of the underlying structural prop- erties. Finally, while GB and GPSG are fundamentally distinct, even antagonistic, approaches to syntax, their translation into the model-theoretic terms of L 2 allows us to explore the similarities between K,P the theories they express as well as to delineate ac- tual distinctions between them. We look briefly at two of these issues. Together these examples are chosen to illustrate the main strengths of the model-theoretic approach, at least as embodied in L2K,p, as a framework for studying theories of syntax: a focus on structural properties themselves, rather than on mechanisms for specifying them or for generating or checking structures that exhibit them, and a language that is expressive enough to state most linguistically sig- nificant properties in a natural way, but which is restricted enough to have well-defined strong gener- ative capacity. 2 L~,p--The Monadic Second-Order Language of Trees L2K,p is the monadic second-order language over the signature including a set of individual constants (K), a set of monadic predicates (P), and binary predicates for immediate domination (,~), domina- tion (,~*), linear precedence (-~) and equality (..~). The predicates in P can be understood both as picking out particular subsets of the tree and as (non-exclusive) labels or features decorating the tree. Models for the language are labeled tree do- 3We will refer to Gazdar et al. (1985) as GKP&S 4We should note that the definition of FSDs in GKP&S is, in fact, declarative although this is obscured by the fact that it is couched in terms of an algorithm for checking models. mains (Gorn, 1967) with the natural interpretation of the binary predicates. In Rogers (1994) we have shown that this language is equivalent in descrip- tive power to SwS--the monadic second-order the- ory of the complete infinitely branching tree--in the sense that sets of trees are definable in SwS iff they are definable in L 2 This places it within a hi- K,P" erarchy of results relating language-theoretic com- plexity classes to the descriptive complexity of their models: the sets of strings definable in S1S are ex- actly the regular sets (Biichi, 1960), the sets of fi- nite trees definable in SnS, for finite n, are the rec- ognizable sets (roughly the sets of derivation trees of CFGs) (Doner, 1970), and, it can be shown, the sets of finite trees definable in SwS are those gener- ated by generalized CFGs in which regular ,expres- sions may occur on the rhs of rewrite rules (Rogers, 1996b). 
5 Consequently, languages are definable in L2K,p iff they are strongly context-free in the mildly generalized sense of GPSG grammars. In restricting ourselves to the language of L 2 K,P we are restricting ourselves to reasoning in terms of just the predicates of its signature. We can expand this by defining new predicates, even higher-order predicates that express, for instance, properties of or relations between sets, and in doing so we can use monadic predicates and individual constants freely since we can interpret these as existentially bound variables. But the fundamental restriction of L 2 K,P is that all predicates other than monadic first-order predicates must be explicitly defined, that is, their definitions must resolve, via syntactic substitution, 2 into formulae involving only the signature of LK, P. 3 Feature Specification Defaults in GPSG We now turn to our first application--the def- inition of Feature Specification Defaults (FSDs) in GPSG. 6 Since GPSG is presumed to license (roughly) context-free languages, we are not con- cerned here with establishing language-theoretic complexity but rather with clarifying the linguis- tic theory expressed by GPSG. FSDs specify con- ditions on feature values that must hold at a node in a licensed tree unless they are overridden by some other component of the grammar; in particular, un- less they are incompatible with either a feature spec- ified by the ID rule licensing the node (inherited fea- tures) or a feature required by one of the agreement principles--the Foot Feature Principle (FFP), Head Feature Convention (HFC), or Control Agreement Principle (CAP). It is the fact that the default holds 5There is reason to believe that this hierarchy can be extended to encompass, at least, a variety of mildly context-sensitive languages as well. 6A more complete treatment of GPSG in L 2 I¢.,P can be found in Rogers (1996c). 12 just in case it is incompatible with these other com- ponents that gives FSDs their dynamic flavor. Note, though, in contrast to typical applications of default logics, a GPSG grammar is not an evolving theory. The exceptions to the defaults are fully determined when the grammar is written. If we ignore for the moment the effect of the agreement principles, the defaults are roughly the converse of the ID rules: a non-default feature occurs iff it is licensed by an ID rule. It is easy to capture ID rules in L 2 For instance K,P" the rule: VP , HI5], NP, NP can be expressed: IDh(x, yl, Y2, Y3) -= Children(x, Yl, Y2, Y3) A VP(x)A H(yl) A (SUBCAT, 5)(Yl) A NP(y2) A NP(y3), where Children(z, Yl, Y~, Y3) holds iff the set of nodes that are children of x are just the Yi and VP, (SUBCAT, 5), etc. are all members of p.7 A se- quence of nodes will satisfy ID5 iff they form a local tree that, in the terminology of GKP&S, is induced by the corresponding ID rule. Using such encodings we can define a predicate Free/(x) which is true at a node x iff the feature f is compatible with the inherited features of x. The agreement principles require pairs of nodes occurring in certain configurations in local trees to agree on certain classes of features. Thus these prin- ciples do not introduce features into the trees, but rather propagate features from one node to another, possibly in many steps. Consequently, these prin- ciples cannot override FSDs by themselves; rather every violation of a default must be licensed by an inherited feature somewhere in the tree. 
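As an illustration only, the following Python sketch shows one way to render these ingredients concretely: a labelled tree node whose label set plays the role of the monadic predicates in P, a check that a local tree is induced by the ID5 rule encoded above, and the kind of closure computation that propagation of features between linked nodes calls for. The class, the label encoding, and the helper names are ours, not part of the L2K,P formalism.

from dataclasses import dataclass, field
from typing import List, Set, Tuple

@dataclass
class Node:
    # A node of a labelled tree: `labels` plays the role of the monadic
    # predicates in P (category symbols, features, heads) decorating the tree.
    labels: Set[str]
    children: List["Node"] = field(default_factory=list)

def induced_by_id5(x: Node) -> bool:
    # Checks whether the local tree rooted at x is induced by the ID rule
    # encoded above as ID5(x, y1, y2, y3): Children(x, y1, y2, y3) & VP(x)
    # & H(y1) & (SUBCAT,5)(y1) & NP(y2) & NP(y3).
    if "VP" not in x.labels or len(x.children) != 3:
        return False
    y1, y2, y3 = x.children
    return {"H", "SUBCAT:5"} <= y1.labels and "NP" in y2.labels and "NP" in y3.labels

def closure(seed: Set[int], links: Set[Tuple[int, int]]) -> Set[int]:
    # Smallest superset of `seed` closed under a symmetric link relation,
    # i.e. the first-order inductive closure that feature propagation requires.
    out, changed = set(seed), True
    while changed:
        changed = False
        for a, b in links:
            if (a in out) != (b in out):
                out |= {a, b}
                changed = True
    return out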
In order to account for this propagation of features, the def- inition of FSDs in GKP&S is based on identifying pairs of nodes that co-vary wrt the relevant features in all possible extensions of the given tree. As a re- suit, although the treatment in GKP&S is actually declarative, this fact is far from obvious. Again, it is not difficult to define the configura- tions of local trees in which nodes are required to agree by FFP, CAP, or HFC in L 2 Let the predi- K,P" cate Propagatey(z, y) hold for a pair of nodes z and y iff they are required to agree on f by one of these principles (and are, thus, in the same local tree). Note that Propagate is symmetric. Following the terminology of GKP&S, we can identify the set of nodes that are prohibited from taking feature f by the combination of the ID rules, FFP, CAP, and HFC as the set of nodes that are privileged wrt f. This includes all nodes that are not Free for f as well 7We will not elaborate here on the encoding of cat- egories in L 2 K,P, nor on non-finite ID schema like the iterating co-ordination schema. These present no signif- icant problems. as any node connected to such a node by a sequence of Propagate/ links. We, in essence, define this in- ductively. P' (X) is true of a set iff it includes all ] nodes not Free for f and is closed wrt Propagate/. PrivSet] (X) is true of the smallest such set. P; (x) - (Vx)[- Frees (x) X(x)] ^ (Vx)[(3y)[X(y) A Propagate] (x, y)] ---* X(x)] PrivSetl(X) = P)(X) A (VY)[P) (Y) --~ Subset(X, Y)]. There are two things to note about this definition. First, in any tree there is a unique set satisfying PrivSet/(X) and this contains exactly those nodes not Free for f or connected to such a node by Propagate]. Second, while this is a first-order in- ductive property, the definition is a second-order ex- plicit definition. In fact, the second-order quantifi- cation of L 2 allows us to capture any monadic K,P first-order inductively or implicitly definable prop- erty explicitly. Armed with this definition, we can identify indi- viduals that are privileged wrt f simply as the mem- bers of PrivSetl.s Privileged] (x) = (3X)[PrivSety (X) A X(z)]. One can define Privileged_,/(x) which holds when- ever x is required to take the feature f along similar lines. These, then, let us capture FSDs. For the default [-INV], for instance, we get: (¥x)[-~Privileged[_ INV](X) ""+ [-- INV](x)]. For [BAR0] D,,~ [PAS] (which says that [Bar 0] nodes are, by default, not marked passive), we get: (Vz)[ ([BAR 0](x) A ~Privileged_,[pAs](X)) -~[PAS](x)]. The key thing to note about this treatment of FSDs is its simplicity relative to the treatment of GKP&S. The second-order quantification allows us to reason directly in terms of the sequence of nodes extending from the privileged node to the local tree that actually licenses the privilege. The immediate benefit is the fact that it is clear that the property of satisfying a set of FSDs is a static property of labeled trees and does not depend on the particular strategy employed in checking the tree for compliance. SWe could, of course, skip the definition of PrivSet/ and define Privilegedy(x) as (VX)[P'(X) ---* Z(x)], but we prefer to emphasize the inductive nature of the definition. 13 4 Chains in GB The key issue in capturing GB theories within L 2 K,P is the fact that the mechanism of free-indexation is provably non-definable. 
Thus definitions of prin- ciples that necessarily employ free-indexation have no direct interpretation in L 2 (hardly surprising, K,P as we expect GB to be capable of expressing non- context-free languages). In many cases, though, ref- erences to indices can be eliminated in favor of the underlying structural relationships they express. 9 The most prominent example is the definition of the chains formed by move-a. The fundamental problem here is identifying each trace with its an- tecedent without referencing their index. Accounts of the licensing of traces that, in many cases of movement, replace co-indexation with government relations have been offered by both Rizzi (1990) and Manzini (1992). The key element of these ac- counts, from our point of view, is that the antecedent of a trace must be the closest antecedent-governor of the appropriate type. These relationships are easy to capture in L 2 For A-movement, for instance, K,P" we have: A-Antecedent-Governs(x, y) -~A-pos(x) A C-Commands(x, y) A F.Eq(x, y) A --x is a potential antecedent in an A-position -~(3z)[Intervening-Barrier(z, x, y)] A --no barrier intervenes -~(Bz)[Spec(z) A-~A-pos(z) A C-Commands(z, x) A Intervenes(z, x, y)] --minimality is respected where F.Eq(x, y) is a conjunction of biconditionals that assures that x and y agree on the appropriate features and the other predicates are are standard GB notions that are definable in L 2 K,P" Antecedent-government, in Rizzi's and Manzini's accounts, is the key relationship between adjacent members of chains which are identified by non- referential indices, but plays no role in the definition of chains which are assigned a referential index3 ° Manzini argues, however, that referential chains can- not overlap, and thus we will never need to distin- guish multiple referential chains in any single con- text. Since we can interpret any bounded number of indices simply as distinct labels, there is no difficulty in identifying the members of referential chains in L 2 On these and similar grounds we can extend K,P" these accounts to identify adjacent members of ref- erential chains, and, at least in the case of English, 9More detailed expositions of the interpretation 2 of GB in LK,p can be found in Rogers (1996a), Rogers (1995), and Rogers (1994). 1°This accounts for subject/object asymmetries. of chains of head movement and of rightward move- ment. This gives us five mutually exclusive relations which we can combine into a single link relation that must hold between every trace and its antecedent: Link(x,y) - A-Link(z, y) V A-Ref-Link(x, y) V A---Ref-Link(x, y) V X°-Link(x, y) V Right-Link(x, y). The idea now is to define chains as sequences of nodes that are linearly ordered by Link, but before we can do this there is still one issue to resolve. While minimality ensures that every trace must have a unique antecedent, we may yet admit a single an- tecedent that licenses multiple traces. To rule out this possibility, we require chains to be closed wrt the link relation, i.e., every chain must include every node that is related by Link to any node already in the chain. Our definition, then, is in essence the def- inition, in GB terms, of a discrete linear order with endpoints, augmented with this closure property. 
Chain(X) -- (3!x)[X(x) A Target(x)] A --X contains exactly one Target (3!x)[X(x) A Base(x)] A --and one Base (Vx)[X(x) A -~Warget(x) ---* (3!y)[Z(y) A Link(y,x)]] A --All non-Target have a unique an- tecedent in X (Vx)[X(x) A-~Base(x) --~ (3!y)[X(y) A Link(x, y)]] A --All non-Base have a unique suc- cessor in X (Vx, y)[X(x) A (Link(x, y) V Link(y, x)) ---* X(y)] --X is closed wrt the Link relation Note that every node will be a member of exactly one (possibly trivial) chain. The requirement that chains be closed wrt Link means that chains cannot overlap unless they are of distinct types. This definition works for English be- cause it is possible, in English, to resolve chains into boundedly many types in such a way that no two chains of the same type ever overlap. In fact, it fails only in cases, like head-raising in Dutch, where there are potentially unboundedly many chains that may overlap a single point in the tree. Thus, this gives us a property separating GB theories of movement that license strongly context-free languages from those that potentially don't--if we can establish a fixed bound on the number of chains that can overlap, then the definition we sketch here will suffice to capture the theory in L 2 and, consequently, the K,P theory licenses only strongly context-free languages. 14 This is a reasonably natural diagnostic for context- freeness in GB and is close to common intuitions of what is difficult about head-raising constructions; it gives those intuitions theoretical substance and provides a reasonably clear strategy for establishing context-freeness. this distinction is; one particularly interesting ques- tion is whether it has empirical consequences. It is only from the model-theoretic perspective that the question even arises. 6 Conclusion 5 A Comparison and a Contrast Having interpretations both of GPSG and of a GB account of English in L 2 provides a certain K,P amount of insight into the distinctions between these approaches. For example, while the explanations of filler-gap relationships in GB and GPSG are quite dramatically dissimilar, when one focuses on the structures these accounts license one finds some sur- prising parallels. In the light of our interpretation of antecedent-government, one can understand the role of minimality in l~izzi's and Manzini's accounts as eliminating ambiguity from the sequence of relations connecting the gap with its filler. In GPSG this con- nection is made by the sequence of agreement rela- tionships dictated by the Foot Feature Principle. So while both theories accomplish agreement between filler and gap through marking a sequence of ele- ments falling between them, the GB account marks as few as possible while the GPSG account marks every node bf the spine of the tree spanning them. In both cases, the complexity of the set of licensed structures can be limited to be strongly context-free iff the number of relationships that must be distin- guished in a given context can be bounded. One finds a strong contrast, on the other hand, in the way in which GB and GPSG encode language universals. In GB it is presumed that all princi- ples are universal with the theory being specialized to specific languages by a small set of finitely vary- ing parameters. These principles are simply prop- erties of trees. In terms of models, one can un- derstand GB to define a universal language--the set of all analyses that can occur in human lan- guages. 
The principles then distinguish particular sub-languages--the head-final or the pro-drop lan- guages, for instance. Each realized human language is just the intersection of the languages selected by the settings of its parameters. In GPSG, in contrast, many universals are, in essence, closure properties that must be exhibited by human languages--if the language includes trees in which a particular config- uration occurs then it includes variants of those trees in which certain related configurations occur. Both the ECPO principle and the metarules can be under- stood in this way. Thus while universals in GB are properties of trees, in GPSG they tend to be proper- ties of sets of trees. This makes a significant differ- ence in capturing these theories model-theoretically; in the GB case one is defining sets of models, in the GPSG case one is defining sets of sets of models. It is not at all clear what the linguistic significance of We have illustrated a general formal framework for expressing theories of syntax based on axiomatiz- ing classes of models in L 2 This approach has a K,P* number of strengths. First, as should be clear from our brief explorations of aspects of GPSG and GB~ re-formalizations of existing theories within L 2 K,P can offer a clarifying perspective on those theories, and, in particular, on the consequences of individ- ual components of those theories. Secondly, the framework is purely declarative and focuses on those aspects of language that are more or less directly observable--their structural properties. It allows us to reason about the consequences of a theory with- out hypothesizing a specific mechanism implement- ing it. The abstract properties of the mechanisms that might implement those theories, however, are not beyond our reach. The key virtue of descrip- tive complexity results like the characterizations of language-theoretic complexity classes discussed here and the more typical characterizations of computa- tional complexity classes (Gurevich, 1988; Immer- man, 1989) is that they allow us to determine the complexity of checking properties independently of how that checking is implemented. Thus we can use such descriptive complexity results to draw conclu- sions about those abstract properties of such mech- anisms that are actually inferable from their observ- able behavior. Finally, by providing a uniform repre- sentation for a variety of linguistic theories, it offers a framework for comparing their consequences. Ul- timately it has the potential to reduce distinctions between the mechanisms underlying those theories to distinctions between the properties of the sets of structures they license. In this way one might hope to illuminate the empirical consequences of these dis- tinctions, should any, in fact, exist. References Blackburn, Patrick, Claire Gardent, and Wilfried Meyer-Viol. 1993. Talking about trees. In EACL 93, pages 21-29. European Association for Com- putational Linguistics. Blackburn, Patrick and Wilfried Meyer-Viol. 1994. Linguistics, logic, and finite trees. Bulletin of the IGPL, 2(1):3-29, March. Biichi, J. R. 1960. Weak second-order arithmetic and finite automata. Zeitschrift fiir malhemalis- che Logik und Grundlagen der Mathematik, 6:66- 92. 15 Carpenter, Bob. 1992. The Logic of Typed Fea- ture Structures; with Applications to Unification Grammars, Logic Programs and Constraint Reso- lution. Number 32 in Cambridge Tracts in The- oretical Computer Science. Cambridge University Press. Cornell, Thomas Longacre. 1992. 
Description Theory, Licensing Theory, and Principle-Based Grammars and Parsers. Ph.D. thesis, University of California Los Angeles. Dawar, Anuj and K. Vijay-Shanker. 1990. An inter- pretation of negation in feature structure descrip- tions. Computational Linguistics, 16(1):11-21. Doner, John. 1970. Tree acceptors and some of their applications. Journal of Computer and System Sciences, 4:406-451. Gazdar, Gerald, Ewan Klein, Geoffrey Pullum, and Ivan Sag. 1985. Generalized Phrase Structure Grammar. Harvard University Press. Gazdar, Gerald, Geoffrey Pullum, Robert Carpen- ter, Ewan Klein, T. E. Hukari, and R. D. Levine. 1988. Category structures. Computational Lin- guistics, 14:1-19. Gorn, Saul. 1967. Explicit definitions and linguistic dominoes. In John F. Hart and Satoru Takasu, editors, Systems and Computer Science, Proceed- ings of the Conference held at Univ. of Western Ontario, 1965. Univ. of Toronto Press. Gurevich, Yuri. 1988. Logic and the challenge of computer science. In E. BSrger, editor, Current Trends in Theoretical Computer Science. Com- puter Science Press, chapter 1, pages 1-57. Immerman, Neil. 1989. Descriptive and compu- tational complexity. In Proceedings of Symposia in Applied Mathematics, pages 75-91. American Mathematical Society. Johnson, David E. and Paul M. Postal. 1980. Are Pair Grammar. Princeton University Press, Princeton, New Jersey. Johnson, Mark. 1988. Attribute- Value Logic and the Theory of Grammar. Number 16 in CSLI Lecture Notes. Center for the Study of Language and In- formation, Stanford, CA. Johnson, Mark. 1989. The use of knowledge of language. Journal of Psycholinguistic Research, 18(1):105-128. Kasper, Robert T. and William C. Rounds. 1986. A logical semantics for feature structures. In Pro- ceedings of the 2~th Annual Meeting of the Asso- ciation for Computational Linguistics. Kasper, Robert T. and William C. Rounds. 1990. The logic of unification in grammar. Linguistics and Philosophy, 13:35-58. Keller, Bill. 1993. Feature Logics, Infinitary De- scriptions and Grammar. Number 44 in CSLI Lecture Notes. Center for the Study of Language and Information. Kracht, Marcus. 1995. Syntactic codes and gram- mar refinement. Journal of Logic, Language, and Information, 4:41-60. Manzini, Maria Rita. 1992. Locality: A Theory and Some of Its Empirical Consequences. MIT Press, Cambridge, Ma. Moshier, M. Drew and William C. Rounds. 1987. A logic for partially specified data structures. In ACM Symposium on the Principles of Program- ming Languages. Rizzi, Luigi. 1990. Relativized Minimality. MIT Press. Rogers, James. 1994. Studies in the Logic of Trees with Applications to Grammar Formalisms. Ph.D. dissertation, Univ. of Delaware. Rogers, James. 1995. On descriptive complexity, language complexity, and GB. In Patrick Black- burn and Maarten de Rijke, editors, Specifying Syntactic Structures. In Press. Also available as IRCS Technical Report 95-14. cmp-lg/9505041. Rogers, James. 1996a. A Descriptive Approach to Language-Theoretic Complexity. Studies in Logic, Language, and Information. CSLI Publications. To appear. Rogers, James. 1996b. The descriptive complexity of local, recognizable, and generalized recogniz- able sets. Technical report, IRCS, Univ. of Penn- sylvania. In Preparation. Rogers, James. 1996c. Grammarless phrase- structure grammar. Under Review. Rogers, James and K. Vijay-Shanker. 1994. Obtain- ing trees from their descriptions: An application to tree-adjoining grammars. Computational Intel- ligence, 10:401-421. Smolka, Gert. 1989. 
A feature logic with subsorts. LILOG Report 33, IBM Germany, Stuttgart. Stabler, Jr., Edward P. 1992. The Logical Approach to Syntax. Bradford. 16 | 1996 | 2 |
Pattern-Based Context-Free Grammars for Machine Translation Koichi Takeda Tokyo Research Laboratory, IBM Research 1623-14 Shimotsuruma, Yamato, Kanagawa 242, Japan Phone: 81-462-73-4569, 81-462-73-7413 (FAX) takeda@trl, vnet. ibm. com Abstract This paper proposes the use of "pattern- based" context-free grammars as a basis for building machine translation (MT) sys- tems, which are now being adopted as per- sonal tools by a broad range of users in the cyberspace society. We discuss ma- jor requirements for such tools, including easy customization for diverse domains, the efficiency of the translation algorithm, and scalability (incremental improvement in translation quality through user interac- tion), and describe how our approach meets these requirements. 1 Introduction With the explosive growth of the World-Wide Web (WWW) as information source, it has become rou- tine for Internet users to access textual data written in foreign languages. In Japan, for example, a dozen or so inexpensive MT tools have recently been put on the market to help PC users understand English text in WWW home pages. The MT techniques em- ployed in the tools, however, are fairly conventional. For reasons of affordability, their designers appear to have made no attempt to tackle the well-known problems in MT, such as how to ensure the learnabil- ity of correct translations and facilitate customiza- tion. As a result, users are forced to see the same kinds of translation errors over and over again, ex- cept they in cases where they involve merely adding a missing word or compound to a user dictionary, or specifying one of several word-to-word translations as a correct choice. There are several alternative approaches that might eventually liberate us from this limitation on the usability of MT systems: Unification-based grammar for- malisms and lexical-semantics formalisms (see LFG (Kaplan and Bresnan, 1982), HPSG (Pollard and Sag, 1987), and Generative Lexicon (Pustejovsky, 1991), for example) have been proposed to facili- tate computationally precise description of natural- language syntax and semantics. It is possible that, with the descriptive power of these grammars and lexicons, individual usages of words and phrases may be defined specifically enough to give correct trans- lations. Practical implementation of MT systems based on these formalisms, on the other hand, would not be possible without much more efficient parsing and disambiguation algorithms for these formalisms and a method for building a lexicon that is easy even for novices to use. Corpus-based or example-based MT (Sato and Nagao, 1990; Sumita and Iida, 1991) and statisti- cal MT (Brown et al., 1993) systems provide the easiest customizability, since users have only to sup- ply a collection of source and target sentence pairs (a bilingual corpus). Two open questions, however, have yet to be satisfactorily answered before we can confidently build commercial MT systems based on these approaches: • Can the system be used for various domains without showing severe degradation of transla- tion accuracy? • What is the minimum number of examples (or training data) required to achieve reasonable MT quality for a new domain? 
TAG-based MT (Abeill~, Schabes, and Joshi, 1990) 1 and pattern-based translation (Maruyama, 1993) share many important properties for successful implementation in practical MT systems, namely: • The existence of a polynomial-time parsing al- gorithm • A capability for describing a larger domain of locality (Schabes, Abeill~, and Joshi, 1988) • Synchronization (Shieber and Schabes, 1990) of the source and target language structures Readers should note, however, that the pars- 1 See LTAG (Schabes, AbeiU~, and Joshi, 1988) (Lex- icalized TAG) and STAG (Shieber and Schabes, 1990) (Synchronized TAG) for each member of the TAG (Tree Adjoining Grammar) family. 144 ing algorithm for TAGs has O(IGIn6) 2 worst case time complexity (Vijay-Shanker, 1987), and that the "patterns" in Maruyama's approach are merely context-free grammar (CFG) rules. Thus, it has been a challenge to find a framework in which we can enjoy both a grammar formalism with better descriptive power than CFG and more efficient pars- ing/generation algorithms than those of TAGs. 3 In this paper, we will show that there exists a class of "pattern-based" grammars that is weakly equivalent to CFG (thus allowing the CFG parsing algorithms to be used for our grammars), but that it facilitates description of the domain of locality. Furthermore, we will show that our framework can be extended to incorporate example-based MT and a powerful learning mechanism. 2 Pattern-Based Context-Free Grammars Pattern-based context-free grammars (PCFG) con- sists of a set of translation patterns. A pattern is a pair of CFG rules, and zero or more syntactic head and link constraints for nonterminal symbols. For example, the English-French translation pattern 4 NP:I miss:V:2 NP:3 ---* S:2 S:2 ~-- NP:3 manquer:V:2 h NP:I essentially describes a synchronized 5 pair consisting of a left-hand-side English CFG rule (called a source rule) NP V NP --~ S and a French CFG rule (called a target rule) S ~ NP V h NP accompanied by the following constraints. 1. Head constraints: The nonterminal symbol V in the source rule must have the verb miss as a syntactic head. The symbol V in the target rule must have the verb manquer as a syntactic head. The head of symbol S in the source (target) rule is identical to the head of symbol V in the source (target) rule as they are co-indexed. 2. Link constraints: Nonterminal symbols in source and target CFG rules are linked if they 2Where ]G] stands for the size of grammar G, and n is the length of an input string. 3Lexicalized CFG, or Tree Insertion Grammar (TIG) (Schabes and Waters, 1995), has been recently intro- duced to achieve such efficiency and lexicalization. 4and its inflectional variants -- we will discuss inflec- tions and agreement issues later. 5The meaning of the word "synchronized" here is ex- actly the same as in STAG (Shieber and Schabes, 1990). See also bilingual signs (Tsujii and Fujita, 1991) for a discussion of the importance of combining the appropri- ate domain of locality and synchronization. are given the same index ":i". Linked nonter- minal must be derived from a sequence of syn- chronized pairs. Thus, the first NP (NP:I) in the source rule corresponds to the second NP (NP:I) in the target rule, the Vs in both rules correspond to each other, and the second NP (NP:3) in the source rule corresponds to the first NP (NP:3) in the target rule. The source and target rules are called CFG skele- ton of the pattern. 
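For concreteness, a translation pattern of this kind can be represented by a small data structure pairing the two CFG skeletons and recording head constraints and co-indexing. The Python sketch below is one possible encoding, with names of our own choosing; it simply restates the miss/manquer pattern above and is not the system's internal representation.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class PatternRule:
    # One side of a pattern: a CFG skeleton plus optional head constraints.
    # rhs items are (symbol, index, head) triples; the index links co-indexed
    # constituents across the two sides, head is None when unconstrained, and
    # terminals (such as the preposition "à") carry neither index nor head.
    lhs: Tuple[str, Optional[int]]
    rhs: List[Tuple[str, Optional[int], Optional[str]]]

@dataclass
class TranslationPattern:
    source: PatternRule
    target: PatternRule

# The English-French pattern from the text:
#   NP:1 miss:V:2 NP:3 -> S:2      S:2 <- NP:3 manquer:V:2 à NP:1
miss_manquer = TranslationPattern(
    source=PatternRule(lhs=("S", 2),
                       rhs=[("NP", 1, None), ("V", 2, "miss"), ("NP", 3, None)]),
    target=PatternRule(lhs=("S", 2),
                       rhs=[("NP", 3, None), ("V", 2, "manquer"),
                            ("à", None, None), ("NP", 1, None)]),
)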
The notion of a syntactic head is similar to that used in unification grammars, al- though the heads in our patterns are simply encoded as character strings rather than as complex feature structures. A head is typically introduced 6 in preter- minal rules such as leave ---* V V *-- partir where two verbs, "leave" and "partir," are associated with the heads of the nonterminal symbol V. This is equivalently expressed as leave:l --~ V:I V:I ~ partir:l which is physically implemented as an entry of an English-French lexicon. A set T of translation patterns is said to accept an input s iff there is a derivation sequence Q for s using the source CFG skeletons of T, and every head constraint associated with the CFG skeletons in Q is satisfied. Similarly, T is said to translate s iff there is a synchronized derivation sequence Q for s such that T accepts s, and every head and link constraint associated with the source and target CFG skeletons in Q is satisfied. The derivation Q then produces a translation t as the resulting sequence of terminal symbols included in the target CFG skeletons in Q. Translation of an input string s essentially consists of the following three steps: 1. Parsing s by using the source CFG skeletons 2. Propagating link constraints from source to tar- get CFG skeletons to build a target CFG deriva- tion sequence 3. Generating t from the target CFG derivation sequence The third step is a trivial procedure when the target CFG derivation is obtained. Theorem 1 Let T be a PCFG. Then, there exists a CFG GT such that for two languages L(T) and L(GT) accepted by T and GT, respectively, L(T) = L(GT) holds. That is, T accepts a sentence s iff GT accepts s. Proof: We can construct a CFG GT as follows: 1. GT has the same set of terminal symbols as T. 6A nonterminal symbol X in a source or target CFG rule X --* X1 ... Xk can only be constrained to have one of the heads in the RHS X1 ... X~. Thus, monotonicity of head constraints holds throughout the parsing process. 145 2. For each nonterminal symbol X in T, GT in- eludes a set of nonterminal symbols {X~ ]w is either a terminal symbol in T or a special sym- bol e}. 3. For each preterminal rule X:i --+ wl:l w2:2 ... wk:k (1 < i < k), GT includes z Xwi --~ wl w2 ... wk (1 < i < k). If X is not co-indexed with any of wl, GT in- cludes Xe ~Wl w2 ... Wk. 4. For each source CFG rule with head constraints (hi, h2, ..., hk) and indexes (il, i2,..., ik), Y :ij ---* hl :Xl :il ... hk :Xk :ik (1 <_ j < k), GT includes Yhj ---* Xhl Xh2 ... Xhk. If Y is not co-indexed with any of its children, we have Y~ --* Xh~ Xh2 ... Xhk. If Xj has no head constraint in the above rule, GT includes a set of (N + 1) rules, where Xhj above is replaced with Xw for every terminal symbol w and Xe (Yhj will also be replaced if it is co-indexed with Xj).s Now, L(T) C_ L(GT) is obvious, since GT can simu- late the derivation sequence in T with corresponding rules in GT. L(GT) C L(T) can be proven, with mathematical induction, from the fact that every valid derivation sequence of GT satisfies head con- straints of corresponding rules in T. [3 Proposition 1 Let a CFG G be a set of source CFG skeletons in T. Then, L(T) C n(c). Since a valid derivation sequence in T is always a valid derivation sequence in G, the proof is immedi- ate. Similarly, we have Proposition 2 Let a CFG H be a subset of source CFG skeletons in T such that a source CFG skeleton k is in H iffk has no head constraints associated with it. Then, L(H) C L(T). 
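To see what the construction in the proof of Theorem 1 produces, the following sketch carries out step 4 for a single source CFG skeleton. The bracketed annotation X[h] for "nonterminal X with head h" and the function names are our own conventions, and the toy call uses a small leave/VP pattern; a full implementation would also cover preterminal rules.

from itertools import product
from typing import List, Optional, Tuple

def annotate(symbol: str, head: Optional[str]) -> str:
    # X with head h becomes the single CFG symbol "X[h]"; no head -> "X[e]".
    return f"{symbol}[{head if head is not None else 'e'}]"

def expand_source_rule(lhs: str,
                       lhs_coindex: Optional[int],
                       rhs: List[Tuple[str, Optional[int], Optional[str]]],
                       vocabulary: List[str]) -> List[Tuple[str, List[str]]]:
    # Step 4 of the Theorem-1 construction for one source CFG skeleton.
    # rhs items are (symbol, index, head).  Children without a head constraint
    # range over every terminal head in `vocabulary` plus the dummy head e,
    # which is where the (N + 1)^k blow-up mentioned in the footnote comes from.
    choices = []
    for sym, idx, head in rhs:
        if head is not None:
            choices.append([(sym, idx, head)])
        else:
            choices.append([(sym, idx, h) for h in vocabulary + [None]])
    rules = []
    for combo in product(*choices):
        # The LHS inherits the head of the child it is co-indexed with.
        lhs_head = None
        for sym, idx, head in combo:
            if lhs_coindex is not None and idx == lhs_coindex:
                lhs_head = head
        rules.append((annotate(lhs, lhs_head),
                      [annotate(sym, head) for sym, idx, head in combo]))
    return rules

# "leave:V:1 NP:2 -> VP:1" over a two-word vocabulary yields 3 annotated rules.
print(expand_source_rule("VP", 1, [("V", 1, "leave"), ("NP", 2, None)],
                         ["leave", "partir"]))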
THead constraints ate trivially satisfied or violated in preterminal rules. Hence, we assume, without loss of generality, that no head constraint is given in pretetmi- nal rules. We also assume that "X ---* w" implies "X:I w:l". STherefore, a single rule in T can be mapped to as many as (N + 1) k rules in GT, where N is the number of terminal symbols in T. GT could be exponentially larger than T. Two CFGs G and H define the range of CFL L(T). These two CFGs can be used to measure the "de- fault" translation quality, since idioms and colloca- tional phrases are typically translated by patterns with head constraints. Theorem 2 Let a CFG G be a set of source CFG skeletons in T. Then, L(T) C L(G) is undecidable. Proof" The decision problem, L(T) C L(G), of two CFLs such that L(T) C L(G) is solvable iff L(T) = L(G) is solvable. This includes a known un- decidable problem, L(T) = E*?, since we can choose a grammar U with L(U) = E*, nullify the entire set of rules in U by defining T to be a vacuous set {S:I a:Sb:l, Sb:l --+ b:Su:l} U U (Sv and S are start symbols in U and T, respectively), and, finally, let T further include an arbitrary CFG F. L(G) = E* is obvious, since G has {S --* Sb, Sb --* Sv} U U. Now, we have L(G) = L(T) iff L(F) = E*. [3 Theorem 2 shows that the syntactic coverage of T is, in general, only computable by T itself, even though T is merely a CFL. This may pose a serious problem when a grammar writer wishes to know if there is a specific expression that is only acceptable by using at least one pattern with head constraints, for which the answer is "no" iff L(G) = L(T). One way to trivialize this problem is to let T include a pattern with a pair of pure CFG rules for every pat- tern with head constraints, which guarantees that L(H) = L(T) = L(G). In this case, we know that the coverage of "default" patterns is always identi- cal to L(T). Although our "patterns" have no more theoreti- cal descriptive power than CFG, they can provide considerably better descriptions of the domain of lo- cality than ordinary CFG rules. For example, be:V:l year:NP:2 old ---* VP:I VP:I *- avoir:V:l an:NP:2 can handle such NP pairs as "one year" and "un an," and "more than two years" and "plus que deux ans," which would have to be covered by a large number of plain CFG rules. TAGs, on the other hand, are known to be "mildly context-sensitive" grammars, and they can capture a broader range of syntactic dependencies, such as cross-serial dependencies. The computational complexity of parsing for TAGs, how- ever, is O(IGIn6), which is far greater than that of CFG parsing. Moreover, defining a new STAG rule is not as easy for the users as just adding an entry into a dictionary, because each STAG rule has to be specified as a pair of tree structures. Our patterns, on the other hand, concentrate on specifying linear ordering of source and target constituents, and can be written by the users as easily as 9 9By sacrificing linguistic accuracy for the description of syntactic structures. 146 to leave * -- de quitter * to be year:* old = d'avoir an:* Here, the wildcard "*" stands for an NP by default. The preposition "to" and "de" are used to specify that the patterns are for VP pairs, and "to be" is used to show that the phrase is the BE-verb and its complement. A wildcard can be constrained with a head, as in "house:*" and "maison:*". 
The internal representations of these patterns are as follows: leave:V:l NP:2 ~ VP:I VP:I ~-- quitter:V:l NP:2 be:V:l year:NP:2 old --+ VP:I VP:I ~ avoir:V:l an:NP:2 These patterns can be associated with an explicit nonterminal symbol such as "V:*" or "ADJP:*" in addition to head constraints (e.g., "leave:V:*'). By defining a few such notations, these patterns can be successfully converted into the formal represen- tations defined in this section. Many of the diver- gences (Doff, 1993) in source and target language expressions are fairly collocational, and can be ap- propriately handled by using our patterns. Note the simplicity that results from using a notation in which users only have to specify the surface ordering of words and phrases. More powerful grammar for- malisms would generally require either a structural description or complex feature structures. 3 The Translation Algorithm The parsing algorithm for translation patterns can be any of known CFG parsing algorithms includ- ing CKY and Earley algorithms 1° At this stage, head and link constraints are ignored. It is easy to show that the number of target charts for a sin- gle source chart increases exponentially if we build target charts simultaneously with source charts. For example, the two patterns A:I B:2 ~ B:2 B:2 ~-- A:I B:2, and A:I B:2 --~ B:2 A:I ~- B:2 A:I will generate the following 2 n synchronized pairs of charts for the sequence of (n+l) nonterminal sym- bols AAA...AB, for which no effective packing of the target charts is possible. (A (A... (A B))) with (A (A... (A B))) (A (A... (A B))) with ((A ... (A B)) A) iA (A... (A S))) with (((B A) A)... A) Our strategy is thus to find a candidate set of source charts in polynomial time. We therefore apply heuristic measurements to identify the most promising patterns for generating translations. In 1°Our prototype implementation was based on the Earley algorithm, since this does not require lexicaliza- tion of CFG rules. this sense, the entire translation algorithm is not guaranteed to run in polynomial time. Practically, a timeout mechanism and a process for recovery from unsuccessful translation (e.g., applying the idea of fitted parse (Jensen and Heidorn, 1983) to target CFG rules) should be incorporated into the transla- tion algorithm. Some restrictions on patterns must be imposed to avoid infinitely many ambiguities and arbitrarily long translations. The following patterns are there- fore not allowed: 1. A--*XY~--B 2. A + X Y ~-C1...B...C~ if there is a cycle of synchronized derivation such that A--+ X...--~ A and B (or Cl...B...Ck) --* Y...-+ B, where A, B, X, and Y are nonterminal symbols with or without head and link constraints, and C's are either terminal or nonterminal symbols. The basic strategy for choosing a candidate derivation sequence from ambiguous parses is as follows. 11 A simplified view of the Earley algorithm (Earley, 1970) consists of three major components, predict(i), complete(i), and scan(i), which are called at each position i = 0, 1,..., n in an input string I = sls2...sn. Predict(i) returns a set of currently ap- plicable CFG rules at position i. Complete(i) com- bines inactive charts ending at i with active charts that look for the inactive charts at position i to pro- duce a new collection of active and inactive charts. Scan(i) tries to combine inactive charts with the symbol si+l at position i. Complete(n) gives the set of possible parses for the input I. 
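For concreteness, the simplified view just described can be written down as a compact recognizer over the source CFG skeletons, with head and link constraints deliberately ignored at this stage. The Python sketch below is illustrative only: the toy grammar mirrors the source skeletons of the sample patterns used later in the paper, all names are ours, and empty right-hand sides are assumed not to occur.

def earley_recognize(words, grammar, start="S"):
    # `grammar` maps each nonterminal to a list of right-hand-side tuples;
    # any symbol that is not a key of `grammar` is treated as a terminal.
    n = len(words)
    chart = [set() for _ in range(n + 1)]
    START = "_GAMMA_"
    chart[0].add((START, (start,), 0, 0))            # dummy start item
    for i in range(n + 1):
        agenda = list(chart[i])
        while agenda:
            lhs, rhs, dot, origin = agenda.pop()
            if dot < len(rhs):
                sym = rhs[dot]
                if sym in grammar:                   # predict
                    for prod in grammar[sym]:
                        item = (sym, tuple(prod), 0, i)
                        if item not in chart[i]:
                            chart[i].add(item)
                            agenda.append(item)
                elif i < n and words[i] == sym:      # scan
                    chart[i + 1].add((lhs, rhs, dot + 1, origin))
            else:                                    # complete
                for plhs, prhs, pdot, porigin in list(chart[origin]):
                    if pdot < len(prhs) and prhs[pdot] == lhs:
                        item = (plhs, prhs, pdot + 1, porigin)
                        if item not in chart[i]:
                            chart[i].add(item)
                            agenda.append(item)
    return (START, (start,), 1, 0) in chart[n]

grammar = {
    "S":    [("NP", "VP")],
    "VP":   [("VP", "ADVP"), ("V", "NP")],
    "NP":   [("he",), ("me",)],
    "V":    [("knows",)],
    "ADVP": [("well",)],
}
print(earley_recognize(["he", "knows", "me", "well"], grammar))  # True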
Now, for every inactive chart associated with a nonterminal symbol X for a span of (i~) (1 ~ i, j <_ n), there exists a set P of patterns with the source CFG skeleton, ... --* X. We can define the fol- lowing ordering of patterns in P; this gives patterns with which we can use head and link constraints for building target charts and translations. These can- didate patterns can be arranged and associated with the chart in the complete() procedure. 1. Prefer a pattern p with a source CFG skeleton X --~ X1...X~ over any other pattern q with the same source CFG skeleton X --~ X1 ..' Xk, such that p has a head constraint h:Xi if q has h:Xi (i = 1,...,k). The pattern p is said to be more specific than q. For example, p = 11 This strategy is similar to that of transfer-driven MT (TDMT) (Furuse and Iida, 1994). TDMT, however, is based on a combination of declarative/procedural knowl- edge sources for MT, and no clear computational prop- erties have been investigated. 147 "leave:V:1 house:NP --+ VP:I" is preferred to q = "leave:V:l NP --* VP:I". 2. Prefer a pattern p with a source CFG skeleton to any pattern q that has fewer terminal sym- bols in the source CFG skeleton than p. For example, prefer "take:V:l a walk" to "take:V:l NP" if these patterns give the VP charts with the same span. 3. Prefer a pattern p which does not violate any head constraint over those which violate a head constraint. 4. Prefer the shortest derivation sequence for each input substring. A pattern for a larger domain of locality tends to give a shorter derivation se- quence. These preferences can be expressed as numeric values (cost) for patterns. 12 Thus, our strategy fa- vors lexicalized (or head constrained) and colloca- tional patterns, which is exactly what we are go- ing to achieve with pattern-based MT. Selection of patterns in the derivation sequence accompanies the construction of a target chart. Link constraints are propagated from source to target derivation trees. This is basically a bottom-up procedure. Since the number M of distinct pairs (X,w), for a nonterminal symbol X and a subsequence w of input string s, is bounded by Kn 2, we can compute the m- best choice of pattern candidates for every inactive chart in time O(ITIKn 3) as claimed by Maruyama (Maruyama, 1993), and Schabes and Waters (Sch- abes and Waters, 1995). Here, K is the number of distinct nonterminal symbols in T, and n is the size of the input string. Note that the head constraints associated with the source CFG rules can be incor- porated in the parsing algorithm, since the number of triples (X,w,h), where h is a head of X, is bounded by Kn 3. We can modify the predict(), complete(), and scan() procedures to run in O([T[Kn 4) while checking the source head constraints. Construction of the target charts, if possible, on the basis of the m best candidate patterns for each source chart takes O(Kn~m) time. Here, m can be larger than 2 n if we generate every possible translation. The reader should note critical differences between lexicalized grammar rules (in the sense of LTAG and TIG) and translation patterns when they are used for MT. Firstly, a pattern is not necessarily lexicalized. An economical way of organizing translation patterns is to include non-lexicalized patterns as "default" translation rules. 
12A similar preference can be defined for the tar- get part of each pattern, but we found many counter- examples, where the number of nontermina] symbols shows no specificity of the patterns, in the target part of English-to-Japanese translation patterns. Therefore, only the head constraint violation in the target part is accounted for in our prototype. Secondly, lexicalization might increase the size of STAG grammars (in particular, compositional gram- mar rules such as ADJP NP --* NP) considerably when a large number of phrasal variations (adjec- tives, verbs in present participle form, various nu- meric expressions, and so on) multiplied by the num- ber of their translations, are associated with the ADJP part. The notion of structure sharing (Vijay- Shanker and Schabes, 1992) may have to be ex- tended from lexical to phrasal structures, as well as from monolingual to bilingual structures. Thirdly, a translation pattern can omit the tree structure of a collocation, and leave it as just a se- quence of terminal symbols. The simplicity of this helps users to add patterns easily, although precise description of syntactic dependencies is lost. 4 Features and Agreements Translation patterns can be enhanced with unifica- tion and feature structures to give patterns addi- tional power for describing gender, number, agree- ment, and so on. Since the descriptive power of unification-based grammars is considerably greater than that of CFG (Berwick, 1982), feature struc- tures have to be restricted to maintain the efficiency of parsing and generation algorithms. Shieber and Schabes briefly discuss the issue (Shieber and Sch- abes, 1990). We can also extend translation patterns as follows: Each nonterminal node in a pattern can be associated with a fixed-length vector of bi- nary features. This will enable us to specify such syntactic de- pendencies as agreement and subcategorization in patterns. Unification of binary features, however, is much simpler: unification of a feature-value pair succeeds only when the pair is either (0,0) or (1,1/. Since the feature vector has a fixed length, unifica- tion of two feature vectors is performed in a constant time. For example, the patterns 13 V:I:+TRANS NP:2 --* VP:I VP:I V:I:+TRANS NP:2 V:I:+INTRANS --+ VP:I VP:I ~- V:I:+INTRANS are unifiable with transitive and intransitive verbs, respectively. We can also distinguish local and head features, as postulated in HPSG. Simplified version of verb subcategorization is then encoded as VP:I:+TRANS-OBJ NP:2 --* VP:I:+OBJ VP:I:+OBJ ~-VP:I:+TRANS-OBJ NP:2 where "-OBJ" is a local feature for head VPs in LIISs, while "+OBJ" is a local feature for VPs in 13Again, these patterns can be mapped to a weakly equivalent set of CFG rules. See GPSG (Gazdar, Pul- lum, and Sag, 1985) for more details. 148 the RHSs. Unification of a local feature with +OBJ succeeds since it is not bound. Agreement on subjects (nominative NPs) and finite-form verbs (VPs, excluding the BE verb) is disjunctively specified as NP : 1 : +NOMI+3RD+SG VP : 2 : +FIN+3SG NP : 1 : +NOMI+3RD+PL VP : 2 : +FIN-3SG NP : 1 : +NOMI-3RD VP : 2 : +FIN-3SG NP : 1 : +NOMI VP : 2 : +FIN+PAST which is collectively expressed as NP : 1 : *AGRS VP : 2 : *AGRV Here, *AGRS and *AGRV are a pair of aggregate unification specifiers that succeeds only when one of the above combinations of the feature values is unifiable. Another way to extend our grammar formalism is to associate weights with patterns. 
It is then possi- ble to rank the matching patterns according to a lin- ear ordering of the weights rather than the pairwise partial ordering of patterns described in the previ- ous section. In our prototype system, each pattern has its original weight, and according to the prefer- ence measurement described in the previous section, a penalty is added to the weight to give the effective weight of the pattern in a particular context. Pat- terns with the least weight are to be chosen as the most preferred patterns. Numeric weights for patterns are extremely use- ful as means of assigning higher priorities uniformly to user-defined patterns. Statistical training of pat- terns can also be incorporated to calculate such weights systematically (Fujisaki et al., 1989). Figure I shows a sample translation of the input "He knows me well," using the following patterns. NP:I:*AGRS VP:I:*AGRS ~ S:I S:I ~- NP:I:*AGRS VP:I:*AGRS ... (a) VP:I ADVP:2 ~ VP:I VP:I ~ VP:I ADVP:2 ... (b) know:VP:l:+OBJ well --+ VP:I VP:I ~-- connaitre:VP:h+OBJ bien ... (c) V:I NP:2 --~ VP:I:+OBJ VP:I:+OBJ *-- V:I NP:2:-PRO ... (d) V:I NP:2 --+ VP:I:+OBJ VP:I:+OBJ ~ NP:2:+PRO V:I ... (e) To simplify the example, let us assume that we have the following preterminal rules: he --~ NP:+PRO+NOMI+3RD+SG NP:+PRO+NOMI+3RD+SG ~ il ... (f) me --+ NP:+PRO+CAUS+SG-3RD NP:+PRO+CAUS+SG-3RD ,--- me ... (g) knows --+ V:+FIN+3SG V:+FIN+3SG ,-- salt ... (h) knows --~ V:+FIN+3SG V:+FIN+3SG ~-- connait ... (i) Input: He knows me well Phase 1: Source Analysis [0 i] He ---> (f) NP (active arc [0 1] (a) NP.VP) [1 23 knows ---> (h) V, (i) V (active arcs [I 2] (d) V.NP, [1 2] (e) V.NP) [2 3] me ---> (g) NP (inactive arcs [I 3] (d) V NP, [i 3] (e) V NP) [I 3] knows me ---> (d), (e) VP (inactive arc [0 3] (a) NP VP, active arcs [I 3] (b) VP.well, [i 3] (c) VP.ADVP) [0 3] He knows me ---> (a) S [3 4] well ---> (j) ADVP, (k) ADVP (inactive arcs [I 4] (b) VP ADVP, [i 4] (c) VP ADVP) [i 4] knows me well ---> (b), (c) VP (inactive arc [0 4] (a) NP VP) [0 4] He knows me well ---> (a) S Phase 2: Constraint Checking [0 I] He ---> (f) NP [1 2] knows ---> (i) V, (j) V [2 3] me ---> (g) NP [I 3] knows me ---> (e) VP (pattern (d) fails) [0 3] He knows me ---> (a) S [3 4] well ---> (i) ADVP, (j) ADVP [i 4] knows me well ---> (b), (c) VP (preference ordering (c), (b)) [0 4] He knows me well ---> (a) S Phase 3: Target Generation [0 4] He knows me well ---> (a) S [0 1] He ---> il [I 4] knows me well ---> (c) VP well ---> bien [I 3] knows me ---> (e) VP [1 2] knows ---> connait (h) violates a head constraint [2 3] me ---> me Translation: il me connait bien Figure 1: Sample Translation well --* ADVP ADVP ~-- bien ... (j) well --~ ADVP ADVP ~-- beaucoup ... (k) In the above example, the Earley-based algorithm with source CFG rules is used in Phase 1. In Phase 2, head and link constraints are examined, and unifi- cation of feature structures is performed by using the charts obtained in Phase 1. Candidate patterns are ordered by their weights and preferences. Finally, in Phase 3, the target charts are built to generate translations based on the selected patterns. 5 Integration of Bilingual Corpora Integration of translation patterns with translation examples, or bilingual corpora, is the most impor- tant extension of our framework. There is no dis- 149 crete line between patterns and bilingual corpora. Rather, we can view them together as a uniform set of translation pairs with varying degrees of lex- icalization. 
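Looking back at the weighting scheme described at the start of this section (an original weight per pattern, context-dependent penalties from the preference measurement, least effective weight wins), a minimal Python sketch makes the mechanism concrete. The specific weights and penalties below are invented; the paper leaves the actual values to the grammar writer or to statistical training (Fujisaki et al., 1989).

def effective_weight(original_weight: float, penalties: list) -> float:
    # Penalties come from the pairwise preferences of the previous section
    # (missing head constraints, fewer terminals, head violations, longer derivations).
    return original_weight + sum(penalties)

# Hypothetical candidates for "knows me well", echoing patterns (c) and (b) above.
candidates = [
    ("know:VP:1:+OBJ well -> VP:1 / VP:1 <- connaitre:VP:1:+OBJ bien", 1.0, [0.0]),
    ("VP:1 ADVP:2 -> VP:1 / VP:1 <- VP:1 ADVP:2",                      1.0, [0.5, 0.5]),
]

for name, w, pen in sorted(candidates, key=lambda c: effective_weight(c[1], c[2])):
    print(f"{effective_weight(w, pen):4.1f}  {name}")
# The collocational pattern comes out ahead of the generic VP ADVP pattern,
# matching the preference ordering "(c), (b)" in the constraint-checking phase of Figure 1.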
Sentence pairs in the corpora, however, should not be just added as patterns, since they are often redundant, and such additions contribute to neither acquisition nor refinement of non-sentential patterns. Therefore, we have been testing the integration method with the following steps. Let T be a set of translation patterns, B be a bilingual corpus, and (s,t) be a pair of source and target sentences.

1. [Correct Translation] If T can translate s into t, do nothing.

2. [Competitive Situation] If T can translate s into t' (t != t'), do the following:

(a) [Lexicalization] If there is a paired derivation sequence Q of (s,t) in T, create a new pattern p' for a pattern p used in Q such that every nonterminal symbol X in p with no head constraint is associated with h:X in p', where the head h is the one instantiated in X of p. Add p' to T if it is not already there. Repeat the addition of such patterns, and assign low weights to them, until the refined sequence Q becomes the most likely translation of s. For example, add

leave:VP:1:+OBJ considerably:ADVP:2 -> VP:1    VP:1 <- laisser:VP:1:+OBJ considérablement:ADVP:2

if the existing VP ADVP pattern does not give a correct translation.

(b) [Addition of New Patterns] If there is no such paired derivation sequence, add specific patterns, if possible, for idioms and collocations that are missing in T, or add the pair (s,t) to T as a translation pattern. For example, add

leave:VP:1:+OBJ behind -> VP:1    VP:1 <- laisser:VP:1:+OBJ

if the phrase "leave it behind" is not correctly translated.

3. [Translation Failure] If T cannot translate s at all, add the pair (s,t) to T as a translation pattern.

The grammar acquisition scheme described above has not yet been automated, but has been manually simulated for a set of 770 English-Japanese simple sentence pairs designed for use in MT system evaluation, which is available from JEIDA (the Japan Electronic Industry Development Association) (JEIDA, 1995), including:

#100: Any question will be welcomed.
#200: He kept calm in the face of great danger.
#300: He is what is called "the man in the news".
#400: Japan registered a trade deficit of $101 million, reflecting the country's economic sluggishness, according to government figures.
#500: I also went to the beach 2 weeks earlier.

At an early stage of grammar acquisition, [Addition of New Patterns] was primarily used to enrich the set T of patterns, and many sentences were unambiguously and correctly translated. At a later stage, however, JEIDA sentences usually gave several translations, and [Lexicalization] with careful assignment of weights was the most critical task. Although these sentences are intended to test a system's ability to translate one basic linguistic phenomenon in each simple sentence, the result was strong evidence for our claim. Over 90% of JEIDA sentences were correctly translated. Among the failures were:

#95: I see some stamps on the desk.
#171: He is for the suggestion, but I'm against it.
#244: She made him an excellent wife.
#660: He painted the walls and the floor white.

Some (prepositional and sentential) attachment ambiguities need to be resolved on the basis of semantic information, and scoping of coordinated structures would have to be determined by using not only collocational patterns but also some measures of balance and similarities among constituents.
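The three-step integration procedure at the beginning of this section can be summarised as a small control loop, sketched here in Python. This is only a schematic rendering: translate, derivation_of, lexicalize, and the pattern store are placeholders for machinery the paper does not spell out, and in the experiments reported above the corresponding steps were carried out by hand.

def integrate(T, bilingual_corpus, translate, derivation_of, lexicalize):
    # Fold a bilingual corpus B into a pattern set T, one sentence pair at a time.
    for s, t in bilingual_corpus:
        translations = translate(T, s)
        if t in translations:
            continue                          # 1. correct translation: do nothing
        if translations:                      # 2. competitive situation
            q = derivation_of(T, s, t)
            if q is not None:
                # 2a. lexicalization: add head-instantiated, low-weight copies of the
                #     patterns used in q until the refined derivation wins.
                for p in lexicalize(q):
                    if p not in T:
                        T.add(p)
            else:
                # 2b. no paired derivation: add idiom/collocation patterns,
                #     or fall back to the whole sentence pair.
                T.add((s, t))
        else:
            T.add((s, t))                     # 3. translation failure: add the pair
    return T

# Tiny smoke test with stub components (everything here is a stand-in):
T = {("leave NP behind", "laisser NP")}
corpus = [("he left it behind", "il l'a laissé")]
T = integrate(T, corpus,
              translate=lambda T, s: [],           # pretend nothing is translatable yet
              derivation_of=lambda T, s, t: None,
              lexicalize=lambda q: [])
print(len(T))   # the unseen pair was added as a sentence-level pattern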
6 Conclusions and Future Work

Some assumptions about patterns should be re-examined when we extend the definition of patterns. The notion of head constraints may have to be extended into one of a set membership constraint if we need to handle coordinated structures (Kaplan and Maxwell III, 1988). Some light-verb phrases cannot be correctly translated without "exchanging" several feature values between the verb and its object. A similar problem has been found in be-verb phrases. Grammar acquisition and corpus integration are fundamental issues, but automation of these processes (Watanabe, 1993) is still not complete. Development of an efficient translation algorithm, not just an efficient parsing algorithm, will make a significant contribution to research on synchronized grammars, including STAGs and our PCFGs.

Acknowledgments

Hideo Watanabe designed and implemented a prototype MT system for pattern-based CFGs, while Shiho Ogino developed a Japanese generator of the prototype. Their technical discussions and suggestions greatly helped me shape the idea of pattern-based CFGs. I would also like to thank Taijiro Tsutsumi, Masayuki Morohashi, Hiroshi Nomiyama, Tetsuya Nasukawa, and Naohiko Uramoto for their valuable comments. Michael McDonald, as usual, helped me write the final version.

References

Abeillé, A., Y. Schabes, and A. K. Joshi. 1990. "Using Lexicalized Tags for Machine Translation". In Proc. of the 13th International Conference on Computational Linguistics, volume 3, pages 1-6, Aug.

Berwick, R. C. 1982. "Computational Complexity and Lexical-Functional Grammar". American Journal of Computational Linguistics, pages 97-109, July-Dec.

Brown, P. F., S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. "The Mathematics of Statistical Machine Translation: Parameter Estimation". Computational Linguistics, 19(2):263-311, June.

Dorr, B. J. 1993. "Machine Translation: A View from the Lexicon". The MIT Press, Cambridge, Mass.

Earley, J. 1970. "An Efficient Context-free Parsing Algorithm". Communications of the ACM, 6(8):94-102, February.

Fujisaki, T., F. Jelinek, J. Cocke, E. Black, and T. Nishino. 1989. "A Probabilistic Parsing Method for Sentence Disambiguation". In Proc. of the International Workshop on Parsing Technologies, pages 85-94, Pittsburgh, Aug.

Furuse, O. and H. Iida. 1994. "Cooperation between Transfer and Analysis in Example-Based Framework". In Proc. of the 15th International Conference on Computational Linguistics, pages 645-651, Aug.

Gazdar, G., G. K. Pullum, and I. A. Sag. 1985. "Generalized Phrase Structure Grammar". Harvard University Press, Cambridge, Mass.

Jensen, K. and G. E. Heidorn. 1983. "The Fitted Parse: 100% Parsing Capability in a Syntactic Grammar of English". In Proc. of the 1st Conference on Applied NLP, pages 93-98.

Kaplan, R. and J. Bresnan. 1982. "Lexical-Functional Grammar: A Formal System for Generalized Grammatical Representation". In J. Bresnan, editor, "Mental Representation of Grammatical Relations". MIT Press, Cambridge, Mass., pages 173-281.

Kaplan, R. M. and J. T. Maxwell III. 1988. "Constituent Coordination in Lexical-Functional Grammar". In Proc. of the 12th International Conference on Computational Linguistics, pages 303-305, Aug.

Maruyama, H. 1993. "Pattern-Based Translation: Context-Free Transducer and Its Applications to Practical NLP". In Proc. of Natural Language Pacific Rim Symposium (NLPRS '93), pages 232-237, Dec.

Pollard, C. and I. A. Sag. 1987. "An Information-Based Syntax and Semantics, Vol. 1: Fundamentals". CSLI Lecture Notes, Number 13.

Pustejovsky, J. 1991. "The Generative Lexicon". Computational Linguistics, 17(4):409-441, December.

Sato, S. and M. Nagao. 1990. "Toward Memory-based Translation". In Proc. of the 13th International Conference on Computational Linguistics, pages 247-252, Helsinki, Aug.

Schabes, Y., A. Abeillé, and A. K. Joshi. 1988. "Parsing Algorithm with 'Lexicalized' Grammars: Application to Tree Adjoining Grammars". In Proc. of the 12th International Conference on Computational Linguistics, pages 578-583, Aug.

Schabes, Y. and R. C. Waters. 1995. "Tree Insertion Grammar: A Cubic-Time, Parsable Formalism that Lexicalizes Context-Free Grammar without Changing the Trees Produced". Computational Linguistics, 21(4):479-513, Dec.

Shieber, S. M. and Y. Schabes. 1990. "Synchronous Tree-Adjoining Grammars". In Proc. of the 13th International Conference on Computational Linguistics, pages 253-258, August.

Sumita, E. and H. Iida. 1991. "Experiments and Prospects of Example-Based Machine Translation". In Proc. of the 29th Annual Meeting of the Association for Computational Linguistics, pages 185-192, Berkeley, June.

JEIDA (the Japan Electronic Industry Development Association). 1995. "Evaluation Standards for Machine Translation Systems (in Japanese)". 95-COMP-17, Tokyo.

Tsujii, J. and K. Fujita. 1991. "Lexical Transfer based on Bilingual Signs". In Proc. of the 5th European ACL Conference.

Vijay-Shanker, K. 1987. "A Study of Tree Adjoining Grammars". Ph.D. thesis, Department of Computer and Information Science, University of Pennsylvania.

Vijay-Shanker, K. and Y. Schabes. 1992. "Structure Sharing in Lexicalized Tree-Adjoining Grammars". In Proc. of the 14th International Conference on Computational Linguistics, pages 205-211, Aug.

Watanabe, H. 1993. "A Method for Extracting Translation Patterns from Translation Examples". In Proc. of 5th Intl. Conf. on Theoretical and Methodological Issues in Machine Translation of Natural Languages, pages 292-301, July.
A Polynomial-Time Algorithm for Statistical Machine Translation

Dekai Wu
HKUST, Department of Computer Science, University of Science and Technology
Clear Water Bay, Hong Kong
dekai@cs.ust.hk

Abstract

We introduce a polynomial-time algorithm for statistical machine translation. This algorithm can be used in place of the expensive, slow best-first search strategies in current statistical translation architectures. The approach employs the stochastic bracketing transduction grammar (SBTG) model we recently introduced to replace earlier word alignment channel models, while retaining a bigram language model. The new algorithm in our experience yields major speed improvement with no significant loss of accuracy.

1 Motivation

The statistical translation model introduced by IBM (Brown et al., 1990) views translation as a noisy channel process. Assume, as we do throughout this paper, that the input language is Chinese and the task is to translate into English. The underlying generative model, shown in Figure 1, contains a stochastic English sentence generator whose output is "corrupted" by the translation channel to produce Chinese sentences. In the IBM system, the language model employs simple n-grams, while the translation model employs several sets of parameters as discussed below. Estimation of the parameters has been described elsewhere (Brown et al., 1993). Translation is performed in the reverse direction from generation, as usual for recognition under generative models. For each Chinese sentence c that is to be translated, the system must attempt to find the English sentence e* such that:

(1) e* = argmax_e Pr(e | c)
(2)    = argmax_e Pr(c | e) Pr(e)

In the IBM model, the search for the optimal e* is performed using a best-first heuristic "stack search" similar to A* methods.

One of the primary obstacles to making the statistical translation approach practical is the slow speed of translation, as performed in A* fashion. This price is paid for the robustness that is obtained by using very flexible language and translation models. The language model allows sentences of arbitrary order and the translation model allows arbitrary word-order permutation. The models employ no structural constraints, relying instead on probability parameters to assign low probabilities to implausible sentences. This exhaustive space, together with the massive number of parameters, permits greater modeling accuracy.

But while accuracy is enhanced, translation efficiency suffers due to the lack of structure in the hypothesis space. The translation channel is characterized by two sets of parameters: translation and alignment probabilities (see footnote 1). The translation probabilities describe lexical substitution, while alignment probabilities describe word-order permutation. The key problem is that the formulation of alignment probabilities a(i | j, V, T) permits the Chinese word in position j of a length-T sentence to map to any position i of a length-V English sentence. So V^T alignments are possible, yielding an exponential space with correspondingly slow search times.

Note there are no explicit linguistic grammars in the IBM channel model. Useful methods do exist for incorporating constraints fed in from other preprocessing modules, and some of these modules do employ linguistic grammars. For instance, we previously reported a method for improving search times in channel translation models that exploits bracketing information (Wu and Ng, 1995).
If any brackets for the Chinese sentence can be supplied as additional input information, produced for example by a preprocessing stage, a modified version of the A*-based algorithm can follow the brackets to guide the search heuristically. This strategy appears to produce moderate improvements in search speed and slightly better translations.

(Footnote 1: Various models have been constructed by the IBM team (Brown et al., 1993). This description corresponds to one of the simplest ones, "Model 2"; search costs for the more complex models are correspondingly higher.)

[Figure 1: Channel translation model. A stochastic English generator produces English strings, which the noisy channel corrupts into Chinese strings; the generative model runs from English to Chinese, while translation runs in the opposite direction.]

Such linguistic-preprocessing techniques could also be used with the new model described below, but the issue is independent of our focus here. In this paper we address the underlying assumptions of the core channel model itself, which does not directly use linguistic structure.

A slightly different model is employed for a word alignment application by Dagan et al. (Dagan, Church, and Gale, 1993). Instead of alignment probabilities, offset probabilities o(k) are employed, where k is essentially the positional distance between the English words aligned to two adjacent Chinese words:

(3) k = i - (A(j_prev) + (j - j_prev) N)

where j_prev is the position of the immediately preceding Chinese word and N is a constant that normalizes for average sentence lengths in different languages. The motivation is that words that are close to each other in the Chinese sentence should tend to be close in the English sentence as well. The size of the parameter set is greatly reduced from the |I| x |J| x |T| x |V| parameters of the alignment probabilities, down to a small set of |k| parameters. However, the search space remains the same.

The A*-style stack-decoding approach is in some ways a carryover from the speech recognition architectures that inspired the channel translation model. It has proven highly effective for speech recognition in both accuracy and speed, where the search space contains no order variation since the acoustic and text streams can be assumed to be linearly aligned. But in contrast, for translation models the stack search alone does not adequately compensate for the combinatorially more complex space that results from permitting arbitrary order variations. Indeed, the stack-decoding approach remains impractically slow for translation, and has not achieved the same kind of speed as for speech recognition.

The model we describe in this paper, like Dagan et al.'s model, encourages related words to stay together, and reduces the number of parameters used to describe word-order variation. But more importantly, it makes structural assumptions that eliminate large portions of the space of alignments, based on linguistic motivations. This greatly reduces the search space and makes possible a polynomial-time optimization algorithm.

2 ITG and BTG Overview

The new translation model is based on the recently introduced bilingual language modeling approach. Specifically, the model employs a bracketing transduction grammar or BTG (Wu, 1995a), which is a special case of inversion transduction grammars or ITGs (Wu, 1995c; Wu, 1995b; Wu, 1995d). These formalisms were originally developed for the purpose of parallel corpus annotation, with applications for bracketing, alignment, and segmentation.
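As a brief aside before the BTG/ITG details, the offset computation in Equation (3) above amounts to the following small Python sketch. The alignment function A and the numbers are invented for illustration; the point is only that adjacent Chinese words whose English translations stay close receive small offsets.

def offset(i: int, j: int, j_prev: int, A, N: float) -> float:
    # k = i - (A(j_prev) + (j - j_prev) * N), per Equation (3).
    # A maps a Chinese position to the English position it is aligned to;
    # N normalizes for the average length ratio of the two languages.
    return i - (A(j_prev) + (j - j_prev) * N)

# Toy alignment: Chinese position j happens to align to English position j.
A = lambda j: j
print(offset(i=4, j=4, j_prev=3, A=A, N=1.0))   # 0.0 -> the words stayed adjacent
print(offset(i=9, j=4, j_prev=3, A=A, N=1.0))   # 5.0 -> a long-distance jump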
This paper finds they are also useful for the translation system itself. In this section we summa- rize the main properties of BTGs and ITGs. An ITG consists of context-free productions where terminal symbols come in couples, for example x/y, where z is a Chinese word and y is an English trans- lation of x. 2 Any parse tree thus generates two strings, one on the Chinese stream and one on the English stream. Thus, the tree: (1) [~/I liST/took [--/a $:/e ~t/book]Np ]vP [,,~/for ~/you]pp ]vP Is produces, for example, the mutual translations: (2) a. [~ [[ST [--*~]NP ]vP [~]PP ]vP Is [W6 [[nA le [yi b~n shfi]Np ]vp [g@i ni]pp ]vP ]s b. [I [[took [a book]Np ]vP [for you]pp ]vP Is An additional mechanism accommodates a con- servative degree of word-order variation between the two languages. With each production of the gram- mar is associated either a straight orientation or an inverted orientation, respectively denoted as follows: VP --~ [VP PP] VP ---* (VP PP) In the case of a production with straight orien- tation, the right-hand-side symbols are visited left- to-right for both the Chinese and English streams. But for a production with inverted orientation, the 2Readers of the papers cited above should note that we have switched the roles of English and Chinese here, which helps simplify the presentation of the new trans- lation algorithm. 153 BTG all matchings ratio 1 1 1.000 1 1 1 1200 2 2 2 1.000 3 6 6 1.000 4 22 24 0.917 5 90 120 0.750 6 394 720 0.547 7 1806 5040 0.358 8 8558 40320 0.212 9 41586 362880 0.115 10 206098 3628800 0.057 11 1037718 39916800 0.026 12 5293446 479001600 0.011 13 27297738 6227020800 0.004 14 142078746 87178291200 0.002 15 745387038 1307674368000 0.001 16 3937603038 20922789888000 0.000 Figure 2: Number of legal word alignments between sentences of length f, with and without the BTG restriction. right-hand-side symbols are visited left-to-right for Chinese and right-to-left for English. Thus, the tree: (3) [~/I ([,.~/for ~/you]pp [$~'/took [--/a ak/e ~idt/book]Np ]vp )vP ]s produces translations with different word order: (4) a. [~J~ [[,,~*l~]pp [~Y [--2[~-~]Np ]VP ]VP ]S b. [I [[took [a book]Np ]vP [for you]pp ]vP ]s In the special case of BTGs which are employed in the model presented below, there is only one un- differentiated nonterminal category (aside from the start symbol). Designating this category A, this means all non-lexical productions are of one of these two forms: A ---+ [AA...A] A ---+ (AA...A} The degree of word-order flexibility is the criti- cal point. BTGs make a favorable trade-off between efficiency and expressiveness: constraints are strong enough to allow algorithms to operate efficiently, but without so much loss of expressiveness as to hinder useful translation. We summarize here; details are given elsewhere (Wu, 1995b). With regard to efficiency, Figure 2 demonstrates the kind of reduction that BTGs obtain in the space of possible alignments. The number of possible alignments, compared against the unrestricted case where any English word may align to any Chinese position, drops off dramatically for strings longer than four words. (This table makes the simplifica- tion of counting only 1-1 matchings and is merely representative.) With regard to expressiveness, we believe that al- most all variation in the order of arguments in a syntactic frame can be accommodated, a Syntac- tic frames generally contain four or fewer subcon- stituents. Figure 2 shows that for the case of four subconstituents, BTGs permit 22 out of the 24 pos- sible alignments. 
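The counts in Figure 2 can be regenerated with a short Python script. The BTG column appears to follow the large Schroeder numbers (1, 2, 6, 22, 90, ...), so the sketch below uses their standard recurrence; this identification is our own observation rather than a claim made in the text, and the "all matchings" column is simply f!.

from math import factorial

def btg_alignments(f: int) -> int:
    # Number of alignments a BTG permits between two length-f strings
    # (assuming, as Figure 2 does, 1-1 matchings only). Empirically this is
    # the (f-1)-th large Schroeder number, generated here by its usual recurrence.
    if f <= 1:
        return 1
    r = [1, 2]                      # r[0], r[1]
    while len(r) < f:
        n = len(r)
        r.append((3 * (2 * n - 1) * r[n - 1] - (n - 2) * r[n - 2]) // (n + 1))
    return r[f - 1]

for f in range(1, 17):
    total = factorial(f)
    btg = btg_alignments(f)
    print(f"{f:2d} {btg:12d} {total:18d} {btg / total:6.3f}")
# The printed rows reproduce the 22/24 = 0.917 entry for f = 4 and the rapid
# fall-off of the ratio for longer strings shown in Figure 2.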
The only prohibited arrangements are "inside-out" transformations (Wu, 1995b), which we have been unable to find any examples of in our corpus. Moreover, extremely distorted alignments can be handled by BTGs (Wu, 1995c), without re- sorting to the unrestricted-alignment model. The translation expressiveness of BTGs is by no means perfect. They are nonetheless proving very useful in applications and are substantially more fea- sible than previous models. In our previous corpus analysis applications, any expressiveness limitations were easily tolerable since degradation was graceful. In the present translation application, any expres- siveness limitation simply means that certain trans- lations are not considered. For the remainder of the paper, we take advantage of a convenient normal-form theorem (Wu, 1995a) that allows us to assume without loss of generality that the BTG only contains the binary-branching form for the non-lexicM productions. 4 3 BTG-Based Search for the Original Models A first approach to improving the translation search is to limit the allowed word alignment patterns to those permitted by a BTG. In this case, Equation (2) is kept as the objective function and the translation channel can be parameterized similarly to Dagan et al. (Dagan, Church, and Gale, 1993). The effect of the BTG restriction is just to constrain the shapes of the word-order distortions. A BTG rather than ITG is used since, as we discussed earlier, pure channel translation models operate without explicit gram- mars, providing no constituent categories around which a more sophisticated ITG could be structured. But the structural constraints of the BTG can im- prove search efficiency, even without differentiated constituent categories. Just as in the baseline sys- tem, we rely on the language and translation models to take up the slack in place of an explicit grammar. In this approach, an O(T 7) algorithm similar to the one described later can be constructed to replace A* search. 3Note that these points are not directed at free word- order languages. But in such languages, explicit mor- phological inflections make role identification and trans- lation easier. 4But see the conclusion for a caveat. 154 However we do not feel it is worth preserving off- set (or alignment or distortion) parameters simply for the sake of preserving the original translation channel model. These parameterizations were only intended to crudely model word-order variation. In- stead, the BTG itself can be used directly to proba- bilistically rank alternative alignments, as described next. 4 Replacing the Channel Model with a SBTG The second possibility is to use a stochastic brack- eting transduction grammar (SBTG) in the channel model, replacing the translation model altogether. In a SBTG, a probability is associated with each pro- duction. Thus for the normal-form BTG, we have: The translation lexicon is encoded in productions of a T ] g [AA] aO A -+ (A A) b(x,y) A ~ x/y 5(~ e) A ~ z/e b(qu) A --+ ely for all x, y lexical translations for all x Chinese vocabulary for all y English vocabulary the third kind. The latter two kinds of productions allow words of either Chinese or English to go un- matched. The SBTG assigns a probability Pr(c, e, q) to all generable trees q and sentence-pairs. In principle it can be used as the translation channel model by normalizing with Pr(e) and integrating out Pr(q) to give Pr(cle ) in Equation (2). 
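Before moving on, the claim that BTGs admit all but the "inside-out" arrangements can be checked directly with a small brute-force Python sketch. This is not part of the paper's algorithm; it simply tests whether a given permutation (source position i maps to target position perm[i]) can be built from nested straight/inverted binary combinations, i.e. whether it avoids the prohibited 2413/3142 shapes.

from functools import lru_cache
from itertools import permutations

def btg_parsable(perm) -> bool:
    # True iff perm can be produced by a binary-branching BTG derivation.
    perm = tuple(perm)

    @lru_cache(maxsize=None)
    def ok(lo, hi):
        if hi - lo <= 1:
            return True
        values = perm[lo:hi]
        for k in range(lo + 1, hi):
            left, right = values[:k - lo], values[k - lo:]
            # Each half must cover a contiguous block of target positions
            # (straight or inverted), and each half must itself be parsable.
            if (max(left) - min(left) == len(left) - 1 and
                    max(right) - min(right) == len(right) - 1 and
                    ok(lo, k) and ok(k, hi)):
                return True
        return False

    return ok(0, len(perm))

print(btg_parsable([2, 0, 3, 1]))   # False: a prohibited "inside-out" case (3142)
print(btg_parsable([1, 0, 3, 2]))   # True: two local swaps
# Of the 24 permutations of four elements, exactly 22 are admitted, as in Figure 2.
assert sum(btg_parsable(p) for p in permutations(range(4))) == 22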
In practice, a strong language model makes this unnecessary, so we can instead optimize the simpler Viterbi approximation (4) e* = argmaxPr(c, e, q) Pr(e) e To complete the picture we add a bigram model ge~-lej = g(ej lej_l) for the English language model Pr(e). Offset, alignment, or distortion parameters are entirely eliminated. A large part of the im- plicit function of such parameters--to prevent align- ments where too many frame arguments become separated--is rendered unnecessary by the BTG's structural constraints, which prohibit many such configurations altogether. Another part of the pa- rameters' ~urpose is subsumed by the SBTG's prob- abilities at] and a0, which can be set to prefer straight or inverted orientation depending on the language pair. As in the original models, the lan- guage model heavily influences the remaining order- ing decisions. Matters are complicated by the presence of the bi- gram model in the objective function (which word- alignment models, as opposed to translation models, do not need to deal with). As in our word-alignment model, the translation algorithm optimizes Equa- tion (4) via dynamic programming, similar to chart parsing (Earley, 1970) but with a probabilistic ob- jective function as for HMMs (Viterbi, 1967). But unlike the word-alignment model, to accommodate the bigram model we introduce indexes in the recur- rence not only on subtrees over the source Chinese string, but also on the delimiting words of the target English substrings. Another feature of the algorithm is that segmen- tation of the Chinese input sentence is performed in parallel with the translation search. Conven- tional architectures for Chinese NLP generally at- tempt to identify word boundaries as a preprocess- ing stage. 5 Whenever the segmentation preprocessor prematurely commits to an inappropriate segmenta- tion, difficulties are created for later stages. This problem is particularly acute for translation, since the decision as to whether to regard a sequence as a single unit depends on whether its components can be translated compositionally. This in turn often depends on what the target language is. In other words, the Chinese cannot be appropriately seg- mented except with respect to the target language of translation--a task-driven definition of correct seg- mentation. The algorithm is given below. A few remarks about the notation used: c~..t denotes the subse- quence of Chinese tokens cs+t, cs+2, • • • , ct. We use E(s..t) to denote the set of English words that are translations the Chinese word created by taking all tokens in c,..t together. E(s,t) denotes the set of English words that are translations of any of the Chinese words anywhere within c,..t. Note also that we assume the explicit sentence-start and sentence- end tokens co = <s> and CT+l = </s>, which makes the algorithm description more parsimonious. Fi- nally, the argmax operator is generalized to vector notation to accomodate multiple indices. 1. Initialization o • O<s<t<T 6~trr(~) = b~(c~..t/Y), :~ ~ E(s..-t) 2. Recursion For all s,t,y,z such that { -1_<s<t_<T+1 ~E(8,t) zEE(s,t) 6,~v~ maxrx[l xO x0 1 = ==~ tVstyz ~ Vstyz ~ VstyzJ 2 if 6 [1 "-6 0 and 6 [] 0 [] ~ty~ - st~ ,tyz > 6sty~ Ostyz : if 6 0 "~6 [] " and 6 0 o styz ! styz styz > 6styz otherwise 5Written Chinese contains no spaces to delimit words; any spaces in the earlier examples are artifacts of the parse tree brackets. 
155 Category Correct Incorrect Original A* Bracket A* BTG-Channel 67.5 69.8 68.2 32.5 30.2 31.8 Figure 3: Translation accuracy (percentage correct). where 6[] a [ ] ,iv. = max ,~sSyY 6StZz gYZ s<S<t YeE(s,S) ZEE(S,t) [ ] [1 ~bstyz [1 uJ styz 6O styz argmax s<S<t YfE(s,S) ZEE(S,t) max s<S<t YeE(S,t) ZEE(s,S) a[] 6,syY 6stz~ gvz a 0 ~,sz~ 6StyY gYZ styz 0 Cstvz = argmax a 0 ~sszz(j) 6styy(k) gYz 0 s<s<t Wstyz YEE(S,t) zeE(,,s) 3. Reconstruction Initialize by setting the root of the parse tree to q0 = (-1, T- 1, <s>, </s>). The remaining descendants in the optimal parse tree are then given recursively for any q = (s,t, y, z) by: a probabilistic optimization problem. But perhaps most importantly, our goal is to constrain as tightly as possible the space of possible transduction rela- tionships between two languages with fixed word- order, making no other language-specific assump- tions; we are thus driven to seek a kind of language- universal property. In contrast, the ID/LP work was directed at parsing a single language with free word-order. As a consequence, it would be neces- sary to enumerate a specific set of linear-precedence (LP) relations for the language, and moreover the immediate-dominance (ID) productions would typi- cally be more complex than binary-branching. This significantly increases time complexity, compared to our BTG model. Although it is not mentioned in their paper, the time complexity for ID/LP pars- ing rises exponentially with the length of produc- tion right-hand-sides, due to the number of permuta- tions. ITGs avoid this with their restriction to inver- sions, rather than permutations, and BTGs further minimize the grammar size. We have also confirmed empirically that our models would not be feasible under general permutations. LEFT(q) RIGHT(q) NIL if t-s<1 (s,a [1 " ,,,[1~ ifOq [] = q , ,Y, ~Yq ) ~- (s,a 0q,w 0q,z~j if0q=0 NIL otherwise NIL if ~-,<1 = (g~],t,w~],z) if0q = [] (a~),t, y, ¢~)) if Oq = 0 NIL otherwise Assume the number of translations per word is bounded by some constant. Then the maximum size of E(s,t) is proportional to t - s. The asymptotic time complexity for the translation algorithm is thus bounded by O(T7). Note that in practice, actual performance is improved by the sparseness of the translation matrix. An interesting connection has been suggested to direct parsing for ID/LP grammars (Shieber, 1984), in which word-order variations would be accommo- dated by the parser, and related ideas for genera- tion of free word-order languages in the TAG frame- work (Joshi, 1987). Our work differs from the ID/LP work in several important respects. First, we are not merely parsing, but translating with a bigram lan- guage model. Also, of course, we are dealing with 5 Results The algorithm above was tested in the SILC transla- tion system. The translation lexicon was largely con- structed by training on the HKUST English-Chinese Parallel Bilingual Corpus, which consists of govern- mental transcripts. The corpus was sentence-aligned statistically (Wu, 1994); Chinese words and colloca- tions were extracted (Fung and Wu, 1994; Wu and Fung, 1994); then translation pairs were learned via an EM procedure (Wu and Xia, 1995). The re- sulting English vocabulary is approximately 6,500 words and the Chinese vocabulary is approximately 5,500 words, with a many-to-many translation map- ping averaging 2.25 Chinese translations per English word. Due to the unsupervised training, the transla- tion lexicon contains noise and is only at about 86% percent weighted precision. 
With regard to accuracy, we merely wish to demonstrate that for statistical MT, accuracy is not significantly compromised by substituting our effi- cient optimization algorithm. It is not our purpose here to argue that accuracy can be increased with our model. No morphological processing has been used to correct the output, and until now we have only been testing with a bigram model trained on extremely limited samples. A coarse evaluation of 156 Input: Output: Corpus: Input: Output: Corpus: Input: Output: Corpus: Input: Output: Corpus: Input: Output: Corpus: (Xigng g~mg de ~n dlng f~n r6ng shl w6 m~n sh~ng hu6 fgmg shi de zhi zh~.) Hong Kong's stabilize boom is us life styles's pillar. Our prosperity and stability underpin our way of life. (B6n g~ng de jing ji qian jing yfi zhSng gu6, t~ bi~ shl gu~ng dSng shrug de ring jl qi£n jing xi xi xi~ng gu~n.) Hong Kong's economic foreground with China, particular Guangdong province's economic foreground vitally interrelated. Our economic future is inextricably bound up with China, and with Guangdong Province in particular. (W6 wgm qu£n zhi chi ta de yl jign.) I absolutely uphold his views. I fully support his views. (Zh~ xi~ gn pdi k~ ji~ qi£ng w6 m~n rl hbu w~i chi jin r6ng w6n ding de n~ng li.) These arrangements can enforce us future kept financial stabilization's competency. These arrangements will enhance our ability to maintain monetary stability in the years to come. (Bh gub, w6 xihn zhi k~ yi k6n ding de shuS, w6 m~n ji~ng hul ti gSng w~i d£ d~o g~ xihng zhfi yho mfl biao su6 xfi de jing f~i.) However, I now can certainty's say, will provide for us attain various dominant goal necessary's current expenditure. The consultation process is continuing but I can confirm now that the necessary funds will be made available to meet the key targets. Figure 4: Example translation outputs. translation accuracy was performed on a random sample drawn from Chinese sentences of fewer than 20 words from the parallel corpus, the results of which are shown in Figure 3. We have judged only whether the correct meaning (as determined by the corresponding English sentence in the parallel cor- pus) is conveyed by the translation, paying particu- lar attention to word order, but otherwise ignoring morphological and function word choices. For com- parison, the accuracies from the A*-based systems are also shown. There is no significant difference in the accuracy. Some examples of the output are shown in Figure 4. On the other hand, the new algorithm has indeed proven to be much faster. At present we are unable to use direct measurement to compare the speed of the systems meaningfully, because of vast implemen- tational differences between the systems. However, the order-of-magnitude improvements are immedi- ately apparent. In the earlier system, translation of single sentences required on the order of hours (Sun Sparc 10 workstations). In contrast the new algo- rithm generally takes less than one minute--usually substantially less--with no special optimization of the code. 6 Conclusion We have introduced a new algorithm for the run- time optimization step in statistical machine trans- lation systems, whose polynomial-time complexity addresses one of the primary obstacles to practicality facing statistical MT. The underlying model for the algorithm is a combination of the stochastic BTG and bigram models. The improvement in speed does not appear to impair accuracy significantly. 
We have implemented a version that accepts ITGs rather than BTGs, and plan to experiment with more heavily structured models. However, it is im- portant to note that the search complexity rises ex- ponentially rather than polynomially with the size of the grammar, just as for context-free parsing (Bar- ton, Berwick, and Ristad, 1987). This is not relevant to the BTG-based model we have described since its grammar size is fixed; in fact the BTG's minimal grammar size has been an important advantage over more linguistically-motivated ITG-based models. 157 We have also implemented a generalized version that accepts arbitrary grammars not restricted to normal form, with two motivations. The pragmatic benefit is that structured grammars become easier to write, and more concise. The expressiveness ben- efit is that a wider family of probability distribu- tions can be written. As stated earlier, the normal form theorem guarantees that the same set of shapes will be explored by our search algorithm, regardless of whether a binary-branching BTG or an arbitrary BTG is used. But it may sometimes be useful to place probabilities on n-ary productions that vary with n in a way that cannot be expressed by com- posing binary productions; for example one might wish to encourage longer straight productions. The generalized version permits such strategies. Currently we are evaluating robustness extensions of the algorithm that permit words suggested by the language model to be inserted in the output sen- tence, which the original A* algorithms permitted. Acknowledgements Thanks to an anonymous referee for valuable com- ments, and to the SILC group members: Xuanyin Xia, Eva Wai-Man Fong, Cindy Ng, Hong-sing Wong, and Daniel Ka-Leung Chan. Many thanks Mso to Kathleen McKeown and her group for dis- cussion, support, and assistance. References Barton, G. Edward, Robert C. Berwick, and Eric Sven Ristad. 1987. Computational Complex- ity and Natural Language. MIT Press, Cambridge, MA. Brown, Peter F., John Cocke, Stephen A. DellaPi- etra, Vincent J. DellaPietra, Frederick Jelinek, John D. Lafferty, Robert L. Mercer, and Paul S. Roossin. 1990. A statistical approach to machine translation. Computational Linguistics, 16(2):29- 85. Brown, Peter F., Stephen A. DellaPietra, Vincent J. DellaPietra, and Robert L. Mercer. 1993. The mathematics of statisticM machine translation: Parameter estimation. Computational Linguis- tics, 19(2):263-311. Dagan, Ido, Kenneth W. Church, and William A. Gale. 1993. Robust bilingual word alignment for machine aided translation. In Proceedings of the Workshop on Very Large Corpora, pages 1-8, Columbus, OH, June. Earley, Jay. 1970. An efficient context-free pars- ing algorithm. Communications of the Associa- tion for Computing Machinery, 13(2):94-102. Fung, Pascale and Dekai Wu. 1994. Statistical aug- mentation of a Chinese machine-readable dictio- nary. In Proceedings of the Second Annual Work- shop on Very Large Corpora, pages 69-85, Kyoto, August. Joshi, Aravind K. 1987. Word-order variation in natural language generation. In Proceedings of AAAI-87, Sixth National Conference on Artificial Intelligence, pages 550-555. Shieber, Stuart M. 1984. Direct parsing of ID/LP grammars. Linguistics and Philosophy, 7:135- 154. Viterbi, Andrew J. 1967. Error bounds for convolu- tional codes and an asymptotically optimal decod- ing Mgorithm. IEEE Transactions on Information Theory, 13:260-269. Wu, Dekai. 1994. Aligning a parallel English- Chinese corpus statistically with lexical criteria. 
In Proceedings of the 32nd Annual Conference of the Association for Computational Linguistics, pages 80-87, Las Cruces, New Mexico, June.

Wu, Dekai. 1995a. An algorithm for simultaneously bracketing parallel texts by aligning words. In Proceedings of the 33rd Annual Conference of the Association for Computational Linguistics, pages 244-251, Cambridge, Massachusetts, June.

Wu, Dekai. 1995b. Grammarless extraction of phrasal translation examples from parallel texts. In TMI-95, Proceedings of the Sixth International Conference on Theoretical and Methodological Issues in Machine Translation, volume 2, pages 354-372, Leuven, Belgium, July.

Wu, Dekai. 1995c. Stochastic inversion transduction grammars, with application to segmentation, bracketing, and alignment of parallel corpora. In Proceedings of IJCAI-95, Fourteenth International Joint Conference on Artificial Intelligence, pages 1328-1334, Montreal, August.

Wu, Dekai. 1995d. Trainable coarse bilingual grammars for parallel text bracketing. In Proceedings of the Third Annual Workshop on Very Large Corpora, pages 69-81, Cambridge, Massachusetts, June.

Wu, Dekai and Pascale Fung. 1994. Improving Chinese tokenization with linguistic filters on statistical lexical acquisition. In Proceedings of the Fourth Conference on Applied Natural Language Processing, pages 180-181, Stuttgart, October.

Wu, Dekai and Cindy Ng. 1995. Using brackets to improve search for statistical machine translation. In PACLIC-10, Pacific Asia Conference on Language, Information and Computation, pages 195-204, Hong Kong, December.

Wu, Dekai and Xuanyin Xia. 1995. Large-scale automatic extraction of an English-Chinese lexicon. Machine Translation, 9(3-4):285-313.
SemHe: A Generalised Two-Level System

George Anton Kiraz*
Computer Laboratory, University of Cambridge (St John's College)
Email: George.Kiraz@cl.cam.ac.uk
URL: http://www.cl.cam.ac.uk/users/gkl05

Abstract

This paper presents a generalised two-level implementation which can handle linear and non-linear morphological operations. An algorithm for the interpretation of multi-tape two-level rules is described. In addition, a number of issues which arise when developing non-linear grammars are discussed with examples from Syriac.

1 Introduction

The introduction of two-level morphology (Koskenniemi, 1983) and subsequent developments have made implementing computational-morphology models a feasible task. Yet, two-level formalisms fell short of providing elegant means for the description of non-linear operations such as infixation, circumfixation and root-and-pattern morphology (see footnote 1). As a result, two-level implementations - e.g. (Antworth, 1990; Karttunen, 1983; Karttunen and Beesley, 1992; Ritchie et al., 1992) - have always been biased towards linear morphology. The past decade has seen a number of proposals for handling non-linear morphology (see footnote 2); however, none (apart from Beesley's work) seem to have been implemented over large descriptions, nor have they provided means by which the grammarian can develop non-linear descriptions using higher level notation. To test the validity of one's proposal or formalism, minimally a medium-scale description is a desideratum. SemHe (see footnote 3) fulfils this requirement. It is a generalised multi-tape two-level system which is being used in developing non-linear grammars.

(* Supported by a Benefactor Studentship from St John's College. This research was done under the supervision of Dr Stephen G. Pulman. Thanks to the anonymous reviewers for their comments. All mistakes remain mine.)

(Footnote 1: Although it is possible to express some classes of non-linear rules using standard two-level formalisms by means of ad hoc diacritics, e.g., infixation in (Antworth, 1990, p. 156), there are no means for expressing other classes such as root-and-pattern phenomena.)

(Footnote 2: (Kay, 1987), (Kataja and Koskenniemi, 1988), (Beesley et al., 1989), (Lavie et al., 1990), (Beesley, 1990), (Beesley, 1991), (Kornai, 1991), (Wiebe, 1992), (Pulman and Hepple, 1993), (Narayanan and Hashem, 1993), and (Bird and Ellison, 1994). See (Kiraz, 1996) for a review.)

This paper (1) presents the algorithms behind SemHe; (2) discusses the issues involved in compiling non-linear descriptions; and (3) proposes extensions/solutions to make writing non-linear rules easier and more elegant. The paper assumes knowledge of multi-tape two-level morphology (Kay, 1987; Kiraz, 1994c).

2 Linguistic Descriptions

The linguist provides SemHe with three pieces of data: a lexicon, two-level rules and a word formation grammar. All entries take the form of Prolog terms (see footnote 4). (Identifiers starting with an uppercase letter denote variables; otherwise they are instantiated symbols.) A lexical entry is described by the term synword(<morpheme>, <category>). Categories are of the form

<category_symbol> : [<feature_attr_1> = <value_1>, ..., <feature_attr_n> = <value_n>]

a notational variant of the PATR-II category formalism (Shieber, 1986).

(Footnote 3: The name SemHe (Syriac ṣemḥē 'rays') is not an acronym, but the title of a grammatical treatise written by the Syriac polymath (inter alia mathematician and grammarian) Bar 'Ebrāyā (1225-1286), viz. ktābā d-ṣemḥē 'The Book of Rays'.)
aWe describe here the terms which are relevant to this paper. For a full description, see (Kiraz, 1996). 159 - tl_alphabet(0, [k, t,b, a, el ). % surface alphabet tl_alphabet(1, [cl, c2, c3,v, ~] ). tl_alphabet(2, [k, t,b, ~] ). tl_alphabet (3, [a, e,~] ). % lexical alphabets tl_set(radical, [k,t,b]). tl_set(vowel, [a, el). tl_set(clc3, [cl, c3]). % variable sets tl_rule(R1, [[], [], []1, [[~], [~], [~]], [[], [], []], =>, [], [], [], [3,[[3,[3,[]]). tl_rule(R2, [[], [], [3], [[P], [C], []3, [[1, [], []3, =>, [], [C], [3, [clc3(P) ,radical(C)1, [[], [1, []]). tl_rule(R3, [[], [], []1, [[v], [1, IV]l, [[], [1, []1, =>, [], IV], [1, [vowel(V)], [[], [], [3]). tl_rule(R4, [[], [1, [1], [[v], [1, IV]l, [[c2,v], [], []], <=>, [1, [1, [], [vowel(V)], [[], [], []]). tLrule(Rb, [[1, [1, []1, [[c21, [C], [1], [[], [], []], <=>, [], [C], [], [radical(C) ], [ [], [root : [measure=p' al] ] , [] ] ). tl_rule(R6, [[], [], []], [[c2], [el, []], [[], [], []], <=>, [], [C,C], [], [radical(C)], [[], [root:[measure=pa''el]], []]). Listing 1 A two-level rule is described using a syntactic vari- ant of the formalism described by (Ruessink, 1989; Pulman and Hepple, 1993), including the extensions by (Kiraz, 1994c), tl_rule( <id),<LLC>, (Lex}, (RLC}, COp>, <LSC>, <RSC>, (variables>, (features)). The arguments are: (1) a rule identifier, id; (2) the left-lexical-context, LLC, the lexical center, Lex, and the right-lexical-context, RLC, each in the form of a list-of-lists, where the ith list represents the /th lex- ical tape; (3) an operator, => for optional rules or <=> for obligatory rules; (4) the left-surface-context, LSC, the surface center, Sur], and the right-surface- context, RSC, each in the form of a list; (5) a list of the variables used in the lexical and surface ex- pressions, each member in the form of a predicate indicating the set identifier (see in]ra) and an argu- ment indicating the variable in question; and (6) a set of features (i.e. category forms) in the form of a list-of-lists, where the ith item must unify with the feature-structure of the morpheme affected by the rule on the ith lexical tape. A lexical string maps to a surface string iff (1) they can be partitioned into pairs of lexical-surface subsequences, where each pair is licenced by a rule, and (2) no partition violates an obligatory rule. Alphabet declarations take the form tl_alphabet( ( tape> , <symbol_list)), and variable sets are described by the predicate tl_set({id), {symbol_list}). Word formation rules take the form of unification-based CFG rules, synrule(<identifier), (mother), [(daughter1},..., (daughtern}l). The following example illustrates the derivation of Syriac /ktab/5 'he wrote' (in the simple p'al measure) 6 from the pattern morpheme {cvcvc} 'ver- bal pattern', root {ktb} 'notion of writing', and vo- calism {a}. The three morphemes produce the un- derlying form */katab/, which surfaces as /ktab/ since short vowels in open unstressed syllables are deleted. The process is illustrated in (1)/ a ~'~ */katab/~ /ktab/ (1) c v c v c = I I L k t b The pa "el measure of the same verb, viz./katteb/, is derived by the gemination of the middle consonant (i.e. t) and applying the appropriate vocalism {ae}. The two-level grammar (Listing 1) assumes three lexical tapes. Uninstantiated contexts are denoted by an empty list. R1 is the morpheme boundary (= ~) rule. R2 and R3 sanction stem consonants and vowels, respectively. R4 is the obligatory vowel deletion rule. 
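Before moving on to the remaining rules, the derivation in (1) can be mimicked with a toy Python sketch (in Python rather than the system's Prolog, and much cruder than the rule interpreter described below): it zips the pattern, root, and vocalism tapes into the underlying form and then applies a single vowel-deletion step. The syllable test is deliberately simplified to "short vowel followed by consonant plus vowel", which is only a rough approximation of rule R4.

def interdigitate(pattern: str, root: str, vocalism: str) -> str:
    # Fill c slots from the root tape and v slots from the vocalism tape.
    r, v, out = iter(root), iter(vocalism), []
    for slot in pattern:
        out.append(next(r) if slot == "c" else next(v))
    return "".join(out)

def delete_open_syllable_vowels(word: str, vowels: str = "ae") -> str:
    # Crude stand-in for rule R4: drop a short vowel when a consonant plus
    # another vowel follows (roughly, an open unstressed syllable).
    out, i = [], 0
    while i < len(word):
        if (word[i] in vowels and i + 2 < len(word)
                and word[i + 1] not in vowels and word[i + 2] in vowels):
            i += 1                      # delete this vowel
            continue
        out.append(word[i])
        i += 1
    return "".join(out)

underlying = interdigitate("cvcvc", "ktb", "aa")   # *katab
print(underlying, "->", delete_open_syllable_vowels(underlying))   # katab -> ktab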
R5 and R6 map the second radical, [t], for p'al and pa"el forms, respectively. In this example, the lexicon contains the entries in (2). 8 (2) synword(clvc2vca,pattern : 0)- synword(ktb, root: [measure = M]). synword(aa, vocalism : [measure = p'al]). synword(ae, vocalism : [measure = pa"el]). Note that the value of 'measure' in the root entry is SSpirantization is ignored here; for a discussion on Syriac spirantization, see (Kiraz, 1995). 6Syriac verbs are classified under various measures (forms). The basic ones are: p'al, pa "el and 'a]'el. 7This analysis is along the lines of (McCarthy, 1981) - based on autosegmental phonology (Goldsmith, 1976). SSpreading is ignored here; for a discussion, see (Ki- raz, 1994c). 160 uninstantiated; it is determined from the feature val- ues in R5, R6 and/or the word grammar (see infra, §4.3). 3 Implementation There are two current methods for implement- ing two-level rules (both implemented in Semi{e): (1) compiling rules into finite-state automata (multi- tape transducers in our case), and (2) interpreting rules directly. The former provides better perfor- mance, while the latter facilitates the debugging of grammars (by tracing and by providing debugging utilities along the lines of (Carter, 1995)). Addi- tionally, the interpreter facilitates the incremental compilation of rules by simply allowing the user to toggle rules on and off. The compilation of the above formalism into au- tomata is described by (Grimley-Evans et al., 1996). The following is a description of the interpreter. 3.1 Internal Representation The word grammar is compiled into a shift-reduce parser. In addition, a first-and-follow algorithm, based on (Aho and Ullman, 1977), is applied to compute the feasible follow categories for each cat- egory type. The set of feasible follow categories, NextCats, of a particular category Cat is returned by the predicate FOLLOW(+Cat, -NextCats). Ad- ditionally, FOLLOW(bos, NextCats) returns the set of category symbols at the beginning of strings, and cos E NextCats indicates that Cat may occur at the end of strings. The lexical component is implemented as charac- ter tries (Knuth, 1973), one per tape. Given a list of lexical strings, Lex, and a list of lexical pointers, LexPtrs, the predicate LEXICAL-TRANSITIONS( q-Lex, +LexPtrs, - New Lex Ptrs, - LexC ats ) succeeds iff there are transitions on Lex from LexP- trs; it returns NewLexPtrs, and the categories, Lex- Cats, at the end of morphemes, if any. Two-level predicates are converted into an inter- nal representation: (1) every left-context expression is reversed and appended to an uninstantiated tail; (2) every right-context expression is appended to an uninstantiated tail; and (3) each rule is assigned a 6-bit 'precedence value' where every bit represents one of the six lexical and surface expressions. If an expression is not an empty list (i.e. context is spec- ified), the relevant bit is set. In analysis, surface expressions are assigned the most significant bits, while lexical expressions are assigned the least sig- nificant ones. In generation, the opposite state of affairs holds. Rules are then reasserted in the or- der of their precedence value. This ensures that rules which contain the most specified expressions are tested first resulting in better performance. 3.2 The Interpreter Algorithm The algorithms presented below are given in terms of prolog-like non-deterministic operations. A clause is satisfied iff all the conditions under it are satisfied. 
The predicates are depicted top-down in (3). (SemHe makes use of an earlier implementation by (Pulman and Hepple, 1993).) (3) Two-Level-Analysis l i I 1 l Invalid-partition ) In order to minimise accumulator-passing ar- guments, we assume the following initially-empty stacks: ParseStack accumulates the category struc- tures of the morphemes identified, and FeatureStack maintains the rule features encountered so far. ('+' indicates concatenation.) PARTITION partitions a two-level analysis into se- quences of lexical-surface pairs, each licenced by a rule. The base case of the predicate is given in List- ing 2, 9 and the recursive case in Listing 3. The recursive COERCE predicate ensures that no partition is violated by an obligatory rule. It takes three arguments: Result is the output of PARTITION (usually reversed by the calling predicate, hence, COERCE deals with the last partition first), PrevCats is a register which keeps track of the last morpheme category encountered, and Partition returns selected elements from Result. The base case of the predicate is simply COERCE([], _, []) - i.e., no more par- titions. The recursive case is shown in Listing 4. CurrentCats keeps track of the category of the mor- pheme which occures in the current partition. The invalidity of a partition is determined by INVALID- PARTITION (Listing 5). TwO-LEVEL-ANALYSIS (Listing 6) is the main predicate. It takes a surface string or lexical string(s) and returns a list of partitions and a 9For efficiency, variables appearing in left-context and centre expressions are evaluated after LEXICAL- TRANSITIONS since they will be fully instantiated then; only right-contexts are evaluated after the recursion. 161 PARTITION(SurfDone, SurfToDo, LexDone, LexToDo, LexPtrs, NextCats, Result) SurfToDo ---- [J & % surface string exhausted LexToDo = [ [], [] ,..-, [] ] & % all lexical strings exhausted LexPtrs = [rz,rt,-..,rt] & % all lexical pointers are at the root node eos E NextCats ~ % end-of-string Result = []. % output: no more results Listing 2 PARTITION( SurfDone, SurfToDo, LexDone, LexToDo, LexPtrs, NextCats, [ ResultHead I Resuit Tai~) there is tl_rule(Id, LLC, Lex, RLC, Op, LSC, Surf, RSC, Variables, Features) such that ( Op = (=> or <=>), LexDone = LLC, SurfDone -= LSC, SurfToDo = Surf + RSC and LexToDo = Lex + RLC) & LEXICAL-TRANSITIONS(Lex, LexPtrs, NewLexPtrs, LexCats) & push Features onto FeatureStack ~z % keep track of rule features if LexCats ¢ nil then % found a morpheme boundary? while FeatureStaek is not empty % unify rule and lexical features unify LexCats with (pop FeatureStaek) & push LexCats onto ParseStack ~z % update the parse stack if LexCats E NextCats then % get next category FOLLOW( LexCats, NewNextCats) end if ResultHead = Id/SurfDone/Surf/RSC/ LexDone/Lex/RL C/LexCats NewSurfDone = SurfDone + reverse Surf & % make new arguments ... NewSurfToDo = RSC & % ... and recurse NewLexDone = LexDone ÷ reverse Lex & NewLexToDo =- RLC & PARTITION( NewSurfDone, NewSurfToDo, NewLexDone, NewLex To Do, NewLexPtrs, NewNextCats, ResultTail) & for all SetId(Var) e Variables % check variables there is tLset(SetId, Set) such that Vat E Set. Listing 3 CoERcF~([Id/LSC/Surf/RSC/LLC//Lex//RLC//LexCats l ResultTai~, PrevCats, [Id/Surf//Lex l Partition Tai~) if LexCats yt nil then CurrentCats = LexCats else CurrentCats = PrevCats &: not INVALID-PARTITION(LSC~ Surf, RSC, LLC, Lex, RLC, CurrentCats) & CoERCE( Result Tail, CurrentCats, Partition TaiO. 
Listing 4 INVALID-PARTITION(LSC, Surf, RSC, LLC, Lex, RLC, Cats) there is tl_rule(Id, LLC, Lex, RLC, <=>, LSC, NotSur~, RSC, Variables, Features) such that NotSurf ¢ Surf for all Setld(Var) e Variables % check variables there is tl_set(SetId, Set) such that Vat E Set & unify Cats with Features & fail. Listing 5 162 TwO-LEVEL-ANALYSIS(?Surf, ? Lex, -Partition, -Parse) FOLLOW(bos, NextCats) &: PARTITION([], Surf, [[1, [] ,-", [11, Lex, [rt,rt,...,rt], NextCats, Result) CoERcE(reverse Result, nil, Partition) &: SHIFT-REDUCE( ParseStack, Parse). Listing 6 morphosyntactic parse tree. To analyse a sur- face form, one calls TwO-LEVEL-ANALYSIS(+Surf, -Lex, -Partition, -Parse). To generate a surface form, one calls TwO-LEVEL-ANALYSIS(-Surf, +Lex, -Partition, -Parse). 4 Developing Non-Linear Grammars When developing Semitic grammars, one comes across various issues and problems which normally do not arise with linear grammars. Some can be solved by known methods or 'tricks'; others require extensions in order to make developing grammars easier and more elegant. This section discuss issues which normally do not arise when compiling linear grammars. 4.1 Linearity vs. Non-Linearity In Semitic languages, non-linearity occurs only in stems. Hence, lexical descriptions of stems make use of three lexical tapes (pattern, root & vocalism), while those of prefixes and suffixes use the first lexi- cal tape. This requires duplicating rules when stat- ing lexical constraints. Consider rule R4 (Listing 1). It allows the deletion of the first stem vowel by the virtue of RLC (even if c2 was not indexed); hence /katab/--+ /ktab/. Now consider adding the suffix {eh} 'him/it': /katab/+{eh} ~/katbeh/, where the second stem vowel is deleted since deletion applies right-to-left; however, RLC can only cope with stem vowels. Rule R7 (Listing 7) is required. One might suggest placing constraints on surface expressions in- stead. However, doing so causes surface expressions to be dependent on other rules. Additionally, Lex in R4 and R7 deletes stem vow- els. Consider adding the prefix {wa} 'and': {wa} + /katab/ + {eh} --+ /wkatbeh/, where the prefix vowel is also deleted. To cope with this, two addi- tional rules like R4 and R7 are required, but with Lex = [[V], [], [1]. We resolve this by allowing the user to write ex- pansion rules of the from expand( (symbol), (expansion), (variables)). In our example, the expansion rules in (4) are needed. (4) expand(C, [[C], [], []], [radical(C)]). expand(C, [[c], [C], []], [radical(C)]). expand(V, [ [V], [], [11, [vowel (V) ]). expand(V, [[v], [], IV]l, [vowel(V)]). The linguist can then rewrite R4 as R8 (Listing 7), and expand it with the command expand(RS). This produces four rules of the form of R4, but with the following expressions for Lex and RLC: 1° Lex [[vl],[],[]] [[vl],[],[]] [ [v], [], [vl] ] [ [v], [], [vi]] 4.2 Vocalisation RLC [ [C,V2], [], [] ] [ [c, v], [C], [V2] ] [[C,V2],[], []] [ [c, v], [C], [V21 ] Orthographically, Semitic texts are written without short vowels. It was suggested by (Beesley et al., 1989, et. seq.) and (Kiraz, 1994c) to allow short vowels to be optionally deleted. This, however, puts a constraint on the grammar: no surface expres- sion can contain a vowel, lest the vowel is optionally deleted. We assume full vocalisation in writing rules. A second set of rules can allow the deletion of vowels. The whole grammar can be taken as the composition of the two grammars: e.g. {cvcvc},{ktb},{aa} --+ /ktab/-~ [ktab, ktb]. 
4.3 Morphosyntactic Issues Finite-state models of two-level morphology im- plement morphotactics in two ways: using 'con- tinuation patterns/classes' (Koskenniemi, 1983; Antworth, 1990; Karttunen, 1993) or unification- based grammars (Bear, 1986; Ritchie et al., 1992). The former fails to provide elegant morphosyntactic parsing for Semitic languages, as will be illustrated in this section. 4.3.1 Stems and X-Theory A pattern, a root and a vocalism do not alway produce a free stem which can stand on its own. In Syriac, for example, some verbal forms are bound: they require a stem morpheme which indicates the measure in question, e.g. the prefix {~a} for a/'el 1°Note, however, that the expand command does not insert [~ randomly in context expressions. 163 tl_rule(RT, [[], [], []], [[v], [], [V]], [[c3,b,e], [], []], <=>, [], [], [], [vowel(V)], [[], [], []]). tl_rule(K8, [], [Vl], [C,V2], <=>, [], [], [], [vowel (Vl), vowel (V2), radical (C) ], [ [], [], [] ] ). Listing 7 synrule(rulel, synrule(rule2, synrule(rule3, synrule(rule4, synrule(rule5, synrule(rule6, synrule(rule7, synrule(rule8, stem: [X=-2, measure=M, measure=p' al I pa' ' el], [pattern: [], root : [measure=M,measure=p' al I pa' ' el], vocalism: [measure=M, measure=p' al ]pa' ' el] ]). stem: [X=-2,measure=M], [stem_affix: [measure=M], pattern: [], root: [measure=M], vocalism: [measure=M]]). stem: IX =- i, measure=M, mood=act], [st em: [bar= - 2, measure=M, mood=act ] ]). st em: IX=- I, measure=M, mood=pas s], [reflexive:[], stem: [X=-2,measure=S,mood=pass]]). st em: [X=O, measure=M, mood=MD, npg=s~3&m], [stem: IX=-1 ,measure=S,mood=MD] ]). stem: [X=O, measure=M ,mood=MD ,npg=NPG], [stem: IX=-1 ,measure=M ,mood=MD], vim: [type=surf, circum=no ,npg=NPG] ]). st em: IX=O, measure=M, mood=MD, npg=NPG], [vim: [t ype=pref, cir cure=no, npg=NPG], st em: [X=- I, measure=M, mood=MD] ]). stem: [X=O, measure=M ,mood=MD ,npg=NPG], [vim: [type=pref, circum=yes ,npg=NPG], stem: IX=-1 ,measure=M ,mood=MD], vim: [type=suf f, circum=yes, npg=NPG] ]). Listing 8 stems. Additionally, passive forms are marked by the reflexive morpheme {yet}, while active forms are not marked at all. This structure of stems can be handled hierarchi- cally using X-theory. A stem whose stem morpheme is known is assigned X=-2 (Rules 1-2 in Listing 8). Rules which indicate mood can apply only to stems whose measure has been identified (i.e. they have X=-2). The resulting stems are assigned X=-I (Rules 3-4 in Listing 8). The parsing of Syriac /~etkteb/ (from {~et}+/kateb/after the deletion of/a/by R4) appears in (5). n (5) reflexive sty2] Yet pattern root vocalism J J J cvcvc ktb ae Now free stems which may stand on their own can be assigned X=0. However, some stems require nIn the remaining examples, it is assumed that the lexicon and two-level rules are expanded to cater for the new material. verbal inflectional markers. 4.3.2 Verbal Inflectional Markers With respect to verbal inflexional markers (VIMs), there are various types of Semitic verbs: those which do not require a VIM (e.g. sing. 3rd masc.), and those which require a VIM in the form of a prefix (e.g. perfect), suffix (e.g. some imperfect forms), or circumfix (e.g. other imperfect forms). Each VIM is lexically marked inter alia with two features: 'type' which states whether it is a prefix or a suffix, and 'circum' which denotes whether it is a circumfix. Rules 5-8 (Listing 8) handle this. The parsing of Syriac /netkatbun/ (from {ne}+ {~et)+/katab/+{un}) appears in (6). 
(6) [Parse tree for /netkatbun/: a stem of bar level 0 dominating the prefix vim "ne", a stem of bar level -1, and the suffix vim "un"; the bar level -1 stem dominates the reflexive "yet" and a bar level -2 stem consisting of pattern cvcvc, root ktb and vocalism aa. Only these labels are recoverable from the source figure.]

Verb Class      Inflections Analysed   1st Analysis (sec/word)   Subsequent Analysis (sec/word)   Mean (sec/word)
Strong          78                     5.053                     0.028                            2.539
Initial nun     52                     6.756                     0.048                            3.404
Initial alaph   57                     4.379                     0.077                            2.228
Middle alaph    67                     5.107                     0.061                            2.584
Overall mean    63.5                   5.324                     0.054                            2.689

Table 1

(Beesley et al., 1989) handle this problem by finding a logical expression for the prefix and suffix portions of circumfix morphemes, and use unification to generate only the correct forms - see (Sproat, 1992, p. 158). This approach, however, cannot be used here since, unlike Arabic, not all Syriac VIMs are in the form of circumfixes.

4.3.3 Interfacing with a Syntactic Parser

A Semitic 'word' (string separated by word boundary) may in fact be a clause or a sentence. Therefore, a morphosyntactic parsing of a 'word' may be a (partial) syntactic parsing of a sentence in the form of a (partial) tree. The output of a morphological analyser can be structured in a manner suitable for syntactic processing. Using tree-adjoining grammars (Joshi, 1985) might be a possibility.

5 Performance

To test the integrity, robustness and performance of the implementation, a two-level grammar of the most frequent words in the Syriac New Testament was compiled based on the data in (Kiraz, 1994b). The grammar covers most classes of verbal and nominal forms, in addition to prepositions, proper nouns and words of Greek origin. A wider coverage would involve enlarging the lexicon (currently there are 165 entries) and might triple the number of two-level rules (currently there are c. 50 rules).

Table 1 provides the results of analysing verbal classes. The test for each class represents analysing most of its inflexions. The test was executed on a Sparc ELC computer.

By constructing a corpus which consists only of the most frequent words, one can estimate the performance of analysing the corpus as follows,

   p = (5.324 n + Σ_{i=1..n} 0.054 (f_i - 1)) / Σ_{i=1..n} f_i   sec/word

where n is the number of distinct words in the corpus and f_i is the frequency of occurrence of the ith word. The SEDRA database (Kiraz, 1994a) provides such data. All occurrences of the 100 most frequent lexemes in their various inflections (a total of 72,240 occurrences) can be analysed at the rate of 16.35 words/sec. (Performance will be less if additional rules are added for larger coverage.)

The results may not seem satisfactory when compared with other Prolog implementations of the same formalism (cf. 50 words/sec in (Carter, 1995)). One should, however, keep in mind the complexity of Syriac morphology. In addition to morphological non-linearity, phonological conditional changes - consonantal and vocalic - occur in all stems, and it is not unusual to have more than five such changes per word. Once developed, a grammar is usually compiled into automata which provides better performance.

6 Conclusion

This paper has presented a computational morphology system which is adequate for handling non-linear grammars. We are currently expanding the grammar to cover the whole of New Testament Syriac. One of our future goals is to optimise the Prolog implementation for speedy processing and to add debugging facilities along the lines of (Carter, 1995). For useful results, a Semitic morphological analyser needs to interact with a syntactic parser in order to resolve ambiguities.
Most non-vocalised strings give more than one solution, and some inflectional forms are homographs even if fully vocalised (e.g. in Syriac imperfect verbs: sing. 3rd masc. = plural 1st common, and sing. 3rd fern. = sing. 2nd masc.). We mentioned earlier the possibility of using TAGs. References Aho, A. and Ullman, J. (1977). Principles of Com- piler Design. Addison-Wesley. Antworth, E. (1990). PC-KIMMO: A two-Level Processor for Morphological Analysis. Occasional Publications in Academic Computing 16. Summer Institute of Linguistics, Dallas. Bear, J. (1986). A morphological recognizer with syntactic and phonological rules. In COLING-86, pages 272-6. 165 Beesley, K. (1990). Finite-state description of Ara- bic morphology. In Proceedings of the Second Cambridge Conference: Bilingual Computing in Arabic and English. Beesley, K. (1991). Computer analysis of Arabic morphology. In Comrie, B. and Eid, M., edi- tors, Perspectives on Arabic Linguistics III: Pa- pers from the Third Annual Symposium on Arabic Linguistics. Benjamins, Amsterdam. Beesley, K., Buckwalter, T., and Newton, S. (1989). Two-level finite-state analysis of Arabic morphol- ogy. In Proceedings of the Seminar on Bilingual Computing in Arabic and English. The Literary and Linguistic Computing Centre, Cambridge. Bird, S. and Ellison, T. (1994). One-level phonology. Computational Linguistics, 20(1):55-90. Carter, D. (1995). Rapid development of morpho- logical descriptions for full language processing systems. In EACL-95, pages 202-9. Goldsmith, J. (1976). Autosegmental Phonology. PhD thesis, MIT. Published as Autosegmental and Metrical Phonology, Oxford 1990. Grimley-Evans, E., Kiraz, G., and Pulman, S. (1996). Compiling a partition-based two-level for- malism. In COLING-96. Forthcoming. Joshi, A. (1985). Tree-adjoining grammars: How much context sensitivity is required to provide reasonable structural descriptions. In Dowty, D., Karttunen, L., and Zwicky, A., editors, Natural Language Parsing. Cambridge University Press. Karttunen, L. (1983). phological processor. 22:165-86. Kimmo: A general mor- Texas Linguistic Forum, Karttunen, L. (1993). Finite-state lexicon compiler. Technical report, Palo Alto Research Center, Xe- rox Corporation. Karttunen, L. and Beesley, K. (1992). Two-level rule compiler. Technical report, Palo Alto Research Center, Xerox Corporation. Kataja, L. and Koskenniemi, K. (1988). Finite state description of Semitic morphology. In COLING- 88, volume 1, pages 313-15. Kay, M. (1987). Nonconcatenative finite-state mor- phology. In EACL-87, pages 2-10. Kiraz, G. (1994a). Automatic concordance genera- tion of Syriac texts. In Lavenant, R., editor, VI Symposium Syriaeum 1992, Orientalia Christiana Analecta 247, pages 461-75. Pontificio Institutum Studiorum Orientalium. Kiraz, G. (1994b). Lexical Tools to the Syriac New Testament. JSOT Manuals 7. Sheffield Academic Press. Kiraz, G. (1994c). Multi-tape two-level morphology: a case study in Semitic non-linear morphology. In COLING-94, volume 1, pages 180-6. Kiraz, G. (1995). Introduction to Syriae Spirantiza- tion. Bar Hebraeus Verlag, The Netherlands. Kiraz, G. (1996). Computational Approach to Non- Linear Morphology. PhD thesis, University of Cambridge. Knuth, D. (1973). The Art of Computer Program- ming, volume 3. Addison-Wesley. Kornai, A. (1991). Formal Phonology. PhD thesis, Stanford University. Koskenniemi, K. (1983). Two-Level Morphology. PhD thesis, University of Helsinki. Lavie, A., Itai, A., and Ornan, U. (1990). 
On the applicability of two level morphology to the inflection of Hebrew verbs. In Choueka, Y., editor, Literary and Linguistic Computing 1988: Proceedings of the 15th International Conference, pages 246-60.
McCarthy, J. (1981). A prosodic theory of non-concatenative morphology. Linguistic Inquiry, 12(3):373-418.
Narayanan, A. and Hashem, L. (1993). On abstract finite-state morphology. In EACL-93, pages 297-304.
Pulman, S. and Hepple, M. (1993). A feature-based formalism for two-level phonology: a description and implementation. Computer Speech and Language, 7:333-58.
Ritchie, G., Black, A., Russell, G., and Pulman, S. (1992). Computational Morphology: Practical Mechanisms for the English Lexicon. MIT Press, Cambridge, Mass.
Ruessink, H. (1989). Two level formalisms. Technical Report 5, Utrecht Working Papers in NLP.
Shieber, S. (1986). An Introduction to Unification-Based Approaches to Grammar. CSLI Lecture Notes Number 4. Center for the Study of Language and Information, Stanford.
Sproat, R. (1992). Morphology and Computation. MIT Press, Cambridge, Mass.
Wiebe, B. (1992). Modelling autosegmental phonology with multi-tape finite state transducers. Master's thesis, Simon Fraser University.
INVITED TALK Head Automata and Bilingual Tiling: Translation with Minimal Representations Hiyan Alshawi AT&T Research 600 Mountain Avenue, Murray Hill, NJ 07974, USA [email protected] Abstract We present a language model consisting of a collection of costed bidirectional finite state automata associated with the head words of phrases. The model is suitable for incremental application of lexical asso- ciations in a dynamic programming search for optimal dependency tree derivations. We also present a model and algorithm for machine translation involving optimal "tiling" of a dependency tree with entries of a costed bilingual lexicon. Experimen- tal results are reported comparing methods for assigning cost functions to these mod- els. We conclude with a discussion of the adequacy of annotated linguistic strings as representations for machine translation. 1 Introduction Until the advent of statistical methods in the main- stream of natural language processing, syntactic and semantic representations were becoming pro- gressively more complex. This trend is now revers- ing itself, in part because statistical methods re- duce the burden of detailed modeling required by constraint-based grammars, and in part because sta- tistical models for converting natural language into complex syntactic or semantic representations is not well understood at present. At the same time, lex- ically centered views of language have continued to increase in popularity. We can see this in lexical- ized grammatical theories, head-driven parsing and generation, and statistical disambiguation based on lexical associations. These themes -- simple representations, statisti- cal modeling, and lexicalism -- form the basis for the models and algorithms described in the bulk of this paper. The primary purpose is to build effec- tive mechanisms for machine translation, the oldest and still the most commonplace application of non- superficial natural language processing. A secondary motivation is to test the extent to which a non-trivial language processing task can be carried out without complex semantic representations. In Section 2 we present reversible mono-lingual models consisting of collections of simple automata associated with the heads of phrases. These head automata are applied by an algorithm with admissi- ble incremental pruning based on semantic associa- tion costs, providing a practical solution to the prob- lem of combinatoric disambiguation (Church and Patil 1982). The model is intended to combine the lexical sensitivity of N-gram models (Jelinek et al. 1992) and the structural properties of statistical con- text free grammars (Booth 1969) without the com- putational overhead of statistical lexicalized tree- adjoining grammars (Schabes 1992, Resnik 1992). For translation, we use a model for mapping de- pendency graphs written by the source language head automata. This model is coded entirely as a bilingual lexicon, with associated cost parame- ters. The transfer algorithm described in Section 4 searches for the lowest cost 'tiling' of the target dependency graph with entries from the bilingual lexicon. Dynamic programming is again used to make exhaustive search tractable, avoiding the com- binatoric explosion of shake-and-bake translation (Whitelock 1992, Brew 1992). 
In Section 5 we present a general framework for as- sociating costs with the solutions of search processes, pointing out some benefits of cost functions other than log likelihood, including an error-minimization cost function for unsupervised training of the pa- rameters in our translation application. Section 6 briefly describes an English-Chinese translator em- ploying the models and algorithms. We also present experimental results comparing the performance of different cost assignment methods. Finally, we return to the more general discussion of representations for machine translation and other natural language processing tasks, arguing the case for simple representations close to natural language itself. 2 Head Automata Language Models 2.1 Lexieal and Dependency Parameters Head automata mono-lingual language models con- sist of a lexicon, in which each entry is a pair (w, m) of a word w from a vocabulary V and a head au- tomaton m (defined below), and a parameter table giving an assignment of costs to events in a genera- tive process involving the automata. 167 We first describe the model in terms of the familiar paradigm of a generative statistical model, present- ing the parameters as conditional probabilities. This gives us a stochastic version of dependency grammar (Hudson 1984). Each derivation in the generative statistical model produces an ordered dependency tree, that is, a tree in which nodes dominate ordered sequences of left and right subtrees and in which the nodes have la- bels taken from the vocabulary V and the arcs have labels taken from a set R of relation symbols. When a node with label w immediately dominates a node with label w' via an arc with label r, we say that w' is an r-dependent of the head w. The interpre- tation of this directed arc is that relation r holds between particular instances of w and w'. (A word may have several or no r-dependents for a particular relation r.) A recursive left-parent-right traversal of the nodes of an ordered dependency tree for a deriva- tion yields the word string for the derivation. A head automaton m of a lexical entry (w, m) de- fines possible ordered local trees immediately dom- inated by w in derivations. Model parameters for head automata, together with dependency parame- ters and lexical parameters, give a probability dis- tribution for derivations. A dependency parameter P( L w'lw, r') is the probability, given a head w with a dependent arc with label r', that w' is the r'-dependent for this arc. A lexical parameter P(m, qlr, t, w) is the probability that a local tree immediately dom- inated by an r-dependent w is derived by starting in state q of some automaton m in a lexieal entry (w, m). The model also includes lexieal parameters P(w,m, qlt>) for the probability that w is the head word for an entire derivation initiated from state q of automaton m. 2.2 Head Automata A head automaton is a weighted finite state machine that writes (or accepts) a pair of sequences of rela- tion symbols from R: ((rl... r,)). These correspond to the relations between a head word and the sequences of dependent phrases to its left and right (see Figure 1). The machine consists of a finite set q0, • • ", qs of states and an action ta- ble specifying the finite cost (non-zero probability) actions the automaton can undergo. There are three types of action for an automaton m: left transitions, right transitions, and stop ac- tions. These actions, together with associated prob- abilistic model parameters, are as follows. 
[Figure 1: head automaton m scans left and right sequences of relations ri for the dependents w1 ... wk (to the left) and wk+1 ... wn (to the right) of the head word w.]

• Left transition: if in state qi-1, m can write a symbol r onto the right end of the current left sequence and enter state qi with probability P(←, qi, r | qi-1, m).

• Right transition: if in state qi-1, m can write a symbol r onto the left end of the current right sequence and enter state qi with probability P(→, qi, r | qi-1, m).

• Stop: if in state q, m can stop with probability P(stop | q, m), at which point the sequences are considered complete.

For a consistent probabilistic model, the probabilities of all transitions and stop actions from a state q must sum to unity. Any state of a head automaton can be an initial state, the probability of a particular initial state in a derivation being specified by lexical parameters. A derivation of a pair of symbol sequences thus corresponds to the selection of an initial state, a sequence of zero or more transitions (writing the symbols), and a stop action. The probability, given an initial state q, that automaton m will generate a pair of sequences, i.e.

   P((r1 ... rk), (rk+1 ... rn) | m, q),

is the product of the probabilities of the actions taken to generate the sequences. The case of zero transitions will yield empty sequences, corresponding to a leaf node of the dependency tree.

From a linguistic perspective, head automata allow for a compact, graded notion of lexical subcategorization (Gazdar et al. 1985) and the linear order of a head and its dependent phrases. Lexical parameters can control the saturation of a lexical item (for example a verb that is both transitive and intransitive) by starting the same automaton in different states. Head automata can also be used to code a grammar in which states of an automaton for word w correspond to X-bar levels (Jackendoff 1977) for phrases headed by w.

Head automata are formally more powerful than finite state automata that accept regular languages in the following sense. Each head automaton defines a formal language with alphabet R whose strings are the concatenation of the left and right sequence pairs
In the translation application we search for the high- est probability derivation (or more generally, the N- highest probability derivations). For other purposes, the probability of strings may be of more interest. The probability of a string according to the model is the sum of the probabilities of derivations of ordered dependency trees yielding the string. In practice, the number of parameters in a head automaton language model is dominated by the de- pendency parameters, that is, O(]V]2]RI) parame- ters. This puts the size of the model somewhere in between 2-gram and 3-gram model. The similarly motivated link grammar model (Lafferty, Sleator and Temperley 1992) has O([VI 3) parameters. Un- like simple N-gram models, head automata models yield an interesting distribution of sentence lengths. For example, the average sentence length for Monte- Carlo generation with our probabilistic head au- tomata model for ATIS was 10.6 words (the average was 9.7 words for the corpus it was trained on). 3 Analysis and Generation 3.1 Analysis Head automaton models admit efficient lexically driven analysis (parsing) algorithms in which par- tial analyses are costed incrementally as they are constructed. Put in terms of the traditional parsing issues in natural language understanding, "seman- tic" associations coded as dependency parameters are applied at each parsing step allowing semanti- cally suboptimal analyses to be eliminated, so the analysis with the best semantic score can be identi- fied without scoring an exponential number of syn- tactic parses. Since the model is lexical, linguistic constructions headed by lexical items not present in the input are not involved in the search the way they are with typical top-down or predictive parsing strategies. We will sketch an algorithm for finding the lowest cost ordered dependency tree derivation for an input string in polynomial time in the length of the string. In our experimental system we use a more general version of the algorithm to allow input in the form of word lattices. The algorithm is a bottom-up tabular parser (Younger 1967, Early 1970) in which constituents are constructed "head-outwards" (Kay 1989, Sata and Stock 1989). Since we are analyzing bottom- up with generative model automata, the algorithm 'runs' the automata backwards. Edges in the parsing lattice (or "chart") are tuples representing partial or complete phrases headed by a word w from position i to position j in the string: (w,t,i,j,m,q,c). Here m is the head automaton for w in this deriva- tion; the automaton is in state q; t is the dependency tree constructed so far, and c is the cost of the par- tial derivation. We will use the notation C(zly ) for the cost of a model event with probability P(zIy); the assignment of costs to events is discussed in Sec- tion 5. Initialization: For each word w in the input be- tween positions i and j, the lattice is initialized with phrases {w,{},i,j,m,q$,c$) for any lexical entry (w, m) and any final state q! of the automaton m in the entry. A final state is one for which the stop action cost c! = C(DJq!, m) is finite. Transitions: Phrases are combined bottom-up to form progressively larger phrases. There are two types of combination corresponding to left and right transitions of the automaton for the word acting as the head in the combination. We will specify left combination; right combination is the mirror im- age of left combination. 
If the lattice contains two phrases abutting at position k in the string: 169 (Wl, tl, i, k, ml, ql, Cl) (W2, t2, k, j, ra2, q2, c2), and the parameter table contains the following finite costs parameters (a left v-transition of m2, a lexical parameter for wl, and an r-dependency parameter): c3 = C(~---, q2, rlq~, m2) c4 = C(ml, qiir, ~, Wx) c5 = C(l, wllw2, r), then build a new phrase headed by w2 with a tree t~ formed by adding tl to t~ as an r-dependent of w2: (w2, t~, i, j, m2, q~, cl + c2 + c3 + c4 -4- cs). When no more combinations are possible, for each phrase spanning the entire input we add the appro- priate start of derivation cost to these phrases and select the one with the lowest total cost. Pruning: The dynamic programming condition for pruning suboptimal partial analyses is as follows. Whenever there are two phrases p: (w,t,i,j,m,q,c) p' = (w, t', i, j, m, q, c'), and c ~ is greater than c, then we can remove p~ be- cause for any derivation involving p~ that spans the entire string, there will be a lower cost derivation involving p. This pruning condition is effective at curbing a combinatorial explosion arising from, for example, prepositional phrase attachment ambigui- ties (coded in the alternative trees t and t'). The worst case asymptotic time complexity of the analysis algorithm is O(min(n 2, IY12)n3), where n is the length of an input string and IVI is the size of the vocabulary. This limit can be derived in a simi- lar way to cubic time tabular recognition algorithms for context free grammars (Younger 1967) with the grammar related term being replaced by the term min(n 2, IVI 2) since the words of the input sentence also act as categories in the head automata model. In this context "recognition" refers to checking that the input string can be generated from the grammar. Note that our algorithm is for analysis (in the sense of finding the best derivation) which, in general, is a higher time complexity problem than recognition. 3.2 Generation By generation here we mean determining the low- est cost linear surface ordering for the dependents of each word in an unordered dependency structure re- sulting from the transfer mapping described in Sec- tion 4. In general, the output of transfer is a de- pendency graph and the task of the generator in- volves a search for a backbone dependency tree for the graph, if necessary by adding dependency edges to join up unconnected components of the graph. For each graph component, the main steps of the search process, described non-deterministically, are 1. Select a node with word label w having a finite start of derivation cost C(w, m, ql t>). 2. Execute a path through the head automaton m starting at state q and ending at state q' with a finite stop action cost C(Olq' , m). When mak- ing a transition with relation ri in the path, se- lect a graph edge with label ri from w to some previously unvisited node wi with finite depen- dency cost C(~,wilw, ri). Include the cost of the transition (e.g. C(---% ql, rilqi-1, m)) in the running total for this derivation. 3. For each dependent node wi, select a lexical en- try with cost C(mi, qilri, J., wi), and recursively apply the machine rni from state ql as in step 2. 4. Perform a left-parent-right traversal of the nodes of the resulting dependency tree, yield- ing a target string. The target string resulting from the lowest cost tree that includes all nodes in the graph is selected as the translation target string. 
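To make the bookkeeping of Section 3.1 concrete, here is a minimal sketch in Python of the left-combination step and the dynamic-programming pruning condition. It is not the authors' implementation: the Edge fields, the dictionary-based parameter tables (left_trans, lex_cost, dep_cost) and all identifier names are assumptions made for this illustration, and costs are treated simply as numbers to be added.

   from dataclasses import dataclass

   @dataclass
   class Edge:                      # a chart phrase (w, t, i, j, m, q, c)
       word: str
       tree: tuple
       i: int
       j: int
       m: str                       # head automaton
       q: str                       # automaton state
       cost: float

   def combine_left(dep, head, left_trans, lex_cost, dep_cost):
       """Attach phrase `dep` as an r-dependent of the abutting head phrase `head`."""
       results = []
       for (r, q_new), c3 in left_trans.get((head.m, head.q), {}).items():
           c4 = lex_cost.get((dep.m, dep.q, r, dep.word))
           c5 = dep_cost.get((head.word, r, dep.word))
           if c4 is None or c5 is None:
               continue                                 # no finite-cost parameters
           tree = head.tree + ((r, dep.tree),)          # add t1 to t2 as an r-dependent
           results.append(Edge(head.word, tree, dep.i, head.j, head.m, q_new,
                               dep.cost + head.cost + c3 + c4 + c5))
       return results

   def add_edge(chart, e):
       """Keep only the lowest-cost edge for each (w, i, j, m, q) signature."""
       key = (e.word, e.i, e.j, e.m, e.q)
       if key not in chart or e.cost < chart[key].cost:
           chart[key] = e
           return True
       return False

The pruning in add_edge is the condition stated above: two phrases that agree on head word, span, automaton and state are interchangeable in any larger derivation, so only the cheaper one needs to be kept.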
The independence assump- tions implicit in head automata models mean that we can select lowest cost orderings of local depen- dency trees, below a given relation r, independently in the search for the lowest cost derivation. When the generator is used as part of the trans- lation system, the dependency parameter costs are not, in fact, applied by the generator. Instead, be- cause these parameters are independent of surface order, they are applied earlier by the transfer com- ponent, influencing the choice of structure passed to the generator. 4 Transfer Maps 4.1 Transfer Model Bilingual Lexicon The transfer model defines possible mappings, with associated costs, of dependency trees with source- language word node labels into ones with target- language word labels. Unlike the head automata monolingual models, the transfer model operates with unordered dependency trees, that is, it treats the dependents of a word as an unordered bag. The model is general enough to cover the common trans- lation problems discussed in the literature (e.g. Lin- dop and Tsujii 1991 and Dorr 1994) including many- to-many word mapping, argument switching, and head switching. A transfer model consists of a bilingual lexicon and a transfer parameter table. The model uses de- pendency tree fragments, which are the same as un- ordered dependency trees except that some nodes may not have word labels. In the bilingual lexicon, an entry for a source word wi (see top portion of Figure 2) has the form (wi, Hi, hi, Gi, fi) where Hi is a source language tree fragment, ni (the primary node) is a distinguished node of Hi with label wi, Gi is a target tree fragment, and fi is a 170 mapping function, i.e. a (possibly partial) function from the nodes of Hi to the nodes of Gi. The transfer parameter table specifies costs for the application of transfer entries. In a context- independent model, each entry has a single cost pa- rameter. In context-dependent transfer models, the cost function takes into account the identities of the labels of the arcs and nodes dominating wi in the source graph. (Context dependence is discussed fur- ther in Section 5.) The set of transfer parameters may also include costs for the null transfer entries for wi, for use in derivations in which wi is trans- lated by the entry for another word v. For example, the entry for v might be for translating an idiom involving wi as a modifier. Each entry in the bilingual lexicon specifies a way of mapping part of a dependency tree, specifi- cally that part "matching" (as explained below) the source fragment of the entry, into part of a target graph, as indicated by the target fragment. Entry mapping functions specify how the set of target frag- ments for deriving a translation are to be combined: whenever an entry is applied, a global node-mapping function is extended to include the entry mapping function. 4.2 Matching, Tiling, and Derivation Transfer mapping takes a source dependency tree S from analysis and produces a minimum cost deriva- tion of a target graph T and a (possibly partial) function f from source nodes to target nodes. In fact, the transfer model is applicable to certain types of source dependency graphs that are more general than trees, although the version of the head au- tomata model described here only produces trees. 
We will say that a tree fragment H matches an unordered dependency tree S if there is a function g (a matching function) from the nodes of H to the nodes of S such that • g is a total one-one function; • if a node n of H has a label, and that label is word w, then the word label for g(n) is also w; • for every arc in H with label r from node nl to node n2, there is an arc with label r from g(nz) to g(n2). Unlike first order unification, this definition of matching is not commutative and is not determinis- tic in that there may be multiple matching functions for applying a bilingual entry to an input source tree. A particular match of an entry against a dependency tree can be represented by the matching function g, a set of arcs A in S, and the (possibly context de- pendent) cost c of applying the entry. A tiling of a source graph with respect to a transfer model is a set of entry matches {(El, gz, A1, cl), • • ", (E~, gk, At, ck)} which is such that gi Figure 2: Transfer matching and mapping functions • k is the number of nodes in the source tree S. • Each Ei, 1 < i ~ k, is a bilingual entry (wi, Hi, hi, Gi, fil matching S with function gi (see Figure 2) and arcs Ai. • For primary nodes nl and nj of two distinct entries Ei and Ej, gi(ni) and gi(nj) are distinct. • The sets of edges Ai form a partition of the edges of S. • The images gi(Li) form a partition of the nodes of S, where Li is the set of labeled source nodes in the source fragment Hi of Ei. • ci is the cost of the match specified by the pa- rameter table. A tiling of S yields a costed derivation of a target dependency graph T as follows: • The cost of the derivation is the sum of the costs ci for each match in the tiling. • The nodes and arcs of T are composed of the nodes and arcs of the target fragments Gi for the entries Ei. • Let fi and fj be the mapping functions for en- tries Ei and Ej. For any node n of S for which target nodes fi(g[l(n)) and fj(g~l(n)) are de- fined, these two nodes are identified as a single node f(n) in T. The merging of target fragment nodes in the last condition has the effect of joining the target frag- ments in a consistent fashion. The node mapping function f for the entire tree thus has a different role from the alignment function in the IBM statis- tical translation model (Brown et al. 1990, 1993); the role of the latter includes the linear ordering of words in the target string. In our approach, tar- get word order is handled exclusively by the target monolingual model. 4.3 Transfer Algorithm The main transfer search is preceded by a bilingual lexicon matching phase. This leads to greater ef- ficiency as it avoids repeating matching operations 171 during the search phase, and it allows a static analy- sis of the matching entries and source tree to identify subtrees for which the search phase can safely prune out suboptimal partial translations. Transfer Configurations In order to apply tar- get language model relation costs incrementally, we need to distinguish between complete and incom- plete arcs: an arc is complete if both its nodes have labels, otherwise it is incomplete. 
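Before turning to the search itself, the matching relation of Section 4.2 can be sketched as follows. This is an illustrative Python rendering under our own encoding assumptions (a node is a pair of an optional word label and a list of (relation, child) arcs); it returns a single matching function greedily, whereas the definition above is non-deterministic, and a full implementation would backtrack over alternative child assignments.

   def matches(fragment, tree):
       """Return a node mapping g (fragment node id -> tree node id), or None."""
       def match_node(f, t, g):
           f_label, f_arcs = f
           t_label, t_arcs = t
           if f_label is not None and f_label != t_label:
               return None                    # labelled fragment nodes must agree
           g = dict(g)
           g[id(f)] = id(t)                   # extend the (one-one) mapping
           used = set()                       # tree arcs already consumed
           for rel, f_child in f_arcs:
               extended = None
               for k, (t_rel, t_child) in enumerate(t_arcs):
                   if k in used or t_rel != rel:
                       continue
                   extended = match_node(f_child, t_child, g)
                   if extended is not None:
                       used.add(k)
                       g = extended
                       break
               if extended is None:
                   return None                # some fragment arc has no image
           return g
       return match_node(fragment, tree, {})

For example, the fragment (None, [("obj", ("book", []))]), whose root is unlabelled, matches any dependency tree whose root has an obj dependent labelled "book", while leaving any further arcs of that tree untouched.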
The output of the lexicon matching phrase, and the partial derivations manipulated by the search phase are both in the form of transfer configurations (S,R,T,P,f,c,I) where S is the set of source nodes and arcs con- sumed so far in the derivation, R the remaining source nodes and arcs, f the mapping function built so far, T the set of nodes and complete arcs of the target graph, P the set of incomplete target arcs, c the partial derivation cost, and I a set of source nodes for which entries have yet to be applied. Lexical matching phase The algorithm for lexi- cal matching has a similar control structure to stan- dard unification algorithms, except that it can result in multiple matches. We omit the details. The lex- icon matching phase returns, for each source node i, a set of runtime entries. There is one runtime entry for each successful match and possibly a null entry for the node if the word label for i is included in successful matches for other entries. Runtime en- tries are transfer configurations of the form (Hi, ¢, Gi, Pi, fi, ci, {i}) in which Hi is the source fragment for the entry with each node replaced by its image under the applica- ble matching function; Gi the target fragment for the entry, except for the incomplete arcs Pi of this fragment; fi the composition of mapping function for the entry with the inverse of the matching func- tion; ci the cost of applying the entry in the context of its match with the source graph plus the cost in the target model of the arcs in Gi. Transfer Search Before the transfer search proper, the resulting runtime entries together with the source graph are analyzed to determine decom- position nodes. A decomposition node n is a source tree node for which it is safe to prune suboptimal translations of the subtree dominated by n. Specifi- cally, it is checked that n is the root node of all source fragments Hn of runtime entries in which both n and its node label are included, and that fn(n) is not dominated by (i.e. not reachable via directed arcs from) another node in the target graph Gn of such entries. Transfer search maintains a set M of active run- time entries. InitiMly, this is the set of runtime entries resulting from the lexicon matching phase. Overall search control is as follows: 1. Determine the set of decomposition nodes. 2. Sort the decomposition nodes into a list D such that if nl dominates n2 in S then n2 precedes nl in D. 3. If D is empty, apply the subtree transfer search (given below) to S, return the lowest cost solu- tion, and stop. 4. Remove the first decomposition node n from D and apply the subtree transfer search to the sub- tree S ~ dominated by n, to yield solutions (s', ¢, T', ¢, f', c', ¢). 5. Partition these solutions into subsets with the same word label for the node fl(n), and select the solution with lowest cost c' from each sub- set. 6. Remove from M the set of runtime entries for nodes in S ~. 7. For each selected subtree solution, add to M a new runtime entry (S', ¢, T', f', c', {n}). 8. Repeat from step 3. The subtree transfer search maintains a queue Q of configurations corresponding to partial deriva- tions for translating the subtree. Control follows a standard non-deterministic search paradigm: 1. Initialize Q to contain a single configuration (¢, R0, ¢, ¢, ¢, 0, I0) with the input subtree R0 and the set of nodes I0 in R0. 2. If Q is empty, return the lowest cost solution found and stop. 3. Remove a configuration iS, R, T, P, f, c, I) from the queue. 4. 
If R is empty, add the configuration to the set of subtree solutions. 5. Select a node i from I. 6. For each runtime entry (Hi, ¢, Gi, Pi, fi, cl, {i}) for i, if Hi is a subgraph of R, add to Q a con- figuration iS 0 Hi, R - Hi, T O Gi 0 G', P U Pi - G', fO fi, c +ci +cv, , I--{ i} ), where G' is the set of newly completed arcs (those in P t3 Pi with both node labels in T U Gi O P 0 Pi) and cg, is the cost of the arcs G' in the target language model. 7. For any source node n for which f(n) and fi(n) are both defined, merge these two target nodes. 8. Repeat from step 2. Keeping the arcs P separate in the configuration al- lows efficient incremental application of target de- pendency costs cv, during the search, so these costs are taken into account in the pruning step of the overall search control. This way we can keep the benefits of monolingual/bilingual modularity (Is- abelle and Macklovitch 1986) without the compu- tationM overhead of transfer-and-filter (Alshawi et al. 1992). 172 It is possible to apply the subtree search directly to the whole graph starting with the initial runtime entries from lexical matching. However, this would result in an exponential search, specifically a search tree with a branching factor of the order of the num- ber of matching entries per input word. Fortunately, long sentences typically have several decomposition nodes, such as the heads of noun phrases, so the search as described is factored into manageable com- ponents. 5 Cost Functions 5.1 Costed Search Processes The head automata model and transfer model were originally conceived as probabilistic models. In order to take advantage of more of the information avail- able in our training data, we experimented with cost functions that make use of incorrect translations as negative examples and also to treat the correctness of a translation hypothesis as a matter of degree. To experiment with different models, we imple- mented a general mechanism for associating costs to solutions of a search process. Here, a search process is conceptualized as a non-deterministic computa- tion that takes a single input string, undergoes a sequence of state transitions in a non-deterministic fashion, then outputs a solution string. Process states are distinct from, but may include, head au- tomaton states. A cost function for a search process is a real val- ued function defined on a pair of equivalence classes of process states. The first element of the pair, a context c, is an equivalence class of states before transitions. The second element, an event e, is an equivalence class of states after transitions. (The equivalence relations for contexts and events may be different.) We refer to an event-context pair as a choice, for which we use the notation (efc) borrowed from the special case of conditional prob- abilities. The cost of a derivation of a solution by the process is taken to be the sum of costs of choices involved in the derivation. We represent events and contexts by finite se- quences of symbols (typically words or relation sym- bols in the translation application). We write C(al'"anlbl'"bk) for the cost of the event represented by (al ..-a,~) in the context represented by(b1 ..-bk). "Backed off" costs can be computed by averag- ing over larger equivalence classes (represented by shorter sequences in which positions are eliminated systematically). A similar smoothing technique has been applied to the specific case of prepositional phrase attachment by Collins and Brooks (1995). 
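As a rough illustration of the queue-based control structure just described (and only of that; this is not the system's code), the following Python sketch treats a runtime entry simply as the set of source arcs and nodes it consumes, a target fragment, and a cost. The bookkeeping for incomplete target arcs and the merging of target nodes (steps 6-7 above) is deliberately omitted, and all names are assumptions made for the illustration.

   import heapq, itertools

   def subtree_transfer(source_arcs, source_nodes, runtime_entries):
       """runtime_entries: {node: [(arcs, nodes, target_fragment, cost), ...]}."""
       counter = itertools.count()                      # tie-breaker for the heap
       queue = [(0.0, next(counter), frozenset(source_arcs),
                 frozenset(source_nodes), ())]
       best = None
       while queue:
           cost, _, rem_arcs, rem_nodes, target = heapq.heappop(queue)
           if not rem_nodes and not rem_arcs:           # R empty: a complete solution
               if best is None or cost < best[0]:
                   best = (cost, target)
               continue
           if not rem_nodes:
               continue                                 # dead end: arcs left uncovered
           node = next(iter(rem_nodes))                 # select a node i from I
           for arcs, nodes, fragment, c in runtime_entries.get(node, []):
               if node in nodes and arcs <= rem_arcs and nodes <= rem_nodes:
                   heapq.heappush(queue, (cost + c, next(counter),
                                          rem_arcs - arcs, rem_nodes - nodes,
                                          target + (fragment,)))
       return best

Because costs only accumulate, configurations can be expanded in best-first order; the full algorithm additionally factors the search at decomposition nodes, as described above.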
We have used backed off costs in the translation ap- plication for the various cost functions described be- low. Although this resulted in some improvement in testing, so far the improvement has not been statis- tically significant. 5.2 Model Cost Functions Taken together, the events, contexts, and cost func- tion constitute a process cost model, or simply a model. The cost function specifies the model param- eters; the other components are the model structure. We have experimented with a number of model types, including the following. Probabilistic model: In this model we assume a probability distribution on the possible events for a context, that is, E~ P(elc) = 1. The cost parameters of the model are defined as: C(elc) = -ln(P(elc)). Given a set of solutions from executions of a process, let n+(e]e) be the number of times choice (e[c) was taken leading to acceptable solutions (e.g. correct translations) and n+(c) be the number of times con- text c was encountered for these solutions. We can then estimate the probabilistic model costs with C(elc ) ~ ln(n+(c)) -ln(n+(elc)). Discriminative model: The costs in this model are likelihood ratios comparing positive and negative solutions, for example correct and incorrect trans- lations. (See Dunning 1993 on the application of likelihood ratios in computational linguistics.) Let n-(elc ) be the count for choice (e]c) leading to neg- ative solutions. The cost function for the discrimi- native model is estimated as C(elc) ~ In(n- (elc)) -ln(n+(ele)). Mean distance model: In the mean distance model, we make use of some measure of goodness of a solu- tion ts for some input s by comparing it against an ideal solution is for s with a distance metric h: h(t,,i,) ~ d in which d is a non-negative real number. A param- eter for choice (e]c) in the distance model C(elc) = Eh(elc) is the mean value of h(t~,t~) for solutions t, pro- duced by derivations including the choice (eIc). Normalized distance model: The mean distance model does not use the constraint that a particular choice faced by a process is always a choice between events with the same context. It is also somewhat sensitive to peculiarities of the distance function h. With the same assumptions we made for the mean distance model, let Eh(c) be the average of h(t~, ts) for solutions derived from sequences of choices including the context c. The cost parameter for (elc) in the normalized distance model is 173 C(elc) = Bh(c) ' that is, the ratio of the expected distance for deriva- tions involving the choice and the expected distance for all derivations involving the context for that choice. Reflexive Training If we have a manually trans- lated corpus, we can apply the mean and normal- ized distance models to translation by taking the ideal solution t~ for translating a source string s to be the manual translation for s. In the absence of good metrics for comparing translations, we employ a heuristic string distance metric to compare word selection and word order in t~ and ~s. In order to train the model parameters without a manually translated corpus, we use a "reflexive" training method (similar in spirit to the "wake- sleep" algorithm, Hinton et al. 1995). In this method, our search process translates a source sen- tence s to ts in the target language and then trans- lates t~ back to a source language sentence #. The original sentence s can then act as the ideal solu- tion of the overall process. For this training method to be effective, we need a reasonably good initial model, i.e. 
one for which the distance h(s, #) is in- versely correlated with the probability that t~ is a good translation of s. 6 Experimental System We have built an experimental translation system using the monolingual and translation models de- scribed in this paper. The system translates sen- tences in the ATIS domain (Hirschman et al. 1993) between English and Mandarin Chinese. The trans- lator is in fact a subsystem of a speech translation prototype, though the experiments we describe here are for transcribed spoken utterances. (We infor- mally refer to the transcribed utterances as sen- tences.) The average time taken for translation of sentences (of unrestricted length) from the ATIS cor- pus was around 1.7 seconds with approximately 0.4 seconds being taken by the analysis algorithm and 0.7 seconds by the transfer algorithm. English and Chinese lexicons of around 1200 and 1000 words respectively were constructed. Alto- gether, the entries in these lexicons made reference to around 200 structurally distinct head automata. The transfer lexicon contained around 3500 paired graph fragments, most of which were used in both transfer directions. With this model structure, we tried a number of methods for assigning cost func- tions. The nature of the training methods and their corresponding cost functions meant that different amounts of training data could be used, as discussed further below. The methods make use of a supervised training set and an unsupervised training set, both sets be- ing chosen at random from the 20,000 or so ATIS sentences available to us. The supervised training set comprised around 1950 sentences. A subcollec- tion of 1150 of these sentences were translated by the system, and the resulting translations manually clas- sified as 'good' (800 translations) or 'bad' (350 trans- lations). The remaining 800 supervised training set sentences were hand-tagged for prepositional attach- ment points. (Prepositional phrase attachment is a major cause of ambiguity in the ATIS corpus, and moreover can affect English-Chinese translation, see Chen and Chen 1992.) The attachment informa- tion was used to generate additional negative and positive counts for dependency choices. The un- supervised training set consisted of approximately 13,000 sentences; it was used for automatic training (as described under 'Reflexive Training' above) by translating the sentences into Chinese and back to English. A. Qualitative Baseline: In this model, all choices were assigned the same cost except for irregular events (such as unknown words or partial analy- ses) which were all assigned a high penalty cost. This model gives an indication of performance based solely on model structure. B. Probabilistic: Counts for choices leading to good translations for sentences of the supervised train- ing corpus, together with counts from the manually assigned attachment points, were used to compute negated log probability costs. C. Discriminative: The positive counts as in the probabilistic method, together with corresponding negative counts from bad translations or incorrect attachment choices, were used to compute log likeli- hood ratio costs. D. Normalized Distance: In this fully automatic method, normalized distance costs were computed from reflexive translation of the sentences in the un- supervised training corpus. The translation runs were carried out with parameters from method A. E. 
Bootstrapped Normalized Distance: The same as method D except that the system used to carry out the reflexive translation was running with parame- ters from method C. Table 1 shows the results of evaluating the per- formance of these models for translating 200 unre- stricted length ATIS sentences into Chinese. This was a previously unseen test set not included in any of the training sets. Two measures of transla- tion acceptability are shown, as judged by a Chinese speaker. (In separate experiments, we verified that the judgments of this speaker were near the average of five Chinese speakers). The first measure, "mean- ing and grammar", gives the percentage of sentence translations judged to preserve meaning without the introduction of grammatical errors. For the second measure, "meaning preservation", grammatical er- rors were allowed if they did not interfere with mean- ing (in the sense of misleading the hearer). In the ta- ble, we have grouped together methods A and D for 174 Table 1: Translation performance of different cost assignment methods Method Meaning and Grammar (%) A' 29 71 D 37 71 B 46 82 C 52 83 E 54 83 Meaning Preservation (%) which the parameters were derived without human supervision effort, and methods B, C, and E which depended on the same amount of human supervision effort. This means that side by side comparison of these methods has practical relevance, even though the methods exploited different amounts of data. In the case of E, the supervision effort was used only as an oracle during training, not directly in the cost computations. We can see from Table 1 that the choice of method affected translation quality (meaning and grammar) more than it affected preservation of meaning. A possible explanation is that the model structure was adequate for most lexical choice decisions because of the relatively low degree of polysemy in the ATIS corpus. For the stricter measure, the differences were statistically significant, according to the sign test at the 5% significance level, for the following comparisons: C and E each outperformed B and D, and B and D each outperformed A. 7 Language Processing and Semantic Representations The translation system we have described employs only simple representations of sentences and phrases. Apart from the words themselves, the only sym- bols used are the dependency relations R. In our experimental system, these relation symbols are themselves natural language words, although this is not a necessary property of our models. Infor- mation coded explicitly in sentence representations by word senses and feature constraints in our pre- vious work (Alshawi 1992) is implicit in the mod- els used to derive the dependency trees and trans- lations. In particular, dependency parameters and context-dependent transfer parameters give rise to an implicit, graded notion of word sense. For language-centered applications like transla- tion or summarization, for which we have a large body of examples of the desired behavior, we can think of the task in terms of the formal problem of modeling a relation between strings based on exam- pies of that relation. By taking this viewpoint, we seem to be ignoring the intuition that most interest- ing natural language processing tasks (translation, summarization, interfaces) are semantic in nature. It is therefore tempting to conclude that an adequate treatment of these tasks requires the manipulation of artificial semantic representation languages with well-understood formal denotations. 
While the in- tuition seems reasonable, the conclusion might be too strong in that it rules out the possibility that natural language itself is adequate for manipulating semantic denotations. After all, this is the primary function of natural language. The main justification for artificial semantic rep- resentation languages is that they are unambiguous by design. This may not be as critical, or useful, as it might first appear. While it is true that nat- ural language is ambiguous and under-specified out of context, this uncertainty is greatly reduced by context to the point where further resolution (e.g. full scoping) is irrelevant to the task, or even the intended meaning. The fact that translation is in- sensitive to many ambiguities motivated the use of unresolved quasi-logical form for transfer (Alshawi et al. 1992). To the extent that contextual resolution is neces- sary, context may be provided by the state of the lan- guage processor rather than complex semantic rep- resentations. Local context may include the state of local processing components (such as our head au- tomata) for capturing grammatical constraints, or the identity of other words in a phrase for capturing sense distinctions. For larger scale context, I have argued elsewhere (Alshawi 1987) that memory ac- tivation patterns resulting from the process of car- rying out an understanding task can act as global context without explicit representations of discourse. Under this view, the challenge is how to exploit con- text in performing a task rather than how to map natural language phrases to expressions of a formal- ism for coding meaning independently of context or intended use. There is now greater understanding of the formal semantics of under-specified and ambiguous repre- sentations. In Alshawi 1996, I provide a denota- tional semantics for a simple under-specified lan- guage and argue for extending this treatment to a formal semantics of natural language strings as ex- pressions of an under-specified representation. In this paradigm, ordered dependency trees can be viewed as natural language strings annotated so that some of the implicit relations are more explicit. A milder form of this kind of annotation is a bracketed natural language string. We are not advocating an approach in which linguistic structure is ignored (as it is in the IBM translator described by Brown et al. 1990), but rather one in which the syntactic and semantic structure of a string is implicit in the way it is processed by an interpreter. One important advantage of using representations that are close to natural language itself is that it re- duces the degrees of freedom in specifying language and task models, making these models easier to ac- 175 quire automatically. With these considerations in mind, we have started to experiment with a version of the translator described here with even simpler representations and for which the model structure, not just the parameters, can be acquired automati- cally. Acknowledgments The work on cost functions and training methods was carried out jointly with Adam Buchsbaum who also customized the English model to ATIS and in- tegrated the translator into our speech translation prototype. Jishen He constructed the Chinese ATIS language model and bilingual lexicon and identified many problems with early versions of the transfer component. I am also grateful for advice and help from Don Hindle, Fernando Pereira, Chi-Lin Shih, Richard Sproat, and Bin Wu. References Alshawi, H. 1987. 
Memory and Context for Language Interpretation. Cambridge University Press, Cambridge, England. Alshawi, H. 1996. "Underspecified First Order Log- ics". In Semantic Ambiguity and Underspecification, edited by K. van Deemter and S. Peters, CSLI Publi- cations, Stanford, California. Alshawi, H. 1992. The Core Language Engine. MIT Press, Cambridge, Massachusetts. Alshawi, H., D. Carter, B. Gamback and M. Rayner. 1992. "Swedish-English QLF Translation". In H. A1- shawi (ed.) The Core Language Engine. MIT Press, Cambridge, Massachusetts. Booth, T. 1969. "Probabilistic Representation of For- real Languages". Tenth Annual IEEE Symposium on Switching and Automata Theory. Brew, C. 1992. "Letting the Cat out of the Bag: Gen- eration for Shake-and-Bake MT'. Proceedings of COL- ING92, the International Conference on Computational Linguistics, Nantes, France. Brown, P., J. Cocks, S. Della Pietra, V. Della Pietra, F. Jelinek, J. Lafferty, R. Mercer and P. Rossin. 1990. "A Statistical Approach to Machine Translation". Com- putational Linguistics 16:79-85. Brown, P.F., S.A. Della Pietra, V.J. Della Pietra, and R.L. Mercer. 1993. "The Mathematics of Statistical Machine Translation: Parameter Estimation". Compu- tational Linguistics 19:263-312. Chen, K.H. and H. H. Chen. 1992. "Attachment and Transfer of Prepositional Phrases with Constraint Prop- agation". Computer Processing of Chinese and Oriental Languages, Vol. 6, No. 2, 123-142. Church K. and R. PatH. 1982. "Coping with Syntactic Ambiguity or How to Put the Block in the Box on the Table". Computational Linguistics 8:139-149. Collins, M. and J. Brooks. 1995. "Prepositional Phrase Attachment through a Backed-Off Model." Pro- ceedings of the Third Workshop on Very Large Corpora, Cambridge, Massachusetts, ACL, 27-38. Dorr, B.J. 1994. "Machine Translation Divergences: A Formal Description and Proposed Solution". Compu- tational Linguistics 20:597-634. Dunning, T. 1993. "Accurate Methods for Statistics of Surprise and Coincidence." Computational Linguistics. 19:61-74. Early, J. 1970. "An Efficient Context-Free Parsing Algorithm". Communications of the ACM 14: 453-60. Gazdar, G., E. Klein, G.K. Pullum, and I.A.Sag. 1985. Generalised Phrase Structure Grammar. Black- well, Oxford. Hinton, G.E., P. Dayan, B.J. Frey and R.M. Neal. 1995. "The 'Wake-Sleep' Algorithm for Unsupervised Neural Networks". Science 268:1158-1161. Hudson, R.A. 1984. Word Grammar. Blackwell, Ox- ford. Hirschman, L., M. Bates, D. Dahl, W. Fisher, J. Garo- folo, D. Pallett, K. Hunicke-Smith, P. Price, A. Rud- nicky, and E. Tzoukermann. 1993. "Multi-Site Data Collection and Evaluation in Spoken Language Under- standing". In Proceedings of the Human Language Tech- nology Workshop, Morgan Kaufmann, San Francisco, 19-24. Isabelle, P. and E. Macklovitch. 1986. "Transfer and MT Modularity", Eleventh International Conference on Computational Linguistics, Bonn, Germany, 115-117. Jackendoff, R.S. 1977. X-bar Syntax: A Study of Phrase Structure. MIT Press, Cambridge, Mas- sachusetts. Jelinek, F., R.L. Mercer and S. Roukos. 1992. "Prin- ciples of Lexical Language Modeling for Speech Recog- nition". In S. Furui and M.M. Sondhi (eds.), Advances in Speech Signal Processing, Marcel Dekker, New York. Lafferty, J., D. Sleator and D. Temperley. 1992. "Grammatical Trigrams: A Probabilistic Model of Link Grammar". In Proceedings of the 199P AAAI Fall Sym- posium on Probabilistic Approaches to Natural Language, 89-97. Kay, M. 1989. "Head Driven Parsing". 
In Proceedings of the Workshop on Parsing Technologies, Pittsburgh, 1989.
Lindop, J. and J. Tsujii. 1991. "Complex Transfer in MT: A Survey of Examples". Technical Report 91/5, Centre for Computational Linguistics, UMIST, Manchester, UK.
Resnik, P. 1992. "Probabilistic Tree-Adjoining Grammar as a Framework for Statistical Natural Language Processing". In Proceedings of COLING-92, Nantes, France, 418-424.
Satta, G. and O. Stock. 1989. "Head-Driven Bidirectional Parsing". In Proceedings of the Workshop on Parsing Technologies, Pittsburgh, 1989.
Schabes, Y. 1992. "Stochastic Lexicalized Tree-Adjoining Grammars". In Proceedings of COLING-92, Nantes, France, 426-432.
Whitelock, P.J. 1992. "Shake-and-Bake Translation". Proceedings of COLING-92, the International Conference on Computational Linguistics, Nantes, France.
Younger, D. 1967. Recognition and Parsing of Context-Free Languages in Time n^3. Information and Control, 10, 189-208.
Parsing Algorithms and Metrics Joshua Goodman Harvard University 33 Oxford St. Cambridge, MA 02138 [email protected] Abstract Many different metrics exist for evaluating parsing results, including Viterbi, Cross- ing Brackets Rate, Zero Crossing Brackets Rate, and several others. However, most parsing algorithms, including the Viterbi algorithm, attempt to optimize the same metric, namely the probability of getting the correct labelled tree. By choosing a parsing algorithm appropriate for the evaluation metric, better performance can be achieved. We present two new algo- rithms: the "Labelled Recall Algorithm," which maximizes the expected Labelled Recall Rate, and the "Bracketed Recall Algorithm," which maximizes the Brack- eted Recall Rate. Experimental results are given, showing that the two new al- gorithms have improved performance over the Viterbi algorithm on many criteria, es- pecially the ones that they optimize. 1 Introduction In corpus-based approaches to parsing, one is given a treebank (a collection of text annotated with the "correct" parse tree) and attempts to find algo- rithms that, given unlabelled text from the treebank, produce as similar a parse as possible to the one in the treebank. Various methods can be used for finding these parses. Some of the most common involve induc- ing Probabilistic Context-Free Grammars (PCFGs), and then parsing with an algorithm such as the La- belled Tree (Viterbi) Algorithm, which maximizes the probability that the output of the parser (the "guessed" tree) is the one that the PCFG produced. This implicitly assumes that the induced PCFG does a good job modeling the corpus. There are many different ways to evaluate these parses. The most common include the Labelled Tree Rate (also called the Viterbi Criterion or Ex- act Match Rate), Consistent Brackets Recall Rate (also called the Crossing Brackets Rate), Consis- tent Brackets Tree Rate (also called the Zero Cross- ing Brackets Rate), and Precision and Recall. De- spite the variety of evaluation metrics, nearly all re- searchers use algorithms that maximize performance on the Labelled Tree Rate, even in domains where they are evaluating using other criteria. We propose that by creating algorithms that op- timize the evaluation criterion, rather than some related criterion, improved performance can be achieved. In Section 2, we define most of the evaluation metrics used in this paper and discuss previous ap- proaches. Then, in Section 3, we discuss the La- belled Recall Algorithm, a new algorithm that max- imizes performance on the Labelled Recall Rate. In Section 4, we discuss another new algorithm, the Bracketed Recall Algorithm, that maximizes perfor- mance on the Bracketed Recall Rate (closely related to the Consistent Brackets Recall Rate). Finally, we give experimental results in Section 5 using these two algorithms in appropriate domains, and com- pare them to the Labelled Tree (Viterbi) Algorithm, showing that each algorithm generally works best when evaluated on the criterion that it optimizes. 2 Evaluation Metrics In this section, we first define basic terms and sym- bols. Next, we define the different metrics used in evaluation. Finally, we discuss the relationship of these metrics to parsing algorithms. 2.1 Basic Definitions Let Wa denote word a of the sentence under consid- eration. Let w b denote WaW~+l...Wb-lWb; in partic- ular let w~ denote the entire sequence of terminals (words) in the sentence under consideration. 
In this paper we assume all guessed parse trees are binary branching. Let a parse tree T be defined as a set of triples (s, t, X)--where s denotes the position of the first symbol in a constituent, t denotes the position of the last symbol, and X represents a ter- minal or nonterminal symbol--meeting the following three requirements: 177 • The sentence was generated by the start sym- bol, S. Formally, (1, n, S) E T. • Every word in the sentence is in the parse tree. Formally, for every s between 1 and n the triple (s,s, ws) E T. • The tree is binary branching and consistent. Formally, for every (s,t, X) in T, s ¢ t, there is exactly one r, Y, and Z such that s < r < t and (s,r,Y) E T and (r+ 1,t,Z) e T. Let Tc denote the "correct" parse (the one in the treebank) and let Ta denote the "guessed" parse (the one output by the parsing algorithm). Let Na denote [Tal, the number of nonterminals in the guessed parse tree, and let Nc denote [Tel, the num- ber of nonterminals in the correct parse tree. 2.2 Evaluation Metrics There are various levels of strictness for determin- ing whether a constituent (element of Ta) is "cor- rect." The strictest of these is Labelled Match. A constituent (s,t, X) E Te is correct according to La- belled Match if and only if (s, t, X) E To. In other words, a constituent in the guessed parse tree is cor- rect if and only if it occurs in the correct parse tree. The next level of strictness is Bracketed Match. Bracketed match is like labelled match, except that the nonterminal label is ignored. Formally, a con- stituent (s, t, X) ETa is correct according to Brack- eted Match if and only if there exists a Y such that (s,t,Y) E To. The least strict level is Consistent Brackets (also called Crossing Brackets). Consistent Brackets is like Bracketed Match in that the label is ignored. It is even less strict in that the observed (s,t,X) need not be in Tc--it must simply not be ruled out by any (q, r, Y) e To. A particular triple (q, r, Y) rules out (s,t, X) if there is no way that (s,t,X) and (q, r, Y) could both be in the same parse tree. In particular, if the interval (s, t) crosses the interval (q, r), then (s, t, X) is ruled out and counted as an error. Formally, we say that (s, t) crosses (q, r) if and only ifs<q<t <rorq<s<r<t. If Tc is binary branching, then Consistent Brack- ets and Bracketed Match are identical. The follow- ing symbols denote the number of constituents that match according to each of these criteria. L = ITc n Tal : the number of constituents in Ta that are correct according to Labelled Match. B = I{(s,t,X) : (s,t,X) ETa and for some Y (s,t,Y) E Tc}]: the number of constituents in Ta that are correct according to Bracketed Match. C = I{(s, t, X) ETa : there is no (v, w, Y) E Tc crossing (s,t)}[ : the number of constituents in TG correct according to Consistent Brackets. Following are the definitions of the six metrics used in this paper for evaluating binary branching trees: The in the following table: (1) Labelled Recall Rate = L/Nc. (2) Labelled Tree Rate = 1 if L = ATe. It is also called the Viterbi Criterion. (3) Bracketed Recall Rate = B/Nc. (4) Bracketed Tree Rate = 1 if B = Nc. (5) Consistent Brackets Recall Rate = C/NG. It is often called the Crossing Brackets Rate. In the case where the parses are binary branching, this criterion is the same as the Bracketed Recall Rate. (6) Consistent Brackets Tree Rate = 1 if C = No. This metric is closely related to the Bracketed Tree Rate. 
In the case where the parses are binary branching, the two metrics are the same. This criterion is also called the Zero Crossing Brackets Rate. preceding six metrics each correspond to cells II Recall I Tree Consistent Brackets C/NG 1 if C = Nc Brackets B/Nc 1 if B = Nc Labels L/Nc 1 if L = Arc 2.3 Maximizing Metrics Despite this long list of possible metrics, there is only one metric most parsing algorithms attempt to maximize, namely the Labelled Tree Rate. That is, most parsing algorithms assume that the test corpus was generated by the model, and then attempt to evaluate the following expression, where E denotes the expected value operator: Ta = argmTaXE ( 1 ifL = gc) (1) This is true of the Labelled Tree Algorithm and stochastic versions of Earley's Algorithm (Stolcke, 1993), and variations such as those used in Picky parsing (Magerman and Weir, 1992). Even in prob- abilistic models not closely related to PCFGs, such as Spatter parsing (Magerman, 1994), expression (1) is still computed. One notable exception is Brill's Transformation-Based Error Driven system (Brill, 1993), which induces a set of transformations de- signed to maximize the Consistent Brackets Recall Rate. However, Brill's system is not probabilistic. Intuitively, if one were to match the parsing algo- rithm to the evaluation criterion, better performance should be achieved. Ideally, one might try to directly maximize the most commonly used evaluation criteria, such as Consistent Brackets Recall (Crossing Brackets) 178 Rate. Unfortunately, this criterion is relatively diffi- cult to maximize, since it is time-consuming to com- pute the probability that a particular constituent crosses some constituent in the correct parse. On the other hand, the Bracketed Recall and Bracketed Tree Rates are easier to handle, since computing the probability that a bracket matches one in the correct parse is inexpensive. It is plausible that algorithms which optimize these closely related criteria will do well on the analogous Consistent Brackets criteria. 2.4 Which Metrics to Use When building an actual system, one should use the metric most appropriate for the problem. For in- stance, if one were creating a database query sys- tem, such as an ATIS system, then the Labelled Tree (Viterbi) metric would be most appropriate. A sin- gle error in the syntactic representation of a query will likely result in an error in the semantic represen- tation, and therefore in an incorrect database query, leading to an incorrect result. For instance, if the user request "Find me all flights on Tuesday" is mis- parsed with the prepositional phrase attached to the verb, then the system might wait until Tuesday be- fore responding: a single error leads to completely incorrect behavior. Thus, the Labelled Tree crite- rion is appropriate. On the other hand, consider a machine assisted translation system, in which the system provides translations, and then a fluent human manually ed- its them. Imagine that the system is given the foreign language equivalent of "His credentials are nothing which should be laughed at," and makes the single mistake of attaching the relative clause at the sentential level, translating the sentence as "His credentials are nothing, which should make you laugh." While the human translator must make some changes, he certainly needs to do less editing than he would if the sentence were completely mis- parsed. The more errors there are, the more editing the human translator needs to do. 
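Whichever criterion best matches the application, it can be computed directly from the set-of-triples representation of Section 2.1. The following is a minimal illustrative sketch, not the evaluation code used here: the guessed and correct trees are taken to be Python sets of (s, t, X) triples, the function names are ours, and the crossing test encodes the usual condition that two spans overlap without either containing the other.

def crosses(s, t, q, r):
    # (s, t) and (q, r) overlap, but neither contains the other
    return (s < q <= t < r) or (q < s <= r < t)

def evaluate(guessed, correct):
    # guessed, correct: sets of (s, t, X) constituent triples
    n_c = len(correct)
    n_g = len(guessed)
    correct_spans = {(s, t) for (s, t, _) in correct}
    L = len(guessed & correct)                                            # Labelled Match
    B = sum(1 for (s, t, _) in guessed if (s, t) in correct_spans)        # Bracketed Match
    C = sum(1 for (s, t, _) in guessed
            if not any(crosses(s, t, q, r) for (q, r) in correct_spans))  # Consistent Brackets
    return {"labelled_recall": L / n_c,
            "labelled_tree": 1.0 if L == n_c else 0.0,
            "bracketed_recall": B / n_c,
            "bracketed_tree": 1.0 if B == n_c else 0.0,
            "consistent_brackets_recall": C / n_g,
            "consistent_brackets_tree": 1.0 if C == n_g else 0.0}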
Thus, a criterion such as the Labelled Recall criterion is appropriate for this task, where the number of incorrect con- stituents correlates to application performance. 3 Labelled Recall Parsing Consider writing a parser for a domain such as ma- chine assisted translation. One could use the La- belled Tree Algorithm, which would maximize the expected number of exactly correct parses. How- ever, since the number of correct constituents is a better measure of application performance for this domain than the number of correct trees, perhaps one should use an algorithm which maximizes the Labelled Recall criterion, rather than the Labelled Tree criterion. The Labelled Recall Algorithm finds that tree TG which has the highest expected value for the La- belled Recall Rate, L/Nc (where L is the number of correct labelled constituents, and Nc is the number of nodes in the correct parse). This can be written as follows: Ta = arg n~xE(L/Nc) (2) It is not immediately obvious that the maximiza- tion of expression (2) is in fact different from the maximization of expression (1), but a simple exam- ple illustrates the difference. The following grammar generates four trees with equal probability: S ~ A C 0.25 S ~ A D 0.25 S --* EB 0.25 S --~ FB 0.25 A, B, C, D, E, F ~ xx 1.0 The four trees are S S X XX X X XX X (3) S S E B F B X XX X X XX X For the first tree, the probabilities of being correct are S: 100%; A:50%; and C: 25%. Similar counting holds for the other three. Thus, the expected value of L for any of these trees is 1.75. On the other hand, the optimal Labelled Recall parse is S X XX X This tree has 0 probability according to the gram- mar, and thus is non-optimal according to the La- belled Tree Rate criterion. However, for this tree the probabilities of each node being correct are S: 100%; A: 50%; and B: 50%. The expected value of L is 2.0, the highest of any tree. This tree therefore optimizes the Labelled Recall Rate. 3.1 Algorithm We now derive an algorithm for finding the parse that maximizes the expected Labelled Recall Rate. We do this by expanding expression (2) out into a probabilistic form, converting this into a recursive equation, and finally creating an equivalent dynamic programming algorithm. We begin by rewriting expression (2), expanding out the expected value operator, and removing the 179 which is the same for all TG, and so plays no NC ' role in the maximization. Ta = argmTaX~,P(Tc l w~) ITnTcl Tc This can be further expanded to (4) Ta = arg mTax E P(Tc I w~)E1 if (s,t,X) 6 Tc Tc (,,t,X)eT (5) Now, given a PCFG with start symbol S, the fol- lowing equality holds: P(s . 1,4)= E P(Tc I ~7)( 1 if (s, t, X) 6 Tc) (6) Tc By rearranging the summation in expression (5) and then substituting this equality, we get Ta =argm~x E P(S =~ s-t... (,,t,X)eT (7) At this point, it is useful to introduce the Inside and Outside probabilities, due to Baker (1979), and explained by Lari and Young (1990). The Inside probability is defined as e(s,t,X) = P(X =~ w~) and the Outside probability is f(s, t, X) = P(S =~ 8 - I n w 1 Xwt+l). Note that while Baker and others have used these probabilites for inducing grammars, here they are used only for parsing. Let us define a new function, g(s, t, X). g(s,t,X) P(S =~ ,-1.. 
n = w 1 Awt+ 1 [w'~) P(S :~ ,-t n wl Xw,+I)P(X =~ w's) P(S wE) = f(s, t, X) x e(s, t, X)/e(1, n, S) Now, the definition of a Labelled Recall Parse can be rewritten as T =arg%ax g(s,t,X) (8) (s,t,X)eT Given the matrix g(s, t, X) it is a simple matter of dynamic programming to determine the parse that maximizes the Labelled Recall criterion. Define MAXC(s, t) = n~xg(s, t, X)+ max (MAXC(s, r) + MAXC(r + 1,t)) rls_<r<t for length := 2 to n for s := 1 to n-length+l t := s + length - I; loop over nonterminals X let max_g:=maximum of g(s,t,X) loop over r such that s <= r < t let best_split:= max of maxc[s,r] + maxc[r+l,t] maxc[s, t] := max_g + best split; Figure h Labelled Recall Algorithm It is clear that MAXC(1, n) contains the score of the best parse according to the Labelled Recall cri- terion. This equation can be converted into the dy- namic programming algorithm shown in Figure 1. For a grammar with r rules and k nonterminals, the run time of this algorithm is O(n 3 + kn 2) since there are two layers of outer loops, each with run time at most n, and an inner loop, over nonterminals and n. However, this is dominated by the computa- tion of the Inside and Outside probabilities, which takes time O(rna). By modifying the algorithm slightly to record the actual split used at each node, we can recover the best parse. The entry maxc[1, n] contains the ex- pected number of correct constituents, given the model. 4 Bracketed Recall Parsing The Labelled Recall Algorithm maximizes the ex- pected number of correct labelled constituents. However, many commonly used evaluation met- rics, such as the Consistent Brackets Recall Rate, ignore labels. Similarly, some gram- mar induction algorithms, such as those used by Pereira and Schabes (1992) do not produce mean- ingful labels. In particular, the Pereira and Schabes method induces a grammar from the brackets in the treebank, ignoring the labels. While the induced grammar has labels, they are not related to those in the treebank. Thus, although the Labelled Recall Algorithm could be used in these domains, perhaps maximizing a criterion that is more closely tied to the domain will produce better results. Ideally, we would maximize the Consistent Brackets Recall Rate directly. However, since it is time-consuming to deal with Consistent Brackets, we instead use the closely related Bracketed Recall Rate. For the Bracketed Recall Algorithm, we find the parse that maximizes the expected Bracketed Recall Rate, B/Nc. (Remember that B is the number of brackets that are correct, and Nc is the number of constituents in the correct parse.) 180 TG = arg rn~x E(B/Nc) (9) Following a derivation similar to that used for the Labelled Recall Algorithm, we can rewrite equation (9) as Ta=argm~x ~ ~_P(S:~ ,-1.~ ,~ wl (s,t)ET X (I0) The algorithm for Bracketed Recall parsing is ex- tremely similar to that for Labelled Recall parsing. The only required change is that we sum over the symbols X to calculate max_g, rather than maximize over them. 5 Experimental Results We describe two experiments for testing these algo- rithms. The first uses a grammar without meaning- ful nonterminal symbols, and compares the Brack- eted Recall Algorithm to the traditional Labelled Tree (Viterbi) Algorithm. The second uses a gram- mar with meaningful nonterminal symbols and per- forms a three-way comparison between the Labelled Recall, Bracketed Recall, and Labelled Tree Algo- rithms. 
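The dynamic programme of Figure 1 is straightforward to realise once the Inside and Outside probabilities are available. The sketch below is illustrative rather than a reproduction of the original implementation: g is assumed to be a function returning f(s, t, X) * e(s, t, X) / e(1, n, S) as defined above, the handling of single-word spans is an assumption of ours since Figure 1 leaves the base case implicit, and back-pointers are recorded so that the maximising tree can be recovered.

def labelled_recall_parse(n, nonterminals, g):
    # g(s, t, X) = f(s, t, X) * e(s, t, X) / e(1, n, S), as in the text
    maxc = {}    # maxc[(s, t)]: best expected number of correct constituents over span s..t
    split = {}   # back-pointers for recovering the best split points
    for s in range(1, n + 1):
        maxc[(s, s)] = max(g(s, s, X) for X in nonterminals)   # base case (our assumption)
    for length in range(2, n + 1):
        for s in range(1, n - length + 2):
            t = s + length - 1
            max_g = max(g(s, t, X) for X in nonterminals)
            best_r, best_score = max(((r, maxc[(s, r)] + maxc[(r + 1, t)])
                                      for r in range(s, t)),
                                     key=lambda pair: pair[1])
            maxc[(s, t)] = max_g + best_score
            split[(s, t)] = best_r
    return maxc[(1, n)], split

For the Bracketed Recall Algorithm of Section 4, the only change needed in this sketch is to replace the maximum over X in max_g with a sum over X.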
These experiments show that use of an algo- rithm matched appropriately to the evaluation cri- terion can lead to as much as a 10% reduction in error rate. In both experiments the grammars could not parse some sentences, 0.5% and 9%, respectively. The un- parsable data were assigned a right branching struc- ture with their rightmost element attached high. Since all three algorithms fail on the same sentences, all algorithms were affected equally. 5.1 Experiment with Grammar Induced by Pereira and Schabes Method The experiment of Pereira and Schabes (1992) was duplicated. In that experiment, a grammar was trained from a bracketed form of the TI section of the ATIS corpus 1 using a modified form of the Inside- Outside Algorithm. Pereira and Schabes then used the Labelled Tree Algorithm to select the best parse for sentences in held out test data. The experi- ment was repeated here, except that both the La- belled Tree and Labelled Recall Algorithm were run for each sentence. In contrast to previous research, we repeated the experiment ten times, with differ- ent training set, test set, and initial conditions each time. Table 1 shows the results of running this ex- periment, giving the minimum, maximum, mean, and standard deviation for three criteria, Consis- tent Brackets Recall, Consistent Brackets Tree, and 1For our experiments the corpus was slightly cleaned up. A diff file for "ed" between the orig- inal ATIS data and the cleaned-up version is avail- able from ftp://ftp.das.harvard.edu/pub/goodman/atis- ed/ ti_tb.par-ed and ti_tb.pos-ed. The number of changes made was small, less than 0.2% Criteria I[ Min I Max I Mean I SDev I Labelled Tree Algorithm Cons Brack Rec 86.06 93.27 90.13 2.57 Cons Brack Tree 51.14 77.27 63.98 7.96 Brack Rec 71.38 81.88 75.87 3.18 Bracketed Recall Algorithm Cons Brack Rec 88.02 94.34 91.14 2.22 Cons Brack Tree 53.41 76.14 63.64 7.82 Brack Rec 72.15 80.69 76.03 3.14 Differences Cons Brack Rec -1.55 2.45 1.01 1.07 ] Cons Brack Tree -3.41 3.41 -0.34 2.34 Brack Rec -1.34 2.02 0.17 1.20 Table 1: Percentages Correct for Labelled Tree ver- sus Bracketed Recall for Pereira and Schabes Bracketed Recall. We also display these statistics for the paired differences between the algorithms. The only statistically significant difference is that for Consistent Brackets Recall Rate, which was sig- nificant to the 2% significance level (paired t-test). Thus, use of the Bracketed Recall Algorithm leads to a 10% reduction in error rate. In addition, the performance of the Bracketed Re- call Algorithm was also qualitatively more appeal- ing. Figure 2 shows typical results. Notice that the Bracketed Recall Algorithm's Consistent Brackets Rate (versus iteration) is smoother and more nearly monotonic than the Labelled Tree Algorithm's. The Bracketed Recall Algorithm also gets off to a much faster start, and is generally (although not always) above the Labelled Tree level. For the Labelled Tree Rate, the two are usually very comparable. 5.2 Experiment with Grammar Induced by Counting The replication of the Pereira and Schabes experi- ment was useful for testing the Bracketed Recall Al- gorithm. However, since that experiment induces a grammar with nonterminals not comparable to those in the training, a different experiment is needed to evaluate the Labelled Recall Algorithm, one in which the nonterminals in the induced grammar are the same as the nonterminals in the test set. 
5.2.1 Grammar Induction by Counting
For this experiment, a very simple grammar was induced by counting, using a portion of the Penn Tree Bank, version 0.5. In particular, the trees were first made binary branching by removing epsilon productions, collapsing singleton productions, and converting n-ary productions (n > 2) as in Figure 3. The resulting trees were treated as the "Correct" trees in the evaluation. Only trees with forty or fewer symbols were used in this experiment.
[Figure 2 (plot not reproduced): Labelled Tree versus Bracketed Recall in Pereira and Schabes Grammar. The curves plot, against iteration number, the Consistent Brackets Recall and Labelled Tree rates of the Labelled Tree Algorithm and of the Bracketed Recall Algorithm.]
Figure 3: Conversion of Productions to Binary Branching: X -> A B C D becomes X -> A X_Cont, X_Cont -> B X_Cont, X_Cont -> C D.
Table 3: Metrics and Corresponding Algorithms (rows Brackets and Labels, columns Recall and Tree; the Labels row pairs the Labelled Recall and Labelled Tree Algorithms).
A grammar was then induced in a straightforward way from these trees, simply by giving one count for each observed production. No smoothing was done. There were 1805 sentences and 38610 nonterminals in the test data.
5.2.2 Results
Table 2 shows the results of running all three algorithms, evaluating against five criteria. Notice that for each algorithm, for the criterion that it optimizes it is the best algorithm. That is, the Labelled Tree Algorithm is the best for the Labelled Tree Rate, the Labelled Recall Algorithm is the best for the Labelled Recall Rate, and the Bracketed Recall Algorithm is the best for the Bracketed Recall Rate.
6 Conclusions and Future Work
Matching parsing algorithms to evaluation criteria is a powerful technique that can be used to improve performance. In particular, the Labelled Recall Algorithm can improve performance versus the Labelled Tree Algorithm on the Consistent Brackets, Labelled Recall, and Bracketed Recall criteria. Similarly, the Bracketed Recall Algorithm improves performance (versus Labelled Tree) on Consistent Brackets and Bracketed Recall criteria. Thus, these algorithms improve performance not only on the measures that they were designed for, but also on related criteria.
Furthermore, in some cases these techniques can make parsing fast when it was previously impractical. We have used the technique outlined in this paper in other work (Goodman, 1996) to efficiently parse the DOP model; in that model, the only previously known algorithm which summed over all the
However, by maximizing the Labelled Recall criterion, rather than the Labelled Tree criterion, it was possible to use a much sim- pler algorithm, a variation on the Labelled Recall Algorithm. Using this technique, along with other optimizations, we achieved a 500 times speedup. In future work we will show the surprising re- sult that the last element of Table 3, maximizing the Bracketed Tree criterion, equivalent to maximiz- ing performance on Consistent Brackets Tree (Zero Crossing Brackets) Rate in the binary branching case, is NP-complete. Furthermore, we will show that the two algorithms presented, the Labelled Re- call Algorithm and the Bracketed Recall Algorithm, are both special cases of a more general algorithm, the General Recall Algorithm. Finally, we hope to extend this work to the n-ary branching case. 7 Acknowledgements I would like to acknowledge support from National Science Foundation Grant IRI-9350192, National Science Foundation infrastructure grant CDA 94- 01024, and a National Science Foundation Gradu- ate Student Fellowship. I would also like to thank Stanley Chen, Andrew Kehler, Lillian Lee, and Stu- art Shieber for helpful discussions, and comments on earlier drafts, and the anonymous reviewers for their comments. Conference on Empirical Methods in Natural Lan- guage Processing. To appear. Lari, K. and S.J. Young. 1990. The estimation of stochastic context-free grammars using the inside- outside algorithm. Computer Speech and Lan- guage, 4:35-56. Magerman, David. 1994. Natural Language Parsing as Statistical Pattern Recognition. Ph.D. thesis, Stanford University University, February. Magerman, D.M. and C. Weir. 1992. Efficiency, ro- bustness, and accuracy in picky chart parsing. In Proceedings of the Association for Computational Linguistics. Pereira, Fernando and Yves Schabes. 1992. Inside- Outside reestimation from partially bracketed cor- pora. In Proceedings of the 30th Annual Meeting of the ACL, pages 128-135, Newark, Delaware. Stolcke, Andreas. 1993. An efficient probabilistic context-free parsing algorithm that computes pre- fix probabilities. Technical Report TR-93-065, In- ternational Computer Science Institute, Berkeley, CA. References Baker, J.K. 1979. Trainable grammars for speech recognition. In Proceedings of the Spring Confer- ence of the Acoustical Society of America, pages 547-550, Boston, MA, June. Bod, Rens. 1993. Using an annotated corpus as a stochastic grammar. In Proceedings of the Sixth Conference of the European Chapter of the ACL, pages 37-44. Brill, Eric. 1993. A Corpus-Based Approach to Lan- guage Learning. Ph.D. thesis, University of Penn- sylvania. Goodman, Joshua. 1996. Efficient algorithms for parsing the DOP model. In Proceedings of the 183 | 1996 | 24 |
A New Statistical Parser Based on Bigram Lexical Dependencies Michael John Collins* Dept. of Computer and Information Science University of Pennsylvania Philadelphia, PA, 19104, U.S.A. mcollins@gradient, cis. upenn, edu Abstract This paper describes a new statistical parser which is based on probabilities of dependencies between head-words in the parse tree. Standard bigram probability es- timation techniques are extended to calcu- late probabilities of dependencies between pairs of words. Tests using Wall Street Journal data show that the method per- forms at least as well as SPATTER (Mager- man 95; Jelinek et al. 94), which has the best published results for a statistical parser on this task. The simplicity of the approach means the model trains on 40,000 sentences in under 15 minutes. With a beam search strategy parsing speed can be improved to over 200 sentences a minute with negligible loss in accuracy. 1 Introduction Lexical information has been shown to be crucial for many parsing decisions, such as prepositional-phrase attachment (for example (Hindle and Rooth 93)). However, early approaches to probabilistic parsing (Pereira and Schabes 92; Magerman and Marcus 91; Briscoe and Carroll 93) conditioned probabilities on non-terminal labels and part of speech tags alone. The SPATTER parser (Magerman 95; 3elinek et ah 94) does use lexical information, and recovers labeled constituents in Wall Street Journal text with above 84% accuracy - as far as we know the best published results on this task. This paper describes a new parser which is much simpler than SPATTER, yet performs at least as well when trained and tested on the same Wall Street Journal data. The method uses lexical informa- tion directly by modeling head-modifier 1 relations between pairs of words. In this way it is similar to *This research was supported by ARPA Grant N6600194-C6043. 1By 'modifier' we mean the linguistic notion of either an argument or adjunct. Link grammars (Lafferty et al. 92), and dependency grammars in general. 2 The Statistical Model The aim of a parser is to take a tagged sentence as input (for example Figure l(a)) and produce a phrase-structure tree as output (Figure l(b)). A statistical approach to this problem consists of two components. First, the statistical model assigns a probability to every candidate parse tree for a sen- tence. Formally, given a sentence S and a tree T, the model estimates the conditional probability P(T[S). The most likely parse under the model is then: Tb~,, -- argmaxT P(TIS ) (1) Second, the parser is a method for finding Tbest. This section describes the statistical model, while section 3 describes the parser. The key to the statistical model is that any tree such as Figure l(b) can be represented as a set of baseNPs 2 and a set of dependencies as in Fig- ure l(c). We call the set of baseNPs B, and the set of dependencies D; Figure l(d) shows B and D for this example. For the purposes of our model, T = (B, D), and: P(TIS ) = P(B,D]S) = P(B[S) x P(D]S,B) (2) S is the sentence with words tagged for part of speech. That is, S =< (wl,tl), (w2,t2)...(w~,t,) >. For POS tagging we use a maximum-entropy tag- ger described in (Ratnaparkhi 96). The tagger per- forms at around 97% accuracy on Wall Street Jour- nal Text, and is trained on the first 40,000 sentences of the Penn Treebank (Marcus et al. 93). Given S and B, the reduced sentence :~ is de- fined as the subsequence of S which is formed by removing punctuation and reducing all baseNPs to their head-word alone. 
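A minimal sketch of how the reduced sentence can be built from S and B follows. The representations are assumptions for exposition, not the original data structures: the tagged sentence is a list of (word, tag) pairs, each baseNP is given as an inclusive index span together with the index of its head word, and the inventory of tags treated as punctuation is our own guess.

PUNCTUATION_TAGS = {",", ":", ".", "``", "''"}   # assumed punctuation tag inventory

def reduce_sentence(tagged, base_nps):
    # tagged: list of (word, tag); base_nps: list of (start, end, head) index triples
    head_of = {}
    for start, end, head in base_nps:
        for i in range(start, end + 1):
            head_of[i] = head
    reduced = []
    for i, (word, tag) in enumerate(tagged):
        if tag in PUNCTUATION_TAGS:
            continue                              # punctuation is removed
        if i in head_of and i != head_of[i]:
            continue                              # non-head words inside a baseNP are removed
        reduced.append((word, tag))
    return reduced

Applied to the tagged sentence of Figure 1 with its five baseNPs, this yields the seven-element reduced sentence of Example 1.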
~A baseNP or 'minimal' NP is a non-recursive NP, i.e. none of its child constituents are NPs. The term was first used in (l:tamshaw and Marcus 95). 184 (a) John/NNP Smith/NNP, the/DT president/NN of/IN IBM/NNP, announced/VBD his/PR, P$ res- ignation/NN yesterday/NN . (b) S NP J ~ NP NP NP PP A A IN NP NNP NNP DT NN I a I I I I ] NNP I John Smith the president of IBM VP VBD NP NP PRP$ NN NN I I I announced his resignation yesterday (c) [John NP S VP VBD Smith] [the president] of [IBM] announced [his VP NP vp NP I I resignation ] [yesterday ] (d) B={ [John Smith], [the president], [IBM], [his resignation], [yesterday] } NP S VP NP NP NP NPNPPP INPPNP VBD vP NP D=[ Smith announced, Smith president, president of, of IBM, announced resignation VBD VP NP announced yesterday } Figure 1: An overview of the representation used by the model. (a) The tagged sentence; (b) A candidate parse-tree (the correct one); (c) A dependency representation of (b). Square brackets enclose baseNPs (heads of baseNPs are marked in bold). Arrows show modifier --* head dependencies. Section 2.1 describes how arrows are labeled with non-terminal triples from the parse-tree. Non-head words within baseNPs are excluded from the dependency structure; (d) B, the set of baseNPs, and D, the set of dependencies, are extracted from (c). Thus the reduced sentence is an array of word/tag pairs, S=< (t~l,tl),(@2,f2)...(@r~,f,~)>, where m _~ n. For example for Figure l(a) Example 1 S = < (Smith, ggP), (president, NN), (of, IN), (IBM, NNP), (announced, VBD), (resignation, N N), (yesterday, N g) > Sections 2.1 to 2.4 describe the dependency model. Section 2.5 then describes the baseNP model, which uses bigram tagging techniques similar to (Ramshaw and Marcus 95; Church 88). 2.1 The Mapping from Trees to Sets of Dependencies The dependency model is limited to relationships between words in reduced sentences such as Ex- ample 1. The mapping from trees to dependency structures is central to the dependency model. It is defined in two steps: 1. For each constituent P --.< C1...Cn > in the parse tree a simple set of rules 3 identifies which of the children Ci is the 'head-child' of P. For example, NN would be identified as the head-child of NP ~ <DET JJ 33 NN>, VP would be identified as the head-child of $ -* <NP VP>. Head-words propagate up through the tree, each parent receiv- ing its head-word from its head-child. For example, in S --~ </~P VP>, S gets its head-word, announced, 3The rules are essentially the same as in (Magerman 95; Jelinek et al. 94). These rules are also used to find the head-word of baseNPs, enabling the mapping from S and B to S. 185 from its head-child, the VP. S ( ~ ) NP(Sml*h) VP(announcedl Iq~smah) NPLmu~=nt) J~(presidmt) PP(of) VBD(annoumzdI NP(fesignatian) NP(yeuaerday) NN T ~P I NN NN Smith l~sid~t of IBM ~mounced rmign~ioe ~ y Figure 2: Parse tree for the reduced sentence in Example 1. The head-child of each constituent is shown in bold. The head-word for each constituent is shown in parentheses. 2. Head-modifier relationships are now extracted from the tree in Figure 2. Figure 3 illustrates how each constituent contributes a set of dependency re- lationships. VBD is identified as the head-child of VP ---," <VBD NP NP>. The head-words of the two NPs, resignation and yesterday, both modify the head-word of the VBD, announced. Dependencies are labeled by the modifier non-terminal, lip in both of these cases, the parent non-terminal, VP, and finally the head-child non-terminal, VBD. 
The triple of non- terminals at the start, middle and end of the arrow specify the nature of the dependency relationship - <liP,S,VP> represents a subject-verb dependency, <PP ,liP ,liP> denotes prepositional phrase modifi- cation of an liP, and so on 4. v ~ 7 Figure 3: Each constituent with n children (in this case n = 3) contributes n - 1 dependencies. Each word in the reduced sentence, with the ex- ception of the sentential head 'announced', modifies exactly one other word. We use the notation AF(j) = (hi, Rj) (3) to state that the jth word in the reduced sentence is a modifier to the hjth word, with relationship Rj 5. AF stands for 'arrow from'. Rj is the triple of labels at the start, middle and end of the ar- row. For example, wl = Smith in this sentence, 4The triple can also be viewed as representing a se- mantic predicate-argument relationship, with the three elements being the type of the argument, result and func- tot respectively. This is particularly apparent in Cat- egorial Grammar formalisms (Wood 93), which make an explicit link between dependencies and functional application. 5For the head-word of the entire sentence hj = 0, with Rj=<Label of the root of the parse tree >. So in this case, AF(5) = (0, < S >). and ~5 = announced, so AF(1) = (5, <NP,S,VP>). D is now defined as the m-tuple of dependen- cies: n = {(AF(1),AF(2)...AF(m)}. The model assumes that the dependencies are independent, so that: P(DIS, B) = 11 P(AF(j)IS' B) (4) j=l 2.2 Calculating Dependency Probabilities This section describes the way P(AF(j)]S, B) is es- timated. The same sentence is very unlikely to ap- pear both in training and test data, so we need to back-offfrom the entire sentence context. We believe that lexical information is crucial to attachment de- cisions, so it is natural to condition on the words and tags. Let 1) be the vocabulary of all words seen in training data, T be the set of all part-of-speech tags, and TTCAZA f be the training set, a set of reduced sentences. We define the following functions: • C ( (a, b/, (c, d / ) for a, c c l], and b, d c 7- is the number of times (a,b I and (c,d) are seen in the same reduced sentence in training data. 6 Formally, C((a,b>, <c,d>)= Z h = <a, b), : <e, d)) • ~ ¢ T'R,,AZ~/" k,Z=l..I;I, z#k where h(m) is an indicator function which is 1 if m is true, 0 if x is false. • C (R, (a, b), (c, d) ) is the number of times (a, b / and (c, d) are seen in the same reduced sentence in training data, and {a, b) modifies (c,d) with rela- tionship R. Formally, C (R, <a, b), <e, d) ) = Z h(S[k] = (a,b), SIll = (c,d), AF(k) = (l,R)) -¢ c T'R~gZ2q" k3_-1..1~1, l¢:k (6) • F(RI(a, b), (c, d) ) is the probability that (a, b) modifies (c, d) with relationship R, given that (a, b) and (e, d) appear in the same reduced sentence. The maximum-likelihood estimate of F(RI (a, b), (c, d) ) is: C(R, (a, b), (c, d) ) (7) fi'(Rl<a ,b), <c,d) )= C( (a,b), (c,d) ) We can now make the following approximation: P(AF(j) = (hi, Rj) IS, B) P(R I (S) Ek=l P(P I eNote that we count multiple co-occurrences in a single sentence, e.g. if 3=(<a,b>,<c,d>,<c,d>) then C(< a,b >,< c,d >) = C(< c,d >,< a,b >) = 2. 186 where 79 is the set of all triples of non-terminals. The denominator is a normalising factor which ensures that E P(AF(j) = (k,p) l S, B) = 1 k=l..rn,k~j,pe'P From (4) and (8): P(DIS, B) ~ (9) YT The denominator of (9) is constant, so maximising P(D[S, B) over D for fixed S, B is equivalent to max- imising the product of the numerators, Af(DIS, B). 
(This considerably simplifies the parsing process): m N(DIS, B) = I-[ 6), Zh ) ) (10) j=l 2.3 The Distance Measure An estimate based on the identities of the two tokens alone is problematic. Additional context, in partic- ular the relative order of the two words and the dis- tance between them, will also strongly influence the likelihood of one word modifying the other. For ex- ample consider the relationship between 'sales' and the three tokens of 'of': Example 2 Shaw, based in Dalton, Ga., has an- nual sales of about $1.18 billion, and has economies of scale and lower raw-material costs that are ex- pected to boost the profitability of Armstrong's brands, sold under the Armstrong and Evans-Black names . In this sentence 'sales' and 'of' co-occur three times. The parse tree in training data indicates a relationship in only one of these cases, so this sen- tence would contribute an estimate of ½ that the two words are related. This seems unreasonably low given that 'sales of' is a strong collocation. The lat- ter two instances of 'of' are so distant from 'sales' that it is unlikely that there will be a dependency. This suggests that distance is a crucial variable when deciding whether two words are related. It is included in the model by defining an extra 'distance' variable, A, and extending C, F and /~ to include this variable. For example, C( (a, b), (c, d), A) is the number of times (a, b) and (c, d) appear in the same sentence at a distance A apart. (11) is then maximised instead of (10): rn At(DIS, B) = 1-I P(Rj I ((vj, tj), (~hj, [hj), Aj,ni) j=l (11) A simple example of Aj,hj would be Aj,hj = hj - j. However, other features of a sentence, such as punc- tuation, are also useful when deciding if two words are related. We have developed a heuristic 'dis- tance' measure which takes several such features into account The current distance measure Aj,h~ is the combination of 6 features, or questions (we motivate the choice of these questions qualitatively - section 4 gives quantitative results showing their merit): Question 1 Does the hjth word precede or follow the jth word? English is a language with strong word order, so the order of the two words in surface text will clearly affect their dependency statistics. Question 2 Are the hjth word and the jth word adjacent? English is largely right-branching and head-initial, which leads to a large proportion of de- pendencies being between adjacent words 7. Table 1 shows just how local most dependencies are. Distance 1 < 2 < 5 < 10 Percentage 74.2 86.3 95.6 99.0 Table 1: Percentage of dependencies vs. distance be- tween the head words involved. These figures count baseNPs as a single word, and are taken from WSJ training data. Number of verbs 0 <=1 <=2 Percentage 94.1 98.1 99.3 Table 2: Percentage of dependencies vs. number of verbs between the head words involved. Question 3 Is there a verb between the hjth word and the jth word? Conditioning on the exact dis- tance between two words by making Aj,hj = hj - j leads to severe sparse data problems. But Table 1 shows the need to make finer distance distinctions than just whether two words are adjacent. Consider the prepositions 'to', 'in' and 'of' in the following sentence: Example 3 Oil stocks escaped the brunt of Fri- day's selling and several were able to post gains , including Chevron , which rose 5/8 to 66 3//8 in Big Board composite trading of 2.4 million shares. 
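The first three questions translate directly into code. The sketch below packs the answers into a tuple playing the role of the distance variable; the three punctuation questions introduced below would extend the same tuple. The verb tag inventory and all names here are assumptions for illustration, not taken from the original system.

VERB_TAGS = {"VB", "VBD", "VBG", "VBN", "VBP", "VBZ"}   # assumed verb tag inventory

def distance_features(j, h, reduced_tags):
    # j, h: 1-based positions in the reduced sentence; reduced_tags[i-1] is the tag of word i
    lo, hi = min(j, h), max(j, h)
    head_precedes = h < j                                   # Question 1
    adjacent = (hi - lo) == 1                               # Question 2
    verb_between = any(tag in VERB_TAGS
                       for tag in reduced_tags[lo:hi - 1])  # Question 3: verb strictly between
    return (head_precedes, adjacent, verb_between)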
The prepositions' main candidates for attachment would appear to be the previous verb, 'rose', and the baseNP heads between each preposition and this verb. They are less likely to modify a more distant verb such as 'escaped'. Question 3 allows the parser to prefer modification of the most recent verb - effec- tively another, weaker preference for right-branching structures. Table 2 shows that 94% of dependencies do not cross a verb, giving empirical evidence that question 3 is useful. ZFor example in '(John (likes (to (go (to (University (of Pennsylvania)))))))' all dependencies are between ad- jacent words. 187 Questions 4, 5 and 6 • Are there 0, 1, 2, or more than 2 'commas' be- tween the hith word and the jth word? (All symbols tagged as a ',' or ':' are considered to be 'commas'). • Is there a 'comma' immediately following the first of the hjth word and the jth word? • Is there a 'comma' immediately preceding the second of the hjth word and the jth word? People find that punctuation is extremely useful for identifying phrase structure, and the parser de- scribed here also relies on it heavily. Commas are not considered to be words or modifiers in the de- pendency model - but they do give strong indica- tions about the parse structure. Questions 4, 5 and 6 allow the parser to use this information. 2.4 Sparse Data The maximum likelihood estimator in (7) is likely to be plagued by sparse data problems - C( (,.~j, {j), (wa~,{h,), Aj,h i) may be too low to give a reliable estimate, or worse still it may be zero leav- ing the estimate undefined. (Collins 95) describes how a backed-off estimation strategy is used for mak- ing prepositional phrase attachment decisions. The idea is to back-off to estimates based on less context. In this case, less context means looking at the POS tags rather than the specific words. There are four estimates, El, E2, Ea and E4, based respectively on: 1) both words and both tags; 2) ~j and the two POS tags; 3) ~hj and the two POS tags; 4) the two POS tags alone. E1 = where 8 61 = 62 = 6a = 64 = 7]2 _7_ 773 = E2- ~ Ea= ~ E4= ~- (12) 6a 6~ c( (~,/~), (~.,,/,,, ), as,h~) c( (/-~), <~h~, ~-,,,), ~,~,) C(R~, (~,~~), (/),~), ±~,h~) C(Ro, (~), (~,~.), A~,.,) C(~, (~), ¢.j),,~,.~) (13) c( (~,~, ~j), (~-,.j), Aj,,.j ) = ~ C( (~,j, {j), (=, ~-,.~), Aj,,,j ) xCV c((~), <%), %,,,~) = ~ ~ c( <~, ~), (y, ~,,j), A~,,,,) xelJ y~/~ where Y is the set of all words seen in training data: the other definitions of C follow similarly. Estimates 2 and 3 compete - for a given pair of words in test data both estimates may exist and they are equally 'specific' to the test case example. (Collins 95) suggests the following way of combining them, which favours the estimate appearing more often in training data: E2a - '12 + '~a (14) 62 + 63 This gives three estimates: El, E2a and E4, a similar situation to trigram language modeling for speech recognition (Jelinek 90), where there are tri- gram, bigram and unigram estimates. (Jelinek 90) describes a deleted interpolation method which com- bines these estimates to give a 'smooth' estimate, and the model uses a variation of this idea: If E1 exists, i.e. 61 > 0 ~(Rj I (~J,~J), (~h~,ih~), A~,h~) : A1 x El + ( i - At) x E23 (15) Else If Eus exists, i.e. 62 + 63 > 0 A2 x E23 + (1 - A2) x E4 (16) Else ~'(R~I(~.~,~)), (¢hj,t),j),Aj,hj) = E4 (17) (Jelinek 90) describes how to find A values in (15) and (16) which maximise the likelihood of held-out data. 
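A sketch of the back-off scheme of equations (12)-(17) is given below, purely for illustration. The counts of equation (13) are passed in as num[i] (co-occurrences exhibiting relation R) and den[i] (plain co-occurrences) for the four levels of context, and the interpolation weights lam1 and lam2 are left as parameters, to be set either from held-out data or by the simpler formula given next in the text; the representation is otherwise an assumption of ours.

def backed_off_estimate(num, den, lam1, lam2):
    # num[i], den[i] for i in 1..4: counts at the four back-off levels of equation (12)
    def e23():
        return (num[2] + num[3]) / (den[2] + den[3])      # equation (14)
    def e4():
        return num[4] / den[4] if den[4] else 0.0
    if den[1] > 0:                                        # E1 exists: equation (15)
        return lam1 * num[1] / den[1] + (1 - lam1) * e23()
    if den[2] + den[3] > 0:                               # E23 exists: equation (16)
        return lam2 * e23() + (1 - lam2) * e4()
    return e4()                                           # equation (17)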
We have taken a simpler approach, namely: 61 A1 -- 61+1 62 + 6a A2 - (18) 62 + 6a + 1 These A vMues have the desired property of increas- ing as the denominator of the more 'specific' esti- mator increases. We think that a proper implemen- tation of deleted interpolation is likely to improve results, although basing estimates on co-occurrence counts alone has the advantage of reduced training times. 2.5 The BaseNP Model The overall model would be simpler if we could do without the baseNP model and frame everything in terms of dependencies. However the baseNP model is needed for two reasons. First, while adjacency be- tween words is a good indicator of whether there is some relationship between them, this indicator is made substantially stronger if baseNPs are re- duced to a single word. Second, it means that words internal to baseNPs are not included in the co-occurrence counts in training data. Otherwise, 188 in a phrase like 'The Securities and Exchange Com- mission closed yesterday', pre-modifying nouns like 'Securities' and 'Exchange' would be included in co- occurrence counts, when in practice there is no way that they can modify words outside their baseNP. The baseNP model can be viewed as tagging the gaps between words with S(tart), C(ontinue), E(nd), B(etween) or N(ull) symbols, respectively meaning that the gap is at the start of a BaseNP, continues a BaseNP, is at the end of a BaseNP, is between two adjacent baseNPs, or is between two words which are both not in BaseNPs. We call the gap before the ith word Gi (a sentence with n words has n - 1 gaps). For example, [ 3ohn Smith ] [ the president ] of [ IBM ] has an- nounced [ his resignation ] [ yesterday ] =~ John C Smith B the C president E of S IBM E has N announced S his C resignation B yesterday The baseNP model considers the words directly to the left and right of each gap, and whether there is a comma between the two words (we write ci = 1 if there is a comma, ci = 0 otherwise). Probability estimates are based on counts of consecutive pairs of words in unreduced training data sentences, where baseNP boundaries define whether gaps fall into the S, C, E, B or N categories. The probability of a baseNP sequence in an unreduced sentence S is then: 1-I P(G, I ~,,_,,ti_l, wi,t,,c,) (19) i=2...n The estimation method is analogous to that de- scribed in the sparse data section of this paper. The method is similar to that described in (Ramshaw and Marcus 95; Church 88), where baseNP detection is also framed as a tagging problem. 2.6 Summary of the Model The probability of a parse tree T, given a sentence S, is: P(T[S) = P(B, DIS) = P(BIS ) x P(D[S, B) The denominator in Equation (9) is not actu- ally constant for different baseNP sequences, hut we make this approximation for the sake of efficiency and simplicity. In practice this is a good approxima- tion because most baseNP boundaries are very well defined, so parses which have high enough P(BIS ) to be among the highest scoring parses for a sen- tence tend to have identical or very similar baseNPs. Parses are ranked by the following quantityg: P(BIS ) x AZ(DIS, B) (20) Equations (19) and (11) define P(B]S) and Af(DIS, B). The parser finds the tree which max- imises (20) subject to the hard constraint that de- pendencies cannot cross. 9in fact we also model the set of unary productions, U, in the tree, which are of the form P -~< Ca >. This introduces an additional term, P(UIB , S), into (20). 
2.7 Some Further Improvements to the Model This section describes two modifications which im- prove the model's performance. • In addition to conditioning on whether depen- dencies cross commas, a single constraint concerning punctuation is introduced. If for any constituent Z in the chart Z --+ <.. X ¥ . . > two of its children X and ¥ are separated by a comma, then the last word in ¥ must be directly followed by a comma, or must be the last word in the sentence. In training data 96% of commas follow this rule. The rule also has the benefit of improving efficiency by reducing the number of constituents in the chart. • The model we have described thus far takes the single best sequence of tags from the tagger, and it is clear that there is potential for better integra- tion of the tagger and parser. We have tried two modifications. First, the current estimation meth- ods treat occurrences of the same word with differ- ent POS tags as effectively distinct types. Tags can be ignored when lexical information is available by defining C(a,c)= E C((a,b>, (c,d>) (21) b,deT where 7" is the set of all tags. Hence C (a, c) is the number of times that the words a and c occur in the same sentence, ignoring their tags. The other definitions in (13) are similarly redefined, with POS tags only being used when backing off from lexical information. This makes the parser less sensitive to tagging errors. Second, for each word wi the tagger can provide the distribution of tag probabilities P(tiIS) (given the previous two words are tagged as in the best overall sequence of tags) rather than just the first best tag. The score for a parse in equation (20) then has an additional term, 1-[,'=l P(ti IS), the product of probabilities of the tags which it contains. Ideally we would like to integrate POS tagging into the parsing model rather than treating it as a separate stage. This is an area for future research. 3 The Parsing Algorithm The parsing algorithm is a simple bottom-up chart parser. There is no grammar as such, although in practice any dependency with a triple of non- terminals which has not been seen in training data will get zero probability. Thus the parser searches through the space of all trees with non- terminal triples seen in training data. Probabilities of baseNPs in the chart are calculated using (19), while probabilities for other constituents are derived from the dependencies and baseNPs that they con- tain. A dynamic programming algorithm is used: if two proposed constituents span the same set of words, have the same label, head, and distance from 189 MODEL ~ 40 Words (2245 sentences) < 100 Words (2416 sentences) s (1) 84.9% 84.9% 1.32 57.2% 80.8% 84.3% 84.3% 1.53 54.7% 77.8% (2) 85.4% 85.5% 1.21 58.4% 82.4% 84.8% 84.8% 1.41 55.9% 79.4% (3) 85.5% 85.7% 1.19 59.5% 82.6% 85.0% 85.1% 1.39 56.8% 7.9.6% (4) 85.8% 86.3% 1.14 59.9% 83.6% 85.3% 85.7% 1.32 57.2% 80.8% SPATTER 84.6% 84.9% 1.26 56.6% 81.4% 84.0% 84.3% 1.46 54.0% 78.8% Table 3: Results on Section 23 of the WSJ Treebank. (1) is the basic model; (2) is the basic model with the punctuation rule described in section 2.7; (3) is model (2) with POS tags ignored when lexical information is present; (4) is model (3) with probability distributions from the POS tagger. LI:t/LP = labeled recall/precision. CBs is the average number of crossing brackets per sentence. 0 CBs, ~ 2 CBs are the percentage of sentences with 0 or < 2 crossing brackets respectively. 
VBD NP announced his resignation Scorc=Sl Score=S2 vP VBD NP announced his resignation Score = S1 * $2 * P(Gap--S I announced, his) * P(<np,vp,vbd> I resignation, announced) Distance Measure Yes Yes Yes No No Yes Lexical informationl LR I LP ] CBs 85.0% 85.1% 1.39 76.1% 76.6% 2.26 80.9% 83.6% 1.51 Figure 4: Diagram showing how two constituents join to form a new constituent. Each operation gives two new probability terms: one for the baseNP gap tag between the two constituents, and the other for the dependency between the head words of the two constituents. the head to the left and right end of the constituent, then the lower probability constituent can be safely discarded. Figure 4 shows how constituents in the chart combine in a bottom-up manner. 4 Results The parser was trained on sections 02 - 21 of the Wall Street Journal portion of the Penn Treebank (Mar- cus et al. 93) (approximately 40,000 sentences), and tested on section 23 (2,416 sentences). For compari- son SPATTER (Magerman 95; Jelinek et al. 94) was also tested on section 23. We use the PARSEVAL measures (Black et al. 91) to compare performance: Labeled Precision -- number of correct constituents in proposed parse number of constituents in proposed parse Labeled Recall = number of correct constituents in proposed parse number of constituents in treebank parse Crossing Brackets = number of constituents which violate constituent bound- aries with a constituent in the treebank parse. For a constituent to be 'correct' it must span the same set of words (ignoring punctuation, i.e. all to- kens tagged as commas, colons or quotes) and have the same label l° as a constituent in the treebank 1°SPATTER collapses ADVP and PRT to the same label, for comparison we also removed this distinction when Table 4: The contribution of various components of the model. The results are for all sentences of < 100 words in section 23 using model (3). For 'no lexi- cal information' all estimates are based on POS tags alone. For 'no distance measure' the distance mea- sure is Question 1 alone (i.e. whether zbj precedes or follows ~hj). parse. Four configurations of the parser were tested: (1) The basic model; (2) The basic model with the punctuation rule described in section 2.7; (3) Model (2) with tags ignored when lexical information is present, as described in 2.7; and (4) Model (3) also using the full probability distributions for POS tags. We should emphasise that test data outside of sec- tion 23 was used for all development of the model, avoiding the danger of implicit training on section 23. Table 3 shows the results of the tests. Table 4 shows results which indicate how different parts of the system contribute to performance. 4.1 Performance Issues All tests were made on a Sun SPARCServer 1000E, using 100% of a 60Mhz SuperSPARC processor. The parser uses around 180 megabytes of memory, and training on 40,000 sentences (essentially extracting the co-occurrence counts from the corpus) takes un- der 15 minutes. Loading the hash table of bigram counts into memory takes approximately 8 minutes. Two strategies are employed to improve parsing efficiency. First, a constant probability threshold is used while building the chart - any constituents with lower probability than this threshold are discarded. If a parse is found, it must be the highest ranked parse by the model (as all constituents discarded have lower probabilities than this parse and could 190 calculating scores. not, therefore, be part of a higher probability parse). 
If no parse is found, the threshold is lowered and parsing is attempted again. The process continues until a parse is found. Second, a beam search strategy is used. For each span of words in the sentence the probability, Ph, of the highest probability constituent is recorded. All other constituents spanning the same words must have probability greater than ~-~ for some constant beam size /3 - constituents which fall out of this beam are discarded. The method risks introduc- ing search-errors, but in practice efficiency can be greatly improved with virtually no loss of accuracy. Table 5 shows the trade-off between speed and ac- curacy as the beam is narrowed. I Beam [ Speed [ Sizefl ~ Sentences/minute 118 166 217 261 283 289 Table 5: The trade-off between speed and accuracy as the beam-size is varied. Model (3) was used for this test on all sentences < 100 words in section 23. 5 Conclusions and Future Work We have shown that a simple statistical model based on dependencies between words can parse Wall Street Journal news text with high accuracy. The method is equally applicable to tree or depen- dency representations of syntactic structures. There are many possibilities for improvement, which is encouraging. More sophisticated estimation techniques such as deleted interpolation should be tried. Estimates based on relaxing the distance mea- sure could also be used for smoothing- at present we only back-off on words. The distance measure could be extended to capture more context, such as other words or tags in the sentence. Finally, the model makes no account of valency. Acknowledgements I would like to thank Mitch Marcus, Jason Eisner, Dan Melamed and Adwait Ratnaparkhi for many useful discussions, and for comments on earlier ver- sions of this paper. I would also like to thank David Magerman for his help with testing SPATTER. References E. Black et al. 1991. A Procedure for Quantita- tively Comparing the Syntactic Coverage of En- glish Grammars. Proceedings of the February 1991 DARPA Speech and Natural Language Workshop. T. Briscoe and J. Carroll. 1993. Generalized LR Parsing of Natural Language (Corpora) with Unification-Based Grammars. Computa- tional Linguistics, 19(1):25-60. K. Church. 1988. A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text. Second Conference on Applied Natural Language Process- ing, A CL. M. Collins and J. Brooks. 1995. Prepositional Phrase Attachment through a Backed-off Model. Proceed- ings of the Third Workshop on Very Large Cor- pora, pages 27-38. D. Hindle and M. Rooth. 1993. Structural Ambigu- ity and Lexical Relations. Computational Linguis- tics, 19(1):103-120. F. Jelinek. 1990. Self-organized Language Model- ing for Speech Recognition. In Readings in Speech Recognition. Edited by Waibel and Lee. Morgan Kaufmann Publishers. F. Jelinek, J. Lafferty, D. Magerman, R. Mercer, A. Ratnaparkhi, S. Roukos. 1994. Decision Tree Pars- ing using a Hidden Derivation Model. Proceedings of the 1994 Human Language Technology Work- shop, pages 272-277. J. Lafferty, D. Sleator and, D. Temperley. 1992. Grammatical Trigrams: A Probabilistic Model of Link Grammar. Proceedings of the 1992 AAAI Fall Symposium on Probabilistic Approaches to Natural Language. D. Magerman. 1995. Statistical Decision-Tree Mod- els for Parsing. Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 276-283. D. Magerman and M. Marcus. 1991. Pearl: A Prob- abilistic Chart Parser. 
Proceedings of the 1991 European ACL Conference, Berlin, Germany.
M. Marcus, B. Santorini and M. Marcinkiewicz. 1993. Building a Large Annotated Corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313-330.
F. Pereira and Y. Schabes. 1992. Inside-Outside Reestimation from Partially Bracketed Corpora. Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics, pages 128-135.
L. Ramshaw and M. Marcus. 1995. Text Chunking using Transformation-Based Learning. Proceedings of the Third Workshop on Very Large Corpora, pages 82-94.
A. Ratnaparkhi. 1996. A Maximum Entropy Model for Part-Of-Speech Tagging. Conference on Empirical Methods in Natural Language Processing, May 1996.
M. M. Wood. 1993. Categorial Grammars, Routledge.
| 1996 | 25 |
Two Sources of Control over the Generation of Software Instructions* Anthony Hartley Language Centre University of Brighton, Falmer Brighton BN1 9PH, UK afh@it ri. bton. ac. uk C6cile Paris t Information Technology Research Institute University of Brighton, Lewes Road Brighton BN2 4AT, UK clp@itri, brighton, ac. uk 1 Introduction Our work addresses the generation of software manu- als in French and English, starting from a semantic model of the task to be documented (Paris et al., 1995). Our prime concern is to be able to exercise control over the mapping from the task model to the generated text. We set out to establish whether the task model Mone is sufficient to control the linguis- tic output of a text generation system, or whether additional control is required. In this event, an ob- vious source to explore is the communicative pur- pose of the author, which is not necessarily constant throughout a manuM. Indeed, in a typical software manual, it is possible to distinguish at least three sections, each with a different purpose: a tutorial containing exercises for new users, a series of step- by-step instructions for the major tasks to be ac- complished, and a ready-reference summary of the commands. We need, therefore, to characterise the linguis- tic expressions of the different elements of the task model, and to establish whether these expressions are sensitive or not to their context, that is, the functional section in which they appear. This pa- per presents the results of an analysis we conducted to this end on a corpus of software instructions in French. 2 Methodology The methodology we employed is similar to that en- dorsed by (Biber, 1995). It is summarised as follows: 1. Collect the texts and note their situational char- acteristics. We consider two such character- * This work is partially supported by the Engineering and Physical Sciences Research Council (EPSRC) Grant J19221, by BC/DAAD ARC Project 293, by the Commis- sion of the European Union Grant LRE-62009, and by the Office of Naval Research Grant N00014-96-1-0465. t Starting this fall, Dr. Paris' address will be CSIRO, Division of Information Technology, Sydney Laboratory, Building E6B, Macquarie University Campus, North Ryde, Sydney, NSW 2113, Australia. istics: task structure and communicative pur- pose. 2. Identify the range of linguistic features to be included in the analysis; 3. Code the corpus in terms of the selected fea- tures; 4. Compute the frequency count of each linguistic feature; 5. Identify co-occurrences between linguistic fea- tures and the situational characteristics under consideration. We first carried out a classical sublanguage analy- sis on our corpus as a whole, without differentiating between any of the situational characteristics (Hart- ley and Paris, 1995). This initial description was necessary to give us a clear statement of the lin- guistic potentiM required of our text generator, to which we could relate any restrictions on language imposed by situational variables. Thus we can ac- count for language restrictions by appealing to gen- eral discourse principles, in keeping with the recom- mendations of (Kittredge, 1995) and (Biber, 1995) for the definition of sublanguages. We then correlated task elements with grammati- cM features. Finally, where linguistic realisation was under-determined by task structure alone, we estab- lished whether the communicative purpose provided more discriminating control over the linguistic re- sources available. 
3 Linguistic Framework: Systemic Functional Linguistics Our analysis was carried out within the framework of Systemic-Functional Linguistics (SFL) (Halliday, 1978; Halliday, 1985) which views language as a re- source for the creation of meaning. SFL stratifies meaning into context and language. The strata of the linguistic resources are organised into networks of choices, each choice resulting in a different mean- ing realised (i.e., expressed) by appropriate struc- tures. The emphasis is on paradigmatic choices, as opposed to syntagma~ic structures. Choices made in 192 each stratum constrain the choices available in the stratum beneath. Context thus constrains language. This framework was chosen for several reasons. First, the organisation of linguistic resources accord- ing to this principle is well-suited to natural lan- guage generation, where the starting point is nec- essarily a communicative goal, and the task is to find the most appropriate expression for the in- tended meaning (Matthiessen and Bateman, 1991). Second, a functional perspective offers an advan- tage for multilingual text generation, because of its ability to achieve a level of linguistic description which holds across languages more effectively than do structurally-based accounts. The approach has been shown capable of supporting the sharing of lin- guistic resources between languages as structurally distinct as English and Japanese (Bateman et al., 1991a; Bateman et at., 1991b). It is therefore rea- sonable to expect that at least the same degree of commonality of description is achievable between English and French within this framework. Finally, KPML (Bateman, 1994), the tactical generator we employ, is based on SFL, and it is thus appropriate for us to characterise the corpus in terms immedi- ately applicable to our generator. 4 Coding features Our lexico-grammatical coding was done using the networks and features of the Nigel grammar (Hal- liday, 1985). We focused on four main concerns, guided by previous work on instructional texts, e.g., (Lehrberger, 1986; Plum et at., 1990; Ghadessy, 1993; Kosseim and Lapalme, 1994). • Relations between processes: to determine whether textual cohesion was achieved through conjunctives or through relations implicit in the task structure elements. Among the features considered were clause dependency and conjunction type. • Agency: to see whether the actor perform- ing or enabling a particular action is clearly identified, and whether the reader is explic- itly addressed. We coded here for features such as voice and agent types. • Mood, modality and polarity: to find out the extent to which actions are presented to the reader as being desirable, possible, mandatory, or prohibited. We coded for both true and implicit negatives, and for both personal and impersonal expressions of modality. • Process types: to see how the domain is construed in terms of actions on the part of the user and the software. We coded for sub-categories of material, mental, verbal and relational processes. 5 The Corpus The analysis was conducted on the French version of the Macintosh MacWrite manual (Kaehler, 1983). The manual is derived from an English source by a process of adaptive translation (Sager, 1993), i.e., one which loealises the text to the expectations of the target readership. The fact that the translation is adaptive rather than literal gives us confidence in using this manual for our analysis. 1 Furthermore, we know that Macintosh documentation undergoes thorough local quality control. 
It certainly conforms to the principles of good documentation established by current research on technical documentation and on the needs of end-users, e.g., (Carroll, 1994; Ham- mond, 1994), in that it supplies clear and concise information for the task at hand. Finally, we have been assured by French users of the software that they consider this particular manual to be well writ- ten and to bear no unnatural trace of its origins. Technical manuals within a specific domain con- stitute a sublanguage, e.g., (Kittredge, 1982; Sager et al., 1980). An important defining property of a sublanguage is that of closure, both lexieal and syn- tactic. Lexical closure has been demonstrated by, for example, (Kittredge, 1987), who shows that after as few as the first 2000 words of a sublanguage text, the number of new word types increases little if at all. Other work, e.g., (Biber, 1988; Biber, 1989) and (Grishman and Kittredge, 1986) illustrates the prop- erty of syntactic closure, which means that generally available constructions just do not occur in this or that sublanguage. In the light of these results, we considered a corpus of 15000 words to be adequate for our purposes, at least for an initial analysis. The MacWrite manual is organised into three chapters, corresponding to the three different sec- tions identified earlier: a tutorial, a series of step- by-step instructions for the major word-processing tasks, and a ready-reference summary of the com- mands. We omitted the tutorial because the gen- eration of such text is not our concern, retaining the other two chapters which provide the user with generic instructions for performing relevant tasks, and descriptions of the commands available within MacWrite. The overlap in information between the two chapters offers opportunities to observe differ- ences in the linguistic expressions of the same task structure elements in different contexts. 1We would have preferred to use a manual which orig- inated in French to exclude all possibility of interfer- ence from a source language, but this proved impossi- ble. Surprisingly, it appears that large French compa- nies often have their documents authored in English by francophones and subsequently translated into French. One large French software house that we contacted does author its documentation in French, but had registered considerable customer dissatisfaction with its quality. We decided, therefore, that their material would be un- suitable for our purposes. 193 Goals: Functions: Constraints: Results: Substeps: La s~lection Gloss: Selection Pour sdlectionner un mot, (faites un double-clic sur le mot) Gloss: To select a word, (do a double-click on the word) (Fermer -) Cet article permet de fermer une fen~tre activ~e Gloss: (Close -) This command enables you to close the active window Si vous donnez ~ votre document le titre d'un document ddj~ existant, (une zone de dialogue apparait) Gloss: If you give your document the title of an existing document, (a dialog box appears) (Choisissez Coller dans le menu Edition - ) Une copie du contenu du presse-papiers apparait Gloss: (Choose Paste from the Edit menu -) A copy of the content of the clipboard appears Ferrnez la fen@tre Rechercher Gloss: Close the Find window Ensuite, on ouvre le document de destination Gloss: Next, one opens the target document Figure 1: Examples of task element expressions 6 Task Structure Task structure is constituted by five types of task elements, which we define below. 
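For readability, the examples of Figure 1 can also be laid out as simple records, one per coded unit, pairing each task element type with its French wording and English gloss. This is an illustration only; `TaskElementExample` is not the representation used in the authors' knowledge base.

```python
# The Figure 1 examples recast as records (illustrative, not the authors' formalism).
from dataclasses import dataclass

@dataclass
class TaskElementExample:
    element: str   # 'goal' | 'function' | 'constraint' | 'result' | 'substep'
    french: str
    gloss: str

FIGURE_1 = [
    TaskElementExample("goal", "Pour sélectionner un mot, ...",
                       "To select a word, ..."),
    TaskElementExample("function",
                       "Cet article permet de fermer une fenêtre activée",
                       "This command enables you to close the active window"),
    TaskElementExample("constraint",
                       "Si vous donnez à votre document le titre d'un document déjà existant, ...",
                       "If you give your document the title of an existing document, ..."),
    TaskElementExample("result", "Une copie du contenu du presse-papiers apparaît",
                       "A copy of the content of the clipboard appears"),
    TaskElementExample("substep", "Fermez la fenêtre Rechercher",
                       "Close the Find window"),
]
```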
We used the no- tion of task structure element both as a contextual feature for the analysis and to determine the seg- mentation of the text into units. Each unit is taken to be the expression of a single task element. Our definition of the task elements is based on the concepts and relations commonly chosen to repre- sent a task structure (a goal and its associated plan), e.g., (Fikes and Nilsson, 1971; Sacerdoti, 1977), and on related research, e.g., (Kosseim and Lapalme, 1994). Our generator produces instructions from an underlying semantic knowledge base which uses this representation (Paris et al., 1995). To generate an instruction for performing a task is to chose some task elements to be expressed and linearise them so that they form a coherent set for a given goal the user might have. We distinguish the following ele- ments, and provide examples of them in Figure 1:2 goals: actions that users will adopt as goals and which motivate the use of a plan. functions: actions that represent the functionality of an interface object (such as a menu item). A 2The text in parentheses in the Figure is part of the linguistic context of the task element rather than the element itself. function is closely related to a goal, in that it is also an action that the user may want to per- form. However, the function is accessed through the interface object, and not through a plan. constraints and preconditions: states which must hold before a plan can be employed successfully. The domain model dis- tinguishes constraints (states which cannot be achieved through planning) and preconditions (states which can be achieved through plan- ning). We do not make this distinction in the linguistic analysis and regroup these related task structure elements under one label. We decided to proceed in this way to determine at first how constraints in general are expressed. Moreover, it is not always clear from the text which type of constraint is expressed. Drawing too fine distinctions in the corpus analysis at this point, in the absence of a test for assigning a unit to one of these constraint types, would have rendered the results of the analysis more subjective and thus less reliable. results: states which arise as planned or unplanned effects of carrying out a plan. While it might be important to separate planned and un- planned effects in the underlying representa- tion, we again abstract over them in the lexico- grammatical coding. 194 sub-steps: actions which contribute to the execu- tion of the plan. If the sub-steps are not prim- itive, they can themselves be achieved through other plans. 7 The Coding Procedure No tools exist to automate a functional analysis of text, which makes coding a large body of text a time- consuming task. We first performed a detailed cod- ing of units of texts on approximately 25% of the corpus, or about 400 units, 3 using the WAG coder (O'Donnell, 1995), a tool designed to facilitate a functional analysis. We then used a public-domain concordance pro- gram, MonoConc (Barlow, 1994), to verify the rep- resentativeness of the results. We enumerated the realisations of those features that the first analysis had shown as marked, and produced KWIC 4 list- ings for each set of realisations. We found that the second analysis corroborated the results of the first, consistent with the nature of sublanguages. 
8 Distribution of Grammatical Features over Task Structure and Communicative Purpose We examined the correlations between lexico- grammatical realisations and task elements and com- municative purpose. The results are best expressed using tables generated by WAG: given any system, WAG splits the codings into a number of sets, one for each feature in that system. Percentages and means are computed, and the sets are compared statisti- cally, using the standard T-test. WAG displays the results with an indicator of how statistically signifi- cant a value is compared to the combined means in the other sets. The counts were all done using the local mean, that is, the feature count is divided by the total number of codings which select that fea- ture's system. Full definitions of the features can be found in (Halliday, 1985; Bateman et al., 1990). In some cases, the type of task element is on its own sufficient to determine, or at least strongly con- strain, its linguistic realisation. The limited space available here allows us to provide only a small num- ber of examples, shown in Figure 2. We see that the use of modals is excluded in the expression of func- tion, result and constraint, whereas goal and sub- step do admit modals. As far as the polarity sys- tem is concerned, negation is effectively ruled out for function, goal and substep. Finally, with respect to the mood system, only substep can be realised through imperatives. 3The authors followed guidelines for identifying task element units which had yielded consistent results when used by students coding other corpora. 4Key Word In Context In other cases, however, we observe a diversity of realisations. We highlight here three cases: modality in goal, polarity in constraint, and mood in substep. In such cases, we must appeal to another source of control over the apparently available choices. We have looked to the construct of genre (Martin, 1992) to provide this additional control, on two grounds: (1) since genres are distinguished by their commu- nicative purposes, we can view each of the functional sections already identified as a distinct genre; (2) genre is presented as controlling text structure and realisation. In Martin's view, genre is defined as a staged, goal-oriented social process realised through register, the context of situation, which in turn is realised in language to achieve the goals of a text. Genre is responsible for the selection of a text struc- ture in terms of task elements. As part of the re- alisation process, generic choices preselect a register associated with particular elements of text structure, which in turn preselect lexico-grammatical features. The coding of our text in terms genre and task el- ements thus allows us to establish the role played by genre in the realisations of the task elements. It will also allow us to determine the text structures appropriate in each genre, a study we are currently undertaking. This is consistent with other accounts of text structure for text generation in technical do- mains, e.g., (McKeown, 1985; Paris, 1993; Kittredge et al., 1991). For those cases where the realisation remains under-determined by the task element type, we con- ducted a finer-grained analysis, by overlaying a genre partition on the undifferentiated data. We distin- guished earlier two genres with which we are con- cerned: ready-reference and step-by-step. 
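The 'local mean' used for all the counts reported in the figures below can be made concrete with a small sketch: each feature count is divided by the number of codings that actually select that feature's system, so units that never enter a system do not dilute its percentages. This is an illustration only, not the WAG tool, and the toy codings are invented.

```python
# Sketch of the 'local mean' computation (illustrative; not the WAG coder).
from collections import defaultdict

def local_means(codings):
    """codings: iterable of (system, feature) pairs, one per coded unit."""
    system_totals = defaultdict(int)
    feature_counts = defaultdict(int)
    for system, feature in codings:
        system_totals[system] += 1
        feature_counts[(system, feature)] += 1
    return {(system, feature): count / system_totals[system]
            for (system, feature), count in feature_counts.items()}

codings = [("mood", "imperative"), ("mood", "imperative"), ("mood", "declarative"),
           ("modality", "modal"), ("modality", "non-modal")]
print(local_means(codings)[("mood", "imperative")])   # 2/3, i.e. about 0.67
```

The resulting sets of percentages are what the significance comparisons (the t-tests reported by WAG) are then run over.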
In the manual analysed, we recognised two more specific communicative purposes in the step-by-step section: to enable the reader to perform a task, and to increase the reader's knowledge about the task, the way to achieve it, or the properties of the system as a whole. Because of their distinct communicative purposes, we again feel justified in calling these genres. We label them respectively procedure and elaboration. The intention that the reader should recognise the differences in function of each section is underscored by the use of distinctive typographical devices, such as fonts and lay-out. 5

The first step at this stage of the analysis was to establish whether there was an effective overlap in task elements among the three genres under consideration. The results of this step are shown in Figure 3. Sub-step and goal are found in all three genres, while constraint, result and function occur in both ready-reference and elaboration but are absent from procedure.

5See (Hartley and Paris, 1995) for examples extracted from the manuals.

                           Function   Result   Constraint   Goal   Substep
Modal system   modal            0%       1%           0%    24%       16%
               non-modal      100%      99%         100%    76%       84%
Polarity       positive       100%      90%          68%    97%       97%
               negative         0%      10%          32%     3%        3%
Mood system    declarative    100%     100%         100%   100%       24%
               imperative       0%       0%           0%     0%       76%

Figure 2: Selective realisations of task elements

               Ready-Reference   Procedure   Elaboration
Sub-step                   37%         77%           42%
Goal                       11%         23%           14%
Constraint                 10%          0%           14%
Result                     23%          0%           27%
Function                   11%          0%            3%

Figure 3: Distribution of task structure elements over genres

The next step was to undertake a comparative analysis of the lexico-grammatical features found in the three genres. This analysis indicated that the language employed in these different sections of the text varies greatly. We summarise here the two genres that are strongly contrasted: procedure and ready-reference. Elaboration shares features with both of these.

procedure: The top-level goal of the user is expressed as a nominalisation. Actions to be achieved by the reader are almost exclusively realised by imperatives, directly addressing the reader. These actions are mostly material directed actions, and there are no causatives. Few modals are employed, and, when they are, it is to express obligation impersonally. The polarity of processes is always positive. Procedure employs mostly independent clauses, and, when clause complexes are used, the conjunctions are mostly purpose (linking a user goal and an action) and alternative (linking two user actions or two goals).

ready-reference: In this genre, all task elements are always realised through clauses. The declarative mood predominates, with few imperatives addressing the reader. Virtually all the causatives occur here. On the dimension of modality, the emphasis is on personal possibility, rather than obligation, and on inclination. We find in this genre most of the verbal processes, entirely absent from procedure. Ready-reference is more weighted than procedure towards dependent clauses, and is particularly marked by the presence of temporal conjunctions.

The analysis so far demonstrates that genre, like task structure, provides some measure of control over the linguistic resources but that neither of these alone is sufficient to drive a generation system. The final step was therefore to look at the realisations of the task elements differentiated by genre, in cases where the realisation was not strongly determined by the task element.
We refer the reader back to Figure 2, and the under-constrained cases of modality in goal, polarity in constraint, and mood in substep. Figure 4 shows the realisations of the task element goal with respect to the modal system, which brings into sharp relief the absence of modality from procedure. Figure 5 presents the realisations by genre of the polarity system for constraint. We observe that only positive polarity occurs in ready-reference. Finally, we note from Figure 6 that the realisation of substeps is heavily loaded in favour of imperatives in procedure.

These figures show that genre does indeed provide useful additional control over the expression of task elements, which can be exploited by a text generation system. Neither task structure nor genre alone is sufficient to provide this control, but, taken together, they offer a real prospect of adequate control over the output of a text generator.

              Procedure   Ready-Reference   Elaboration
Non-modal        100.0%             75.0%         72.6%
Modal              0.0%             25.0%         28.4%

Figure 4: Genre-related differences in the modal system for goal

              Ready-Reference   Elaboration
Negative                 0.0%         41.7%
Positive                 100%         58.3%

Figure 5: Genre-related differences in the polarity system for constraint

              Procedure   Ready-Reference   Elaboration
Imperative        97.3%             44.4%         77.6%
Declarative        2.7%             55.6%         22.4%

Figure 6: Genre-related differences in the mood system for substep

9 Related Work

The results from our linguistic analysis are consistent with other research on sublanguages in the instructions domain, in both French and English, e.g., (Kosseim and Lapalme, 1994; Paris and Scott, 1994). Our analysis goes beyond previous work by identifying within the discourse context the means for exercising explicit control over a text generator. An interesting difference with respect to previous descriptions is the use of the true (or direct) imperative to express an action in the procedure genre, as results from (Paris and Scott, 1994) seem to indicate that the infinitive-form of the imperative is preferred in French. These results, however, were obtained from a corpus of instructions mostly for domestic appliances as opposed to software manuals. Furthermore, the use of the infinitive-form in instructions in general as observed by (Kocourek, 1982) is declining, as some of the conventions already common in English technical writing are being adopted by French technical writers, e.g., (Timbal-Duclaux, 1990).

We also note that the patterns of realisations uncovered in our analysis follow the principle of good technical writing practice known as the minimalist approach, e.g., (Carroll, 1994; Hammond, 1994). Moreover, we observe that our corpus does not exhibit shortcomings identified in a Systemic Functional analysis of English software manuals (Plum et al., 1990), such as a high incidence of agentless passives and a failure to distinguish the function of informing from that of instructing.

Other work has focused on the cross-linguistic realisations of two specific semantic relations (generation and enablement) (Delin et al., 1994; Delin et al., 1996), in a more general corpus of instructions for household appliances. Our work focuses on the single application domain of software instructions. However, it takes into consideration the whole task structure and looks at the realisation of semantic elements as found in the knowledge base, instead of two semantic relations not explicitly present in the underlying semantic model.
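To make concrete how the skewed distributions in Figures 2 and 4-6 could be exploited by a generator, the following is an illustrative sketch of a (task element, genre) preselection table. This is not the control mechanism of the KPML-based generator mentioned above; the entries simply follow the near-categorical cells of the figures, and the lookup itself is an assumption made for the example.

```python
# Illustration only: preselections derived from the near-categorical cells of
# Figures 2, 4, 5 and 6 (not the actual KPML resource preselection mechanism).
PRESELECTIONS = {
    ("substep", "procedure"):          {"mood": "imperative", "polarity": "positive"},
    ("goal", "procedure"):             {"modality": "non-modal"},
    ("constraint", "ready-reference"): {"polarity": "positive"},
    # Figure 2: function and result virtually exclude modals and imperatives in any genre.
    ("function", None):                {"mood": "declarative", "modality": "non-modal"},
    ("result", None):                  {"mood": "declarative", "modality": "non-modal"},
}

def preselect(task_element: str, genre: str) -> dict:
    """Features preselected for a (task element, genre) pair; {} if unconstrained."""
    return (PRESELECTIONS.get((task_element, genre))
            or PRESELECTIONS.get((task_element, None))
            or {})

print(preselect("substep", "procedure"))
# {'mood': 'imperative', 'polarity': 'positive'}
```

Cells that are merely skewed rather than categorical (for instance mood for substep in ready-reference) would be treated as preferences rather than hard preselections.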
10 Conclusion In this paper we have shown how genre and task structure provide two essential sources of control over the text generation process. Genre does so by constraining the selection of the task elements and the range of their expressions. These elements, which are the procedural representation of the user's tasks, constitute a layer of control which mediates between genre and text, but which, without genre, cannot control the grammar adequately. The work presented here is informing the devel- opment of our text generator by specifying the nec- essary coverage of the French grammar to be imple- mented, the required discourse structures, and the mechanisms needed to control them. We continue to explore further situational and contextual factors which might allow a system to fully control its avail- able linguistic resources. References Michael Barlow. 1994. A Guide to MonoConc. Athelston, Houston, TX. John A. Bateman, Robert T. Kasper, Johanna D. Moore, and Richard Whitney. 1990. A general or- 197 ganization of knowledge for natural Language pro- cessing: The Penman Upper Model. Technical report, USC/ISI, March. John A. Bateman, Christian M.I.M. Matthiessen, Keizo Nanri, and Licheng Zeng. 1991a. The re-use of linguistic resources across languages in multilin- gual generation components. In Proceedings of the 1991 International Join~ Conference on Artificial Intelligence, Volume 2, Sydney, Australia, pages 966 - 971. Morgan Kaufmann Publishers. John A. Bateman, Christian M.I.M. Matthiessen, Keizo Nanri, and Licheng Zeng. 1991b. Multi- Lingual text generation: an architecture based on functional typology. In International Conference on Current Issues in Computational Linguistics, Penang, Malaysia. John A. Bateman. 1994. KPML: The KOMET- Penman (Multilingual) Development Environ- ment. Technical report, Institut fiir Integrierte Publikations- und Informationssysteme (IPSI), GMD, Darmstadt, September. Release 0.6. Douglas Biber. 1988. Variation Across Speech and Writing. Cambridge University Press, Cambridge UK. Douglas Biber. 1989. A typology of English texts. Linguistics, 27:3-43. Douglas Biber. 1995. Dimensions of Register Vari- ation: A Cross-linguistic comparison. Cambridge University Press, Cambridge UK. John Carroll. 1994. Techniques for minimalist doc- umentation and user interface design. In Quality of Technical Documentation, pages 67-75. Rodopi, Amsterdam. Judy Delin, Anthony Hartley, C@cile Paris, Donia Scott, and Keith Vander Linden. 1994. Ex- pressing procedural relationships in multilingual instructions. In Proceedings of the Seventh In- ternational Workshop on Natural Language Gen- eration, Kennebunkport, MN, 21-24 June 1994, pages 61-70. Judy Delin, Donia Scott, and Anthony Hartley. 1996. Language-specific mappings from seman- tics to syntax. In Proceedings of the 16th Interna- tional Conference on Computational Linguistics (COLING-96), Copenhagen, Denmark, August. R. E. Fikes and Nils Nilsson. 1971. STRIPS: a new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2:189- 208. Mohsen Ghadessy, editor. 1993. Register Analysis: Theory and Practice. Frances Pinter, London. Ralph Grishman and Richard Kittredge, editors. 1986. Analyzing Language in Restricted Domains. Lawrence Erlbaum Associates, Hillsdale, New Jer- sey. Michael A. K. Halliday. 1978. Language as a Social Semiotic: The Social Interpretation of Language and Meaning. Edward Arnold, London. Michael A. K. Halliday. 1985. An Introduction to Functional Grammar. 
Edward Arnold, London. Nick Hammond. 1994. Less is More: The Mini- malist Approach. Usability Now! A Guide to Us- ability. Published by the Open University and the Department of Trade and Industry. Anthony Hartley and C@cile Paris. 1995. French cor- pus analysis and grammatical description. Techni- cal Report Project IED/4/1/5827, ITRI, Novem- ber. Carol Ka~hler. 1983. Macintosh MacWrite. Apple Seedrin, Les Ulis, France. Richard Kittredge, Tanya Korelsky, and Owen Ram- bow. 1991. On the Need for Domain Commu- nication Knowledge. Computational Intelligence, 7(4):305-314, November. Richard Kittredge. 1982. Variation and Homogene- ity of Sublanguages. In Richard Kittredge and J. Lehrberger, editors, Sublanguage: Studies of language in restricted semantic domains, pages 107-137. de Gruyter, Berlin and New York. Richard Kittredge. 1987. The significance of sub- language for automatic translation. In Sergei Nirenburg, editor, Machine Translation: Theoret- ical and methodological issues, pages 59-67. Cam- bridge University Press, London. Richard Kittredge. 1995. Efficiency vs. Generality in InterlinguaL Design: Some Linguistic Consid- erations. In the Working notes of the IJCAI-95 Workshop on Multilingual Text Generation, Au- gust 20-21, Montr@M, Canada. Rostislav Kocourek. 1982. La langue frangaise de la technique et de la science. Brandstetter Verlag, Wiesbaden, Germany. Leila Kosseim and Guy Lapalme. 1994. Content and rhetorical status selection in instructional texts. In Proceedings of the Seventh International Work- shop on Natural Language Generation, Kenneb- unkport, MN, 21-24 June 1994, pages 53-60. John Lehrberger. 1986. Sublanguage Analysis. In RMph Grishman and Richard Kittredge, editors, Analyzing Language in Restricted Domains, pages 19-38. Lawrence Erlbaum Associates, Hillsdale, New Jersey. James R. Martin. 1992. English text: systems and structure. Benjamins, Amsterdam. Christian M.I.M. Matthiessen and John A. Bate- man. 1991. Text generation and systemic- functional linguistics: experiences from English and Japanese. Frances Pinter Publishers and St. Martin's Press, London and New York. 198 Kathleen 1%. McKeown. 1985. Text Generation. Cambridge University Press, New York. Michael O'Donnell. 1995. From Corpus to Cod- ings: Semi-Automating the Acquisition of Lin- guistic Features. Proceedings of the AAAI Spring Symposium on Empirical Methods in Discourse Interpretation and Generation, Stanford Univer- sity, California, March 27 - 29, March. C~cile Paris and Donia Scott. 1994. Stylistic vari- ation in multilingual instructions. In Proceedings of the Seventh International Workshop on Nat- ural Language Generation, Kennebunkport, MN, 21-24 June 1994, pages 45 - 52. C@cile Paris, Keith Vander Linden, Markus Fischer, Anthony Hartley, Lyn Pemberton, Richard Power, and Donia Scott. 1995. A support tool for writ- ing multilingual instructions. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, August 20-25, Montr@al, Canada, pages 1398-1404. C~cile L. Paris. 1993. User Modelling in Text Gen- eration. Frances Pinter, London. Guenter A. Plum, Christian M.I.M. Matthiessen, Michael A.K. Halliday, and Natalie Shea. 1990. The Electronic Discourse Analyzer Project: Re- port on Textual Analysis of FJ Manuals. Techni- cal report, Fujitsu Australia Limited, Documen- tation Engineering Division. EDA Project Deliv- erables; Output 7. Earl D. Sacerdoti. 1977. A Structure for Plans and Behavior. Elsevier, New York. Juan C. Sager, David Dungworth, and Peter F. 
McDonald. 1980. English Special Languages. Brandstetter Verlag, Wiesbaden, Germany.
Juan C. Sager. 1993. Language Engineering and Translation: Consequences of Automation. John Benjamins Publishing Company, Amsterdam.
Louis Timbal-Duclaux. 1990. La communication écrite scientifique et technique. ESF, Paris, France.
| 1996 | 26 |
Chart Generation Martin Kay Stanford Universi~ and Xerox Palo Alto Research Center kay@pare, xerox, eom Abstract Charts constitute a natural uniform architecture for parsing and generation provided string position is replaced by a notion more appropriate to logical forms and that measures are taken to curtail gener- ation paths containing semantically incomplete phrases. 1 Charts Shieber (1988) showed that parsing charts can be also used in generation and raised the question, which we take up again here, of whether they constitute a natural uniform architecture for parsing and generation. In particular, we will be interested in the extent to which they bring to the generation process advantages comparable to those that make them attractive in parsing. Chart parsing is not a well defined notion. The usual conception of it involves at least four related ideas: Inactive edges. In context-free grammar, all phrases of a given category that cover a given part of the string are equivalent for the purposes of constructing larger phrases. Efficiency comes from collecting equivalent sets of phrases into (inactive) edges and constructing edges from edges rather than phrases from phrases. Active edges. New phrases of whatever size can be built by considering existing edges pair-wise if provision is made for partial phrases. Partial phrases are collected into edges that are said to be active because they can be thought of as actively seeking material to complete them. The algorithm schema. Newly created edges are placed on an agenda. Edges are moved from the agenda to the chart one by one until none remains to be moved. When an edge is moved, all interactions between it and edges already in the chart are considered and any new edges that they give rise to are added to the agenda. Indexing. The positions in the string at which phrases begin and end can be used to index edges so that the algorithm schema need consider interactions only between adjacent pairs. Chart parsing is attractive for the analysis of natural lan- guages, as opposed to programming languages, for the way in which it treats ambiguity. Regardless of the number of alternative structures for a particular string that a given phrase participates in, it will be constructed once and only once. Although the number of structures of a string can grow exponentially with the length of the string, the number of edges that needs to be constructed grows only with the square of the string length and the whole parsing process can be accomplished in cubic time. Innumerable variants of the basic chart parsing scheme are possible. For example, if there were languages with truly free word order, we might attempt to characterize them by rules like those of context-free grammar, but with a somewhat different interpretation. Instead of replacing non- terminal symbols in a derivation with strings from the right- hand side of corresponding rules, we would remove the nonterminal symbol and insert the symbols from the right- hand side of the rule at arbitrary places in the string. A chart parser for languages with free word order would be a minor variant of the standard one. An edge would take the form X~,, where v is a vector with a bit for every word in the string and showing which of those words the edge covers. There is no longer any notion of adjacency so that there would be no indexing by string position. Inter- esting interactions occur between pairs of edges whose bit vectors have empty intersections, indicating that they cover disjoint sets of words. 
There can now be as many edges as bit-vectors and, not surprisingly, the computational com- plexity of the parsing process increases accordingly. 2 Generation A parser is a transducer from strings to structures or logical forms. A generator, for our purposes, is the inverse. One way to think of it, therefore, is as a parser of structures or logical forms that delivers analyses in the form of strings. This view has the apparent disadvantage of putting insignif- icant differences in the syntax of a logical forms, such as the relative order of the arguments to symmetric operators, on the same footing as more significant facts about them. We know that it will not generally be possible to reduce 200 logical expressions to a canonical form but this does not mean that we should expect our generator to be compro- mised, or even greatly delayed, by trivial distinctions. Con- siderations of this kind were, in part, responsible for the recent resurgence of interest in "flat" representations of log- ical form (Copestake et a/.,1996) and for the representa- tions used for transfer in Shake-and-Bake translation (Whitelock, 1992). They have made semantic formalisms like those now usually associated with Davison (Davidson, 1980, Parsons, 1990) attractive in artificial intelligence for many years (Hobbs 1985, Kay, 1970). Operationally, the attraction is that the notations can be analyzed largely as free word-order languages in the manner outlined above. Consider the expression ( 1 ) (1) r: run(r), past(r), fast(r), argl (r,j), name(j, John) which we will take as a representation of the logical form of the sentences John ran fast and John ran quickly. It consists of a distinguished index (r) and a list of predicates whose relative order is immaterial. The distinguished index identi- fies this as a sentence that makes a claim about a running event. "John" is the name of the entity that stands in the 'argl' relation to the running which took place in the past and which was fast. Nothing turns on these details which will differ with differing ontologies, logics, and views of semantic structure. What concerns us here is a procedure for generating a sentence from a structure of this general kind. Assume that the lexicon contains entries like those in (2) in which the italicized arguments to the semantic predi- cates are variables. (2) Words Cat Semantics John np(x) x: name(x, John) ran vp(x, y) x: run(x), argl (x, y), past(x) fast adv(x) x: fast(x) quickly adv(x) x: fast(x) A prima facie argument for the utility of these particular words for expressing (I) can be made simply by noting that, modulo appropriate instantiation of the variables, the semantics of each of these words subsumes (1). 3 The Algorithm Schema The entries in (2), with their variables suitably instantiated, become the initial entries of an agenda and we begin to move them to the chart in accordance with the algorithm schema, say in the order given. The variables in the 'Cat' and 'Semantics' columns of (2) provide the essential link between syntax and semantics. The predicates that represent the semantics of a phrase will simply be the union of those representing the constituents. The rules that sanction a phrase (e.g. (3) below) show which variables from the two parts are to be identified. When the entry for John is moved, no interactions are possible because the chart is empty. When run is moved, the sequence John ran is considered as a possible phrase on the basis of rule (3). (3) s(x) ~ rip(y), vp(x, 3'). 
With appropriate replacements for variables, this maps onto the subset (4) of the original semantic specification in (1). (4) r: run(r), past(r), argl(r,j), name(j, John) Furthermore it is a complete sentence. However, it does not count as an output to the generation process as a whole because it subsumes some but not all of (1). It therefore simply becomes a new edge on the agenda. The string ran fast constitutes a verb phrase by virtue of rule (5) giving the semantics (6), and the phrase ran quickly with the same semantics is put on the agenda when the quickly edge is move to the chart. (5) vp(x) ~ vp(x) adv(x) (6) r: run(r), past(r), fast(r), arg 1 (r, y) The agenda now contains the entries in (7). (7) Words Cat Semantics John ran s(r) r: run(r), past(r), argl(r,j), name(j, John) ran fast vp(r, j) r: run(r), past(r), fast(r), argl(r,j) ran quickly vp(r, j) r: run(r), past(r), fast(r), argl(r,j) Assuming that adverbs modify verb phrases and not sen- tences, there will be no interactions when the John ran edge is moved to the chart. When the edge for ran .fast is moved, the possibility arises of creating the phrase tan fast quickly as well as ran fast fast. Both are rejected, however, on the grounds that they would involve using a predicate from the original semantic specification more than once. This would be simi- lar to allowing a given word to be covered by overlapping phrases in free word-order parsing. We proposed eliminat- ing this by means of a bit vector and the same technique applies here. The fruitful interactions that occur here are between ran .fast and ran quickly on the one hand, and John 201 on the other. Both give sentences whose semantics sub- sumes the entire input. Several things are noteworthy about the process just outlined. !. Nothing turns on the fact that it uses a primitive version of event semantics. A scheme in which the indices were handles referring to subexpressions in any variety of fiat semantics could have been treated in the same way. Indeed, more conventional formalisms with richly recursive syntax could be converted to this form on the fly. 2. Because all our rules are binary, we make no use of active edges. 3. While it fits the conception of chart parsing given at the beginning of this paper, our generator does not involve string positions centrally in the chart represen- tation. In this respect, it differs from the proposal of Shieber (1988) which starts with all word edges leav- ing and entering a single vertex. But there is essentially no information in such a representation. Neither the chart nor any other special data structure is required to capture the fact that a new phrase may be constructible out of any given pair, and in either order, if they meet certain syntactic and semantic criteria. 4. Interactions must be considered explicitly between new edges and all edges currently in the chart, because no indexing is used to identify the existing edges that could interact with a given new one. 5. The process is exponential in the worst case because, if a sentence contains a word with k modifiers, then a version it will be generated with each of the 2 k subsets of those modifiers, all but one of them being rejected when it is finally discovered that their semantics does not subsume the entire input. If the relative orders of the modifiers are unconstrained, matters only get worse. Points 4 and 5 are serious flaws in our scheme for which we shall describe remedies. 
Point 2 will have some importance for us because it will turn out that the indexing scheme we propose will require the use of distinct active and inactive edges, even when the rules are all binary. We take up the complexity issue first, and then turn to bow the efficiency of the generation chart might be enhanced through indexing. 4 Internal and External Indices The exponential factor in the computational complexity of our generation algorithm is apparent in an example like (8). (8) Newspaper reports said the tall young Polish athlete ran fast The same set of predicates that generate this sentence clearly also generate the same sentence with deletion of all subsets of the words tall, young, and Polish for a total of 8 strings. Each is generated in its entirety, though finally rejected because it fails to account for all of the semantic material. The words newspaper and fast can also be deleted independently giving a grand total of 32 strings. We concentrate on the phrase tall young Polish athlete which we assumed would be combined with the verb phrase ran fast by the rule (3). The distinguished index of the noun phrase, call it p, is identified with the variable y in the rule, but this variable is not associated with the syntactic cate- gory, s, on the left-hand side of the rule. The grammar has access to indices only through the variables that annotate grammatical categories in its rules, so that rules that incor- porate this sentence into larger phrases can have no further access to the index p. We therefore say that p is internal to the sentence the tall young Polish athlete ran fast. The index p would, of course, also be internal to the sentences the young Polish athlete ran fast, the tall Polish athlete ran fast, etc. However, in these cases, the semantic material remaining to be expressed contains predicates that refer to this internal index, say 'tall(p)', and 'young(p)'. While the lexicon may have words to express these predi- cates, the grammar has no way of associating their referents with the above noun phrases because the variables corre- sponding to those referents are internal. We conclude that, as a matter of principle, no edge should be constructed if the result of doing so would be to make internal an index occurring in part of the input semantics that the new phrase does not subsume. In other words, the semantics of a phrase must contain all predicates from the input specification that refer to any indices internal to it. This strategy does not pre- vent the generation of an exponential number of variants of phrases containing modifiers. It limits proliferation of the ill effects, however, by allowing only the maximal one to be incorporated in larger phrases. In other words, if the final result has phrases with m and n modifiers respectively, then 2 n versions of the first and 2 m of the second will be created, but only one of each set will be incorporated into larger phrases and no factor of 2 (n+m) will be introduced into the cost of the process. 5 Indexing String positions provide a natural way to index the strings input to the parsing process for the simple reason that there are as many of them as there are words but, for there to be any possibility of interaction between a pair of edges, they must come together at just one index. These are the natural points of articulation in the domain of strings. They cannot fill this role in generation because they are not natural prop- erties of the semantic expressions that are the input to the process. 
The corresponding natural points of articulation in 202 flat semantic structures are the entities that we have already been referring to as indices. In the modified version of the procedure, whenever a new inactive edge is created with label B(b ...), then for all rules of the form in (9), an active edge is also created with label A(...)/C(c ...). (9) A(...) ~ B(b ...) C(c ...) This represents a phrase of category A that requires a phrase of category C on the right for its completion. In these labels, b and c are (variables representing) the first, or distin- guished indices associated with B and C. By analogy with parsing charts, an inactive edge labeled B(b ...) can be thought of as incident from vertex b, which means simply that it is efficiently accessible through the index b. An active edge A(...)/C(c ...) should be thought of as incident from, or accessible through, the index c. The key property of this scheme is that active and inactive edges interact by virtue of indices that they share and, by letting vertices correspond to indices, we collect together sets of edges that could interact. We illustrate the modified procedure with the sentence (10) whose semantics we will take to be (11), the grammar rules (12)-(14), and the lexical entries in (15). (10) The dog saw the cat. (11 ) dog(d), def(d), saw(s), past(s), cat(c), def(c), argl (s, d), arg2(s, c). (l 2) s(x) ~ np(y) vp(x, y) (13) vp(x,y) --* v(x,y, z) np(z) (14) rip(x) ~ det(x) n(x) (15) Words cat saw dog the Cat Semantics n(x) x: cat(x) v(x, y, z) x: see(x), past(x), argl (x, y), arg2(xcz) n(x) x: dog(x) det(x) x: def(x) The procedure will be reminiscent of left-corner parsing. Arguments have been made in favor of a head-driven strat- egy which would, however, have been marginally more complex (e.g. in Kay (1989), Shieber, et el. (1989)) and the differences are, in any case, not germane to our current con- cerns. The initial agenda, including active edges, and collect- ing edges by the vertices that they are incident from, is given in (16). The grammar is consulted only for the purpose of cre- ating active edges and all interactions in the chart are between active and inactive pairs of edges incident from the same vertex. (16) Vert d Words the the dog saw saw the the cat Cat det(d) npfd)/n(d) n(d) v(s, d, c) vp(s, d)/np(c) det(c) np(c)/n(c) n(c) Semantics d: deffd) d: def(d) d: dog(d) s: see(s), past(s), argl(s, d), arg2(s, c) r: see(s), past(s), argl(r,j) c: def(c) c: def(c) c: dog(c) (17) Vert d Words the dog saw the cat c the cat s saw the cat Cat np(d) vp(s, d)/np(d) np(c) vp(s, d) Semantics d: dog(d), def(d) s: see(s), past(s), arg 1 (s, d), arg2(s, c), cat(c), def(c) c: cat(c), def(c) s: see(s), past(s), argl (s, d), arg2(s, c), cat(c), def(c) Among the edges in (16), there are two interactions, one at vertices c and d. They cause the first and third edges in (17) to be added to the agenda. The first interacts with the active edge originally introduced by the verb "saw" produc- ing the fourth entry in (17). The label on this edge matches the first item on the right-hand side of rule (12) and the active edge that we show in the second entry is also intro- duced. The final interaction is between the first and second edges in (17) which give rise to the edge in (18). 
This procedure conforms perfectly to the standard algorithm schema for chart parsing, especially in the version that makes predictions immediately following the recognition of the first constituent of a phrase, that is, in the version that is essentially a caching left-corner parser.

(18) Vert: s
     Words: The dog saw the cat
     Cat: s(s)
     Semantics: dog(d), def(d), see(s), past(s), arg1(s, d), arg2(s, c), cat(c), def(c).

6 Acknowledgments
Whatever there may be of value in this paper owes much to the interest, encouragement, and tolerance of my colleagues Marc Dymetman, Ronald Kaplan, John Maxwell, and Hadar Shem Tov. I am also indebted to the anonymous reviewers of this paper.

References
Copestake, A., Dan Flickinger, Robert Malouf, Susanne Riehemann, and Ivan Sag (1996). Translation Using Minimal Recursion Semantics. Proceedings of The Sixth International Conference on Theoretical and Methodological Issues in Machine Translation, Leuven (in press).
Davidson, D. (1980). Essays on Actions and Events. Oxford: The Clarendon Press.
Hobbs, J. R. (1985). Ontological Promiscuity. 23rd Annual Meeting of the Association for Computational Linguistics, Chicago, ACL.
Kay, M. (1970). From Semantics to Syntax. Progress in Linguistics. Bierwisch, Manfred, and K. E. Heidolf. The Hague, Mouton: 114-126.
Kay, M. (1989). Head-driven Parsing. Proceedings of Workshop on Parsing Technologies, Pittsburgh, PA.
Parsons, T. (1990). Events in the Semantics of English. Cambridge, Mass.: MIT Press.
Shieber, S. (1988). A Uniform Architecture for Parsing and Generation. COLING-88, Budapest, John von Neumann Society for Computing Sciences.
Shieber, S. M. et al. (1989). A Semantic-Head-Driven Generation Algorithm for Unification Based Formalisms. 27th Annual Meeting of the Association for Computational Linguistics, Vancouver, B.C.
Whitelock, P. (1992). Shake-and-Bake Translation. COLING-92, Nantes.
| 1996 | 27 |
Evaluating the Portability of Revision Rules for Incremental Summary Generation Jacques Robin http://www.di.ufpe.br/~jr [email protected] Departamento de Inform£tica, Universidade Federal de Pernambuco Caixa Postal 7851, Cidade Universit£ria Recife, PE 50732-970 Brazil Abstract This paper presents a quantitative evalu- ation of the portability to the stock mar- ket domain of the revision rule hierarchy used by the system STREAK to incremen- tally generate newswire sports summaries. The evaluation consists of searching a test corpus of stock market reports for sentence pairs whose (semantic and syntactic) struc- tures respectively match the triggering con- dition and application result of each revi- sion rule. The results show that at least 59% of all rule classes are fully portable, with at least another 7% partially portable. 1 Introduction The project STREAK 1 focuses on the specific issues involved in generating short, newswire style, natural language texts that summarize vast amount of in- put tabular data in their historical context. A series of previous publications presented complementary aspects of this project: motivating corpus analysis in (Robin and McKeown, 1993), new revision-based text generation model in (Robin, 1993), system im- plementation and rule base in (Robin, 1994a) and empirical evaluation of the robustness and scalabil- ity of this new model as compared to the traditional single pass pipeline model in (Robin and McKeown, 1995). The present paper completes this series by describing a second, empirical, corpus-based evalu- ation, this time quantifying the portability to an- other domain (the stock market) of the revision rule hierarchy acquired in the sports domain and imple- mented in STREAK. The goal of this paper is twofold: (1) assessing the generality of this particular rule hi- erarchy and (2) providing a general, semi-automatic 1 Surface Text Reviser Expressing Additional Knowledge. methodology for evaluating the portability of seman- tic and syntactic knowledge structures used for nat- ural language generation. The results reveal that at least 59% of the revision rule hierarchy abstracted from the sports domain could also be used to incre- mentally generate the complex sentences observed in a corpus of stock market reports. I start by providing the context of the evalua- tion with a brief overview of STREAK's revision-based generation model, followed by some details about the empirical acquisition of its revision rules from cor- pus data. I then present the methodology of this evaluation, followed by a discussion of its quantita- tive results. Finally, I compare this evaluation with other empirical evaluations in text generation and conclude by discussing future directions. 2 An overview of STREAK The project STREAK was initially motivated by an- alyzing a corpus of newswire summaries written by professional sportswriters 2. This analysis revealed four characteristics of summaries that challenge the capabilities of previous text generators: concise lin- guistic forms, complex sentences, optional and back- ground facts opportunistically slipped as modifiers of obligatory facts and high paraphrasing power. By greatly increasing the number of content planning and linguistic realization options that the genera- tor must consider, as well as the mutual constraints among them, these characteristics make generating summaries in a single pass impractical. The example run given in Fig. 1 illustrates how STREAK overcomes these difficulties. 
It first gener- ates a simple draft sentence that contains only the obligatory facts to include in any game report (lo- cation, date, game result and key player statistic). It then applies a series of revision rules 3, each one 2This 800,000 word corpus covers a whole NBA (Na- tional Basketball Association) season. 3In Fig. 1, the nile used is indicated above each re- 205 1. Initial draft (basic sentence pattern): "Dallas, TX - Charles Barkley scored 42 points Sunday as the Phoenix Suns defeated the Dallas Mavericks 123-97." 2. Adjunctization of Created into Instrument: "Dallas, TX - Charles Barkley tied a season high wlth 42 points Sunday as the Phoenix Suns defeated the Dallas Mavericks 123-97." 3. Coordinative Conjoin of Clause: "Dallas, TX - Charles Barkley tied a season high with 42 points and Danny A|nge added 21 Sunday as the Phoenix Suns defeated the Dallas Mavericks 123-97." 4. Absorb of Clause in Clause as Result with Agent Control: "Dallas, TX - Charles Barkley tied a season high with 42 points and Danny Ainge came oIT the bench to add 21 Sunday as the Phoenix Suns defeated the Dallas Mavericks 123-97." 5. l~ominalization with 0rdinal Adjoin: "Dallas, TX - Charles Barldey tied a season high with 42 points and Danny Ainge came off the bench to add 21 Sunday as the Phoenix Suns handed the Dallas Mavericks their 13th straight home defeat 123-97." 6. Adjoin of Classifier to NP: "Dallas, TX - Charles Barkley tied a season high with 42 points and Danny Ainge came off the bench to add 21 Sunday as the Phoenix Suns handed the Dallas Mavericks their league worst 13th straight home defeat 123-97." Figure 1: Complex sentence generation through incremental revisions in STREAK opportunistically adding a new fact 4 that either: • Complements an Mready included fact (e.g., re- vision of sentence 2 into 3). • Justifies its relevance by providing its historical background (e.g., revision of sentence 1 into 2). Some of these revisions are non-monotonic, rewording 5 a draft fact to more concisely accommo- date the additional fact (e.g., revision of sentence 1 into 2). Popping additional facts from a prior- ity stack, STREAK stops revising when the summary vised sentence. 4Highlighted in bold in Fig. 1. 5In Fig. 1, words that get deleted are italicized and words that get modified are underlined. Charles Barldey scored 42 points. Those 42 points equal his best scoring performance of the season. Danny Ainge is a teammate of Barkley. They play for the Phoenix Suns. Ainge is a reserve player. Yet he scored 21 points. The high scoring performances by Barkley and Ainge helped the Suns defeat the Dallas Mavericks. The Mav- ericks played on their homecourt in Texas. They had already lost their 12 previous games there. No other team in the league has lost so many games in a row at home. The final score was 123-97. The game was played Sunday. Figure 2: Paragraph of simple sentences paraphrasing a single complex sentence sentence reaches linguistic complexity limits empir- icMly observed in the corpus (e.g., 50 word long or parse tree of depth 10). While STREAK generates only single sentences, those complex sentences convey as much information as whole paragraphs made of simple sentences, only far more fluently and concisely. This is illustrated by the 12 sentence paragraph 6 of Fig. 2, which para- phrases sentence 6 of Fig. 1. Because they express facts essentially independently of one another, such multi-sentence paragraphs are much easier to gener- ate than the complex single sentences generated by STREAK. 
3 Acquiring revision rules from corpus data The rules driving the revision process in STREAK were acquired by reverse engineering 7 about 300 cor- pus sentences. These sentences were initially classi- fied in terms of: • The combination of domain concepts they ex- pressed. • The thematic role and top-level syntactic cate- gory used for each of these concepts. 6This paragraph was not generated by STREAK, it is shown here only for contrastive purposes. v i.e., analyzing how they could be incrementally gen- erated through gradual revisions. 206 The resulting classes, called realization patterns, abstract the mapping from semantic to syntactic structure by factoring out lexical material and syn- tactic details. Two examples of realization patterns are given in Fig. 3. Realization patterns were then grouped into surface decrement pairs consisting of: • A more complex pattern (called the target pat- tern). • A simpler pattern (called the source pattern) that is structurally the closest to the target pat- tern among patterns with one less concept s . The structural transformations from source to tar- get pattern in each surface decrement pair were then hierarchically classified, resulting in the revi- sion rule hierarchy shown in Fig. 4-10. For ex- ample, the surface decrement pair < R~, R 1 >, shown in Fig. 3, is one of the pairs from which the revision rule Adjunctization of Range into Instrument, shown in Fig. 10 was abstracted. It involves displacing the Range argument of the source clause as an Instrument adjunct to accom- modate a new verb and its argument. This revi- sion rule is a sibling of the rule Adjunctization of Created into Instrument used to revise sentence i into 2 in STREAK'S run shown in Fig. 1 (where the Created argument role "42 points" of the verb "to score" in I becomes an Instrument adjunct in 2). The bottom level of the revision rule hierarchy specifies the side revisions that are orthogonal and sometimes accompany the restructuring revisions discussed up to this point. Side revisions do not make the draft more informative, but instead im- prove its style, conciseness and unambiguity. For ex- ample, when STREAK revises sentence (3) into (4) in the example run of Fig. 1, the Agent of the absorbed clause "Danny Ainge added 21 points" becomes con- trolled by the new embedding clause "Danny Ainge came off the bench" to avoid the verbose form: ? "Danny Ainge came off the bench for Danny Ainge to add 21 points". 4 Evaluation methodology In the spectrum of possible evaluations, the eval- uation presented in this paper is characterized as follows: • Its object is the revision rule hierarchy acquired from the sports summary corpus. It thus does not directly evaluate the output of STREAK, but rather the special knowledge structures required by its underlying revision-based model. s i.e., the source pattern expresses the same concept combination than the target pattern minus one concept. The particular property of this revision rule hi- erarchy that is evaluated is cross-domain porta- bility: how much of it could be re-used to gener- ate summaries in another domain, namely the stock market? The basis for this evaluation is corpus data 9. The original sports summary corpus from which the revision rules were acquired is used as the 'training' (or acquisition) corpus and a cor- pus of stock market reports taken from several newswires is used as the 'test' corpus. This test corpus comprises over 18,000 sentences. 
The evaluation procedure is quantitative, mea- suring percentages of revision rules whose target and source realization patterns are observable in the test corpus. It is also semi-automated through the use of the corpus search tool CREP (Duford, 1993) (as explained below). Basic principle As explained in section 3, a re- vision rule is associated with a list of surface decre- ment pairs, each one consisting of: A source pattern whose content and linguistic form match the triggering conditions of the rule (e.g., R~ in Fig. 3 for the rule Adjunctization of Range into Instrument). A target pattern whose content and linguis- tic form can be derived from the source pat- tern by applying the rule (e.g., R 2 in Fig. 3 for the rule Adjunctization of Range into Instrument). This list of decrement pairs can thus be used as the signature of the revision rule to detect its usage in the test corpus. The needed evidence is the simul- taneous presence of two test corpus sentences 1° , each one respectively matching the source and target pat- terns of at least one element in this list. Requiring occurrence of the source pattern in the test corpus is necessary for the computation of conservative porta- bility estimates: while it may seem that one target pattern alone is enough evidence, without the pres- ence of the corresponding source pattern, one cannot rule out the possibility that, in the test domain, this target pattern is either a basic pattern or derived from another source pattern using another revision rule. 9Only the corpus analysis was performed for both do- mains. The implementation was not actually ported to the stock market domain. 1°In general, not from the same report. 207 Realization pattern R~: • Expresses the concept pair: <game-result(winner,loser,score), streak(winner,aspect,result-type,lengt h) >. • Is a target pattern of the revision rule Adjunctization of Range into Instrument. winner aspect l type l streak length [ agent action affected/located location proper verb NP PP det] classifier I noun prep ] Utah extended its win streak to 6 games with Boston stretching its winning spree to 9 outings with [ score ]game-result [ loser instrument PP NP L_J n u m b e ~ J PP a 99-84 triumph - over enl3-d-~V-~ a 118-94 rout of Utah Realization pattern R~: * Expresses the single concept <game-result(winner,loser,score)>. • Is a source pattern of the revision rule Adjunctization of Range into Instrument. • Is a surface decrement of pattern R~ above. winner agent action proper support-verb I score ] game-result I loser range NP det I number I noun Chicago claimed a Y Orlando recorded a 101-95 triumph I PP over New Jersey against New York Figure 3: Realization pattern examples Partially automating the evaluation The soft- ware tool CREP 11 was developed to partially auto- mate detection of realization patterns in a text cor- pus. The basic idea behind CREP is to approximate a realization pattern by a regular expression whose terminals are words or parts-of-speech tags (POS- tags). CR~.P will then automatically retrieve the cor- pus sentences matching those expressions. For ex- ample, the CREP expression C~1 below approximates the realization pattern R~ shown in Fig. 
3: (61) TEAM Of (claimed[recorded)@VBD I- SCORE O= (victory[triumph)@NN O= (over[against)@IN O= TEAM In the expression above, 'VBD', 'NN' and 'IN' are the POS-tags for past verb, singular noun and preposi- tion (respectively), and the sub-expressions 'TEAH' and 'SCORE' (whose recursive definitions are not shown here) match the team names and possible fi- nal scores (respectively) in the NBA. The CREP op- erators 'N=' and 'N-' (N being an arbitrary integer) respectively specify exact and minimal distance of N words, and 'l' encodes disjunction. llcREP was implemented (on top of FLEX, GNUS' ver- sion of LEX) and to a large extent also designed by Du- ford. It uses Ken Church's POS tagger. Because a realization pattern abstracts away from lexical items to capture the mapping from concepts to syntactic structure, approximating such a pattern by a regular expression of words and POS-tags in- volves encoding each concept of the pattern by the disjunction of its alternative lexicalizations. In a given domain, there are therefore two sources of in- accuracy for such an approximation: • Lexical ambiguity resulting in false positives by over-generalization. • Incomplete vocabulary resulting in false nega- tives by over-specialization 12. Lexical ambiguities can be alleviated by writing more context-sensitive expressions. The vocabu- lary can be acquired through additional exploratory CREP runs with expressions containing wild-cards for some concept slots. Although automated corpus search using CREP expressions considerably speeds- up corpus analysis, manual intervention remains 12This is the case for example of C1 above, which is a simplification of the actual expression that was used to search occurrences of R~ in the test corpus (e.g., Cz is missing "win" and "rout" as alternatives for "victory"). 208 necessary to filter out incorrect matches resulting from imperfect approximations. Cross-domain discrepancies Basic similarities between the finance and sports domains form the basis for the portability of the revision rules. In both domains, the core facts reported are statis- tics compiled within a standard temporal unit (in sports, one ballgame; in finance, one stock market session) together with streaks 13 and records com- piled across several such units. This correspondence is, however, imperfect. Consequently, before they can track down usage of a revision rule in the test do- main, the CREP expressions approximating the sig- nature of the rule in the acquisition domain must be adjusted for cross-domain discrepancies to prevent false negatives. Two major types of adjustments are necessary: lexical and thematic. Lexical adjustments handle cases of partial mis- match between the respective vocabularies used to lexicalize matching conceptual structures in each do- main. (e.g.,, the verb "to rebound from" expresses the interruption of a streak in the stock market do- main, while in the basketball domain "to break" or "to snap" are preferred since "to rebound" is used to express a different concept). Thematic adjustments handle cases of partial dif- ferences between corresponding conceptual struc- tures in the acquisition and test domains. For ex- ample, while in sports garae-result involves an- tagonistic teams, its financial domain counterpart session-result concerns only a single indicator. Consequently, the sub-expression for the loser role in the example CI:tEP expression (~1 shown before, and which approximates realization pattern /~ for game-resull; (shown in Fig. 
3), needs to become optional in order to also approximate patterns for session-resul~. This is done using the CREP op- erator ? as shown below: (C1/): TEAM O= (claimedlrecorded)@VBD 1- SCORE O= (victoryltriumph) @NN O= ( (over] against)@IN 0= TEAM) ? Note that it is the CREP expressions used to auto- matically retrieve test corpus sentence pairs attest- ing usage of a revision rule that require this type of adjustment and not the revision rule itself 14. For example, the Adjoin of Frequency PP to Clause revision rule attaches a streak to a session-result clause without loser role in exactly the same way than it attaches a streak to a game-result with 13i.e., series of events with similar outcome. 14Some revision rules do require adjustment, but of another type (cfl Sect. 5). loser role. This is illustrated by the two corpus sentences below: P~: "The Chicago Bulls beat the Phoenix Suns 99 91 for their 3rd straight win" pt: "The Amex Market Value Index inched up 0.16 to 481.94 for its sixth straight advance" Detailed evaluation procedure The overall procedure to test portability of a revision rule con- sists of considering the surface decrement pairs in the rule signature in order, and repeating the following steps: 1. Write a CREP expression for the acquisition tar- get pattern. 2. Iteratively delete, replace or generalize sub- expressions in the CREP expression - to gloss over thematic and lexical discrepancies between the acquisition and test domains, and prevent false negatives - until it matches some test cor- pus sentence(s). 3. Post-edit the file containing these matched sen- tences. If it contains only false positives of the sought target pattern, go back to step 2. Oth- erwise, proceed to step 4. 4. Repeat step (1-3) with the source pattern of the pair under consideration. If a valid match can also be found for this source pattern, stop: the revision rule is portable. Otherwise, start over from step 1 with the next surface decrement pair in the revision rule signature. If there is no next pair left, stop: the revision rule is considered non-portable. Steps (2,3) constitute a general, generate-and-test procedure to detect realization patterns usage in a corpus 15. Changing one CKEP sub-expression may result in going from too specific an expression with no valid match to either: (1) a well-adjusted ex- pression with a valid match, (2) still too specific an expression with no valid match, or (3) already too general an expression with too many matches to be manually post-edited. It is in fact always possible to write more context- sensitive expressions, to manually edit larger no- match files, or even to consider larger test corpora in the hope of finding a match. At some point however, one has to estimate, guided by the results of previ- ous runs, that the likelihood of finding a match is too 15And since most generators rely on knowledge struc- tures equivalent to realization patterns, this procedure can probably be adapted to semi-automatically evaluate the portability of virtually any corpus-based generator. 209 small to justify the cost of further attempts. This is why the last line in the algorithm reads "considered non-portable" as opposed to "non-portable". The algorithm guarantees the validity of positive (i.e., portable) results only. Therefore, the figures pre- sented in the next section constitute in fact a lower- bound estimate of the actual revision rule portability. 5 Evaluation results The results of the evaluation are summarized in Fig. 4-10. 
They show the revision rule hierarchy, with portable classes highlighted in bold. The fre- quency of occurrence of each rule in the acquisition corpus is given below the leaves of the hierarchy. Some rules are same-concept portable: they are used to attach corresponding concepts in each do- main (e.g., Adjoin of Frequency PP to Clause, as explained in Sect. 4). They could be re-used "as is" in the financial domain. Other rules, however, are only different-concept portable: they are used to attach altogether different concepts in each domain. This is the case for example of Adjoin Finite Time Clause to Clause, as illustrated by the two corpus sentences below, where the added temporal adjunct (in bold) conveys a streak in the sports sentence, but a complementary statistics in the financial one: T~: "to lead Utah to a 119-89 trouncing of Denver as the Jazz defeated the Nuggets for the 12th straight time at home." T~: "Volume amounted to a solid 349 million shares as advances out-paced declines 299 to 218.". For different-concept portable rules, the left-hand side field specifying the concepts incorporable to the draft using this rule will need to be changed when porting the rule to the stock market domain. In Fig. 4-10, the arcs leading same-concept portable classes are full and thick, those leading to different- concept portable classes are dotted, and those lead- ing to a non-portable classes are full but thin. 59% of all revision rule classes turned out to be same-concept portable, with another 7% different- concept portable. Remarkably, all eight top-level classes identified in the sports domain had instances same-concept portable to the financial domain, even those involving the most complex non-monotonic re- visions, or those with only a few instances in the sports corpus. Among the bottom-level classes that distinguish between revision rule applications in very specific semantic and syntactic contexts, 42% are same-concept portable with another 10% different- concept portable. Finally, the correlation between high usage frequency in the acquisition corpus and portability to the test corpus is not statistically sig- nificant (i.e., the hypothesis that the more common a rule, the more likely it is to be portable could not be confirmed on the analyzed sample). See (Robin, 1994b) for further details on the evMuation results. There are two main stumbling blocks to porta- bility: thematic role mismatch and side revisions. Thematic role mismatches are cases where the se- mantic label or syntactic sub-category of a con- stituent added or displaced by the rule differ in each domain (e.g., Adjunctization of Created into Instrument vs. Adjoin of Affected into Instrument). They push portability from 92% down to 71%. Their effect could be reduced by allowing STREAK'S reviser to manipulate the draft down to the surface syntactic role level (e.g., in both cor- pora Created and Affected surface as object). Cur- rently, the reviser stops at the thematic role level to allow STREAK to take full advantage of the syntac- tic processing front-end SURGE (Elhadad and Robin, 1996), which accepts such thematic structures as in- put. Accompanying side revisions push portability from 71% to 52%. This suggests that the design of STREAK could be improved by keeping side revisions separate from re-structuring revisions and interleav- ing the applications of the two. Currently, they are integrated together at the bottom of the revision rule hierarchy. 
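In outline, the generate-and-test portability check of Section 4 that produced these figures can be sketched as follows. The sketch is schematic: rule.signature, approximate_as_regex and manually_filter are hypothetical stand-ins for, respectively, the rule's list of surface decrement pairs, the CREP-style word/POS-tag expressions, and the manual post-editing step.

```python
import re

def is_portable(rule, test_corpus_sentences):
    """Generate-and-test portability check for one revision rule (schematic)."""
    for source_pattern, target_pattern in rule.signature:
        # Approximate each realization pattern by a regular expression
        # over words and part-of-speech tags, CREP-style.
        target_re = re.compile(approximate_as_regex(target_pattern))
        source_re = re.compile(approximate_as_regex(source_pattern))
        target_hits = [s for s in test_corpus_sentences if target_re.search(s)]
        source_hits = [s for s in test_corpus_sentences if source_re.search(s)]
        # Manual post-editing is still needed to discard false positives
        # caused by lexical ambiguity in the approximating expressions.
        target_hits = manually_filter(target_hits)
        source_hits = manually_filter(source_hits)
        # Both the target pattern and its source pattern must be attested:
        # a target alone might be basic, or derived by a different rule.
        if target_hits and source_hits:
            return True
    return False
```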
[Figure 4: Revision rule hierarchy: top-levels (monotonic vs. non-monotonic revisions). Figure 5: Absorb revision rule sub-hierarchy. Figure 6: Recast and Nominalize revision rule sub-hierarchy. Figure 7: Demotion and Promotion revision rule sub-hierarchy. Figure 8: Adjoin revision rule sub-hierarchy. Figure 9: Conjoin revision rule sub-hierarchy. Figure 10: Adjunctization revision rule sub-hierarchy. Each figure shows part of the hierarchy as a tree, with the frequency of each rule in the acquisition corpus given below its leaf.]

System | Object of Evaluation | Evaluated Properties | Empirical Basis | Evaluation Procedure
ANA | knowledge structures | conceptual coverage, linguistic coverage | textual corpus | manual
KNIGHT | output text | semantic accuracy, stylistic accuracy | human judges | manual
IMAGENE | output text | stylistic accuracy, stylistic robustness | textual corpus | manual
STREAK | knowledge structures | cross-domain portability, same-domain robustness, same-domain scalability | textual corpus | semi-automatic

Figure 11: Empirical evaluations in language generation

6 Related work

Apart from STREAK, only three generation projects feature an empirical and quantitative evaluation: ANA (Kukich, 1983), KNIGHT (Lester, 1993) and IMAGENE (Van der Linden, 1993).

ANA generates short, newswire-style summaries of the daily fluctuations of several stock market indexes from half-hourly updates of their values. For evaluation, Kukich measures both the conceptual and linguistic (lexical and syntactic) coverages of ANA by comparing the number of concepts and realization patterns identified during a corpus analysis with those actually implemented in the system.

KNIGHT generates natural language concept definitions from a large biological knowledge base, relying on SURGE for syntactic realization. For evaluation, Lester performs a Turing test in which a panel of human judges rates 120 sample definitions by assigning grades (from A to F) for:

• Semantic accuracy (defined as "Is the definition adequate, providing correct information and focusing on what's important?" in the instructions provided to the judges).

• Stylistic accuracy (defined as "Does the definition use good prose and is the information it conveys well organized?" in the instructions provided to the judges).
The judges did not know that half the definitions were computer-generated while the other half were written by four human domain experts. Impres- sively, the results show that: • With respect to semantic accuracy, the human judges could not tell KNIGHT apart from the hu- man writers. * While as a group, humans got statistically sig- nificantly better grades for stylistic accuracy than KNIGHT, the best human writer was single- handly responsible for this difference. IMAGENE generates instructions on how to oper- ate household devices relying on NIGEL (Mann and Matthiessen, 1983) for syntactic realization. The implementation focuses on a very limited aspect of text generation: the realization of purpose relations. Taking as input the description of a pair <operation, purpose of the operation>, augmented by a set of features simulating the communicative context of generation, IMAGENE selects, among the many real- izations of purpose generable by NIGEL (e.g., fronted to-infinitive clause vs. trailing for-gerund clauses), the one that is most appropriate for the simulated context (e.g., in the context of several operations sharing the same purpose, the latter is preferentially expressed before those actions than after them). IM- AGENE's contextual preference rules were abstracted by analyzing an acquisition corpus of about 300 pur- pose clauses from cordless telephone manuMs. For evaluation, Van der Linden compares the purpose realizations picked by IMAGENE to the one in the corresponding corpus text, first on the acquisition corpus and then on a test corpus of about 300 other purpose clauses from manuals for other devices than cordless telephones (ranging from clock radio to au- tomobile). The results show a 71% match on the acquisition corpus 16 and a 52% match on the test corpus. The table of Fig. 11 summarizes the difference on both goal and methodology between the eval- uations carried out in the projects ANA, KNIGHT, IMAGENE and STREAK. In terms of goals, while Kukich and Lester evaluate the coverage or accu- racy of a particular implementation, I instead fo- cus on three properties inherent to the use of the revision-based generation model underlying STREAK: robustness (how much of other text samples from the same domain can be generated without acquiring new knowledge?) and scalability (how much more new knowledge is needed to fully cover these other samples?) discussed in (Robin and McKeown, 1995), and portability to another domain in the present pa- per. Van der Linden does a little bit of both by first measuring the stylistic accuracy of his system for a very restricted sub-domain, and then measuring how it degrades for a more general domain. In itself, measuring the accuracy and coverage of a particular implementation in the sub-domain for which it was designed brings little insights about what generation approach should be adopted in fu- ture work. Indeed, even a system using mere canned text can be very accurate and attain substantial cov- erage if enough hand-coding effort is put into it. However, all this effort will have to be entirely du- plicated each time the system is scaled up or ported to a new domain. Measuring how much of this effort duplication can be avoided when relying on revision- based generation was the very object of the three evaluations carried in the STREAK project. 16This imperfect match on the acquisition corpus seems to result from the heuristic nature of IMAGENE's stylistic preferences: individually, none of them needs to apply to the whole corpus. 
213 In terms of methodology, the main originality of these three evaluations is the use of CREP to par- tially automate reverse engineering of corpus sen- tences. Beyond evaluation, CREP is a simple, but general and very handy tool that should prove use- ful to speed-up a wide range of corpora analyses. 7 Conclusion In this paper, I presented a quantitative evaluation of the portability to the stock market domain of the revision rule hierarchy used by the system STREAK to incrementally generate newswire sports summaries. The evaluation procedure consists of searching a test corpus of stock market reports for sentence pairs whose (semantic and syntactic) structures respec- tively match the triggering condition and application result of each revision rule. The results show that at least 59% of all rule classes are fully portable, with at least another 7% partially portable. Since the sports domain is not closer to the finan- cial domain than to other quantitative domains such as meteorology, demography, business auditing or computer surveillance, these results are very encour- aging with respect to the general cross-domain re- usability potential of the knowledge structures used in revision-based generation. However, the present evaluation concerned only one type of such knowl- edge structures: revision rules. In future work, sim- ilar evaluations will be needed for the other types of knowledge structures: content selection rules, phrase planning rules and lexicalization rules. Acknowledgements Many thanks to Kathy McKeown for stressing the im- portance of empirically evaluating STREAK. The re- search presented in this paper is currently supported by CNPq (Brazilian Research Council) under post-doctoral research grant 150130-95.3. It started out while I was at Columbia University supported by of a joint grant from the Office of Naval Research, by the Advanced Research Projects Agency under contract N00014-89-J- 1782, by National Science Foundation Grants IRT-84- 51438 and GER-90-2406, and by the New York State Science and Technology Foundation under this auspices of the Columbia University CAT in High Performance Computing and Communications in Health Care, a New York State Center for Advanced Technology. References Duford, D. 1993. caEP: a regular expression- matching textual corpus tool. Technical Report CU-CS-005-93. Computer Science Department, Columbia University, New York. Elhadad, M. and Robin, J. 1996. An overview of SURGE: a re-usable comprehensive syntactic realization component. Technical Report 96-03. Computer Science and Mathematics Department, Ben Gurion University, Beer Sheva, Israel. Kukich, K. 1983. Knowledge-based report genera- tion: a knowledge engineering approach to natural language report generation. PhD. Thesis. Depart- ment of Information Science. University of Pitts- burgh. Lester, J.C. 1993. Generating natural language explanations from large-scale knowledge bases. PhD. Thesis. Computer Science Department, University of Texas at Austin. Mann, W.C. and Matthiessen, C. M. 1983. NIGEL: a systemic grammar for text generation. Research Report RR-83-105. ISI. Marina Del Rey, CA. Robin, J. and McKeown, K.R. 1993. Corpus anal- ysis for revision-based generation of complex sen- tences. In Proceedings of the 11th National Con- ference on Artificial Intelligence, Washington DC. (AAAI'93). Robin, J. and McKeown, K.R. 1995. Empirically designing and evaluating a new revision-based model for summary generation. Artificial Intel- ligence. Vol 85. 
Special Issue on Empirical Methods. North-Holland.

Robin, J. 1993. A revision-based generation architecture for reporting facts in their historical context. In New Concepts in Natural Language Generation: Planning, Realization and Systems. Horacek, H. and Zock, M., Eds. Frances Pinter.

Robin, J. 1994a. Automatic generation and revision of natural language summaries providing historical background. In Proceedings of the 11th Brazilian Symposium on Artificial Intelligence, Fortaleza, Brazil. (SBIA'94).

Robin, J. 1994b. Revision-based generation of natural language summaries providing historical background: corpus-based analysis, design, implementation and evaluation. PhD Thesis. Available as Technical Report CU-CS-034-94. Computer Science Department, Columbia University, New York.

Van der Linden, K. and Martin, J.H. 1995. Expressing rhetorical relations in instructional texts: a case study of the purpose relation. Computational Linguistics, 21(1). MIT Press.
Compilation of Weighted Finite-State Transducers from Decision Trees Richard Sproat Bell Laboratories 700 Mountain Avenue Murray Hill, N J, USA rws@bell-labs, tom Michael Riley AT&T Research 600 Mountain Avenue Murray Hill, N J, USA riley@research, att. com Abstract We report on a method for compiling decision trees into weighted finite-state transducers. The key assumptions are that the tree predictions specify how to rewrite symbols from an input string, and the decision at each tree node is stateable in terms of regular expressions on the input string. Each leaf node can then be treated as a separate rule where the left and right contexts are constructable from the decisions made traversing the tree from the root to the leaf. These rules are compiled into trans- ducers using the weighted rewrite-rule rule-compilation algorithm described in (Mohri and Sproat, 1996). 1 Introduction Much attention has been devoted recently to methods for inferring linguistic models from data. One powerful inference method that has been used in various applications are decision trees, and in particular classification and regression trees (Breiman et al., 1984). An increasing amount of attention has also been focussed on finite-state methods for imple- menting linguistic models, in particular finite- state transducers and weighted finite-state trans- ducers; see (Kaplan and Kay, 1994; Pereira et al., 1994, inter alia). The reason for the renewed in- terest in finite-state mechanisms is clear. Finite- state machines provide a mathematically well- understood computational framework for repre- senting a wide variety of information, both in NLP and speech processing. Lexicons, phonological rules, Hidden Markov Models, and (regular) gram- mars are all representable as finite-state machines, and finite-state operations such as union, intersec- tion and composition mean that information from these various sources can be combined in useful 215 and computationally attractive ways. The reader is referred to the above-cited papers (among oth- ers) for more extensive justification. This paper reports on a marriage of these two strands of research in the form of an algorithm for compiling the information in decision trees into weighted finite-state transducers. 1 Given this al- gorithm, information inferred from data and rep- resented in a tree can be used directly in a system that represents other information, such as lexicons or grammars, in the form of finite-state machines. 2 Quick Review of Tree-Based Modeling A general introduction to classification and regres- sion trees ('CART') including the algorithm for growing trees from data can be found in (Breiman et al., 1984). Applications of tree-based modeling to problems in speech and NLP are discussed in (Riley, 1989; Riley, 1991; Wang and Hirschberg, 1992; Magerman, 1995, inter alia). In this section we presume that one has already trained a tree or set of trees, and we merely remind the reader of the salient points in the interpretation of those trees. Consider the tree depicted in Figure 1, which was trained on the TIMIT database (Fisher et al., 1987), and which models the phonetic realization of the English phoneme/aa/(/a/) in various en- vironments (Riley, 1991). When this tree is used in predicting the allophonic form of a particular instance of /aa/, one starts at the root of the tree, and asks questions about the environment in which the/aa/is found. 
Each non-leaf node n dominates two daughter nodes, conventionally labeled 2n and 2n+1; the decision on whether to go left to 2n or right to 2n+1 depends on the answer to the question that is being asked at node n.

1 The work reported here can thus be seen as complementary to recent reports on methods for directly inferring transducers from data (Oncina et al., 1993; Gildea and Jurafsky, 1995).

A concrete example will serve to illustrate. Consider that we have /aa/ in some environment. The first question that is asked concerns the number of segments, including the /aa/ itself, that occur to the left of the /aa/ in the word in which /aa/ occurs. (See Table 1 for an explanation of the symbols used in Figure 1.) In this case, if the /aa/ is initial -- i.e., lseg is 1 -- one goes left; if there are one or more segments to the left in the word, go right. Let us assume that this /aa/ is initial in the word, in which case we go left. The next question concerns the consonantal 'place' of articulation of the segment to the right of /aa/: if it is alveolar, go left; otherwise, if it is of some other quality, or if the segment to the right of /aa/ is not a consonant, go right. Let us assume that the segment to the right is /z/, which is alveolar, so we go left. This lands us at terminal node 4. The tree in Figure 1 shows us that in the training data 119 out of 308 occurrences of /aa/ in this environment were realized as [ao], or in other words that we can estimate the probability of /aa/ being realized as [ao] in this environment as .385. The full set of realizations at this node with estimated non-zero probabilities is as follows (see Table 2 for a relevant set of ARPABET-IPA correspondences):

phone   probability   -log prob. (weight)
ao      0.385         0.95
aa      0.289         1.24
q+aa    0.103         2.27
q+ao    0.096         2.34
ah      0.069         2.68
ax      0.058         2.84

An important point to bear in mind is that a decision tree in general is a complete description, in the sense that for any new data point there will be some leaf node that corresponds to it. So for the tree in Figure 1, each novel instance of /aa/ will be handled by (exactly) one leaf node in the tree, depending upon the environment in which the /aa/ finds itself. Another important point is that each decision tree considered here has the property that its predictions specify how to rewrite a symbol (in context) in an input string. In particular, they specify a two-level mapping from a set of input symbols (phonemes) to a set of output symbols (allophones).

3 Quick Review of Rule Compilation

Work on finite-state phonology (Johnson, 1972; Koskenniemi, 1983; Kaplan and Kay, 1994) has shown that systems of rewrite rules of the familiar form φ → ψ / λ __ ρ, where φ, ψ, λ and ρ are regular expressions, can be represented computationally as finite-state transducers (FSTs): note that φ represents the rule's input, ψ the output, and λ and ρ, respectively, the left and right contexts. Kaplan and Kay (1994) have presented a concrete algorithm for compiling systems of such rules into FSTs. These methods can be extended slightly to include the compilation of probabilistic or weighted rules into weighted finite-state transducers (WFSTs -- see (Pereira et al., 1994)): Mohri and Sproat (1996) describe a rule-compilation algorithm which is more efficient than the Kaplan-Kay algorithm, and which has been extended to handle weighted rules.
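The weights in the leaf-node table above are simply negative natural logarithms of the estimated probabilities. As a small, purely illustrative reference point for the compilation described below (this representation is not the paper's implementation), the conversion can be written as:

```python
import math

# Estimated allophone probabilities at terminal node 4 of the /aa/ tree,
# taken from the table above.
leaf4_distribution = {
    "ao": 0.385, "aa": 0.289, "q+aa": 0.103,
    "q+ao": 0.096, "ah": 0.069, "ax": 0.058,
}

def weighted_outputs(distribution):
    """Map each output phone to its weight, -log(probability)."""
    return {phone: -math.log(p) for phone, p in distribution.items()}

# weighted_outputs(leaf4_distribution) reproduces the weights in the text
# (ao -> 0.95, aa -> 1.24, q+aa -> 2.27, ...) to within rounding of the
# printed probabilities; these weights label the rule outputs below.
```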
For present purposes it is sufficient to observe that, given this extended algorithm, we can allow ψ in the expression φ → ψ / λ __ ρ to represent a weighted regular expression. The compiled transducer corresponding to that rule will replace φ with ψ, with the appropriate weights, in the context λ __ ρ.

4 The Tree Compilation Algorithm

The key requirements on the kind of decision trees that we can compile into WFSTs are (1) the predictions at the leaf nodes specify how to rewrite a particular symbol in an input string, and (2) the decisions at each node are stateable as regular expressions over the input string. Each leaf node represents a single rule. The regular expressions for each branch describe one aspect of the left context λ, right context ρ, or both. The left and right contexts for the rule consist of the intersections of the partial descriptions of these contexts defined for each branch traversed between the root and leaf node. The input φ is predefined for the entire tree, whereas the output ψ is defined as the union of the set of outputs, along with their weights, that are associated with the leaf node. The weighted rule belonging to the leaf node can then be compiled into a transducer using the weighted-rule-compilation algorithm referenced in the preceding section. The transducer for the entire tree can be derived by the intersection of the entire set of transducers associated with the leaf nodes. Note that while regular relations are not generally closed under intersection, the subset of same-length (or, more strictly speaking, length-preserving) relations is closed; see below.

[Figure 1: Tree modeling the phonetic realization of /aa/. All phones are given in ARPABET. Table 2 gives ARPABET-IPA conversions for symbols relevant to this example. See Table 1 for an explanation of other symbols.]

Table 1: Explanation of symbols in Figure 1.
cpn / cp-n: place of articulation of the consonant n segments to the right / left. Values: alveolar; bilabial; labiodental; dental; palatal; velar; pharyngeal; n/a if the segment is a vowel, or there is no such segment.
vpn / vp-n: place of articulation of the vowel n segments to the right / left. Values: central-mid-high; back-low; back-mid-low; back-high; front-low; front-mid-low; front-mid-high; front-high; central-mid-low; back-mid-high; n/a if the segment is a consonant, or there is no such segment.
lseg: number of preceding segments, including the segment of interest, within the word. Values: 1, 2, 3, many.
rseg: number of following segments, including the segment of interest, within the word. Values: 1, 2, 3, many.
str: stress assigned to this vowel. Values: primary, secondary, no (zero) stress; n/a if there is no stress mark.

Table 2: ARPABET-IPA conversion for symbols relevant for Figure 1: aa = ɑ, ao = ɔ, ax = ə, ah = ʌ, q+aa = ʔɑ, q+ao = ʔɔ.

To see how this works, let us return to the example in Figure 1. To start with, we know that this tree models the phonetic realization of /aa/, so we can immediately set φ to be aa for the whole tree. Next, consider again the traversal of the tree from the root node to leaf node 4. The first decision concerns the number of segments to the left of the /aa/ in the word: either none, for the left branch, or one or more, for the right branch.
Assuming that we have a symbol σ representing a single segment, the symbol # representing a word boundary, and allowing for the possibility of intervening optional stress marks (') which do not count as segments, these two possibilities can be represented by the regular expressions for λ in (a) of Table 3.[2] At this node there is no decision based on the righthand context, so the righthand context is free. We can represent this by setting ρ at this node to be Σ*, where Σ (conventionally) represents the entire alphabet: note that the alphabet is defined to be an alphabet of all φ:ψ correspondence pairs that were determined empirically to be possible.

The decision at the left daughter of the root node concerns whether or not the segment to the right is an alveolar. Assuming we have defined classes of segments alv, blab, and so forth (represented as unions of segments), we can represent the regular expression for ρ as in (b) of Table 3. In this case it is λ which is unrestricted, so we can set that at Σ*.

We can derive the λ and ρ expressions for the rule at leaf node 4 by intersecting together the expressions for these contexts defined for each branch traversed on the way to the leaf. For leaf node 4, λ = #Opt(') ∩ Σ* = #Opt('), and ρ = Σ* ∩ Opt(')(alv) = Opt(')(alv).[3] The rule input φ has already been given as aa. The output ψ is defined as the union of all of the possible expressions -- at the leaf node in question -- that aa could become, with their associated weights (negative log probabilities), which we represent here as subscripted floating-point numbers:

ψ = ao_0.95 ∪ aa_1.24 ∪ q+aa_2.27 ∪ q+ao_2.34 ∪ ah_2.68 ∪ ax_2.84

Thus the entire weighted rule can be written as follows:

aa → (ao_0.95 ∪ aa_1.24 ∪ q+aa_2.27 ∪ q+ao_2.34 ∪ ah_2.68 ∪ ax_2.84) / #Opt(') __ Opt(')(alv)

By a similar construction, the rule at node 6, for example, would be represented as:

aa → (aa_0.40 ∪ ao_1.11) / Σ*((cmh) ∪ (bl) ∪ (bml) ∪ (bh)) __ Σ*

Each node thus represents a rule which states that a mapping occurs between the input symbol φ and the weighted expression ψ in the condition described by λ __ ρ. Now, in cases where φ finds itself in a context that is not subsumed by λ __ ρ, the rule behaves exactly as a two-level surface coercion rule (Koskenniemi, 1983): it freely allows φ to correspond to any ψ as specified by the alphabet of pairs. These φ:ψ correspondences are, however, constrained by other rules derived from the tree, as we shall see directly.

The interpretation of the full tree is that it represents the conjunction of all such mappings: for rules 1, 2, ..., n, φ corresponds to ψ1 given condition λ1 __ ρ1, and φ corresponds to ψ2 given condition λ2 __ ρ2, ..., and φ corresponds to ψn given condition λn __ ρn. But this conjunction is simply the intersection of the entire set of transducers defined for the leaves of the tree. Observe now that the φ:ψ correspondences that were left free by the rule of one leaf node are constrained by intersection with the other leaf nodes: since, as noted above, the tree is a complete description, it follows that for any leaf node i, and for any context λ __ ρ not subsumed by λi __ ρi, there is some leaf node j such that λj __ ρj subsumes λ __ ρ.

[3] Strictly speaking, since the λs and ρs at each branch may define expressions of different lengths, it is necessary to left-pad each λ with Σ*, and right-pad each ρ with Σ*. We gloss over this point here in order to make the regular expressions somewhat simpler to understand.
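The leaf-by-leaf construction just described can be sketched schematically as follows. The sketch is illustrative only: compile_weighted_rule, intersect_all, union, weighted and the tree/leaf attribute names are hypothetical stand-ins for the weighted rewrite-rule compiler of Mohri and Sproat (1996) and for regular-expression and transducer intersection, not an actual API.

```python
def leaf_rule(tree, leaf):
    """Build and compile the weighted rule phi -> psi / lambda __ rho for one leaf."""
    # phi: the input symbol is fixed for the whole tree (e.g. "aa").
    phi = tree.input_symbol
    # psi: union of the leaf's outputs, weighted by -log probability.
    psi = union(weighted(phone, weight) for phone, weight in leaf.outputs)
    # lambda, rho: intersect the partial context descriptions contributed
    # by every branch on the path from the root down to this leaf.
    path = tree.path_to(leaf)
    lam = intersect_all(branch.left_context for branch in path)
    rho = intersect_all(branch.right_context for branch in path)
    return compile_weighted_rule(phi, psi, lam, rho)

def tree_transducer(tree):
    """Transducer for one tree: intersection over all of its leaves."""
    return intersect_all(leaf_rule(tree, leaf) for leaf in tree.leaves)

def forest_transducer(forest):
    """Transducer for a forest of trees, e.g. one tree per phoneme."""
    return intersect_all(tree_transducer(tree) for tree in forest)
```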
(a) Root node (lseg):
  left branch:  λ = #Opt('),  ρ = Σ*
  right branch: λ = (#Opt(')σOpt(')) ∪ (#Opt(')σOpt(')σOpt(')) ∪ (#Opt(')σOpt(')σOpt(')(σOpt('))+) = #(Opt(')σ)+Opt('),  ρ = Σ*

(b) Left daughter of the root (cp1):
  left branch:  λ = Σ*,  ρ = Opt(')(alv)
  right branch: λ = Σ*,  ρ = (Opt(')(blab)) ∪ (Opt(')(labd)) ∪ (Opt(')(den)) ∪ (Opt(')(pal)) ∪ (Opt(')(vel)) ∪ (Opt(')(pha)) ∪ (Opt(')(n/a))

Table 3: Regular-expression interpretation of the decisions involved in going from the root node to leaf node 4 in the tree in Figure 1. Note that, as per convention, superscript '+' denotes one or more instances of an expression.

Thus, the transducers compiled for the rules at nodes 4 and 6 are intersected together, along with the rules for all the other leaf nodes. Now, as noted above, and as discussed by Kaplan and Kay (1994), regular relations -- the algebraic counterpart of FSTs -- are not in general closed under intersection; however, the subset of same-length regular relations is closed, since they can be thought of as finite-state acceptors expressed over pairs of symbols.[4] This point can be extended somewhat to include relations that involve bounded deletions or insertions: this is precisely the interpretation necessary for systems of two-level rules (Koskenniemi, 1983), where a single transducer expressing the entire system may be constructed via intersection of the transducers expressing the individual rules (Kaplan and Kay, 1994, pages 367-376). Indeed, our decision tree represents neither more nor less than a set of weighted two-level rules. Each of the symbols in the expressions for λ and ρ actually represents (sets of) pairs of symbols: thus alv, for example, represents all lexical alveolars paired with all of their possible surface realizations. And just as each tree represents a system of weighted two-level rules, so a set of trees -- e.g., where each tree deals with the realization of a particular phone -- represents a system of weighted two-level rules, where each two-level rule is compiled from each of the individual trees.

[4] One can thus define intersection for transducers analogously with intersection for acceptors. Given two machines G1 and G2, with transition functions δ1 and δ2, one can define the transition function of G, δ, as follows: for an input-output pair (i, o), δ((q1, q2), (i, o)) = (q1', q2') if and only if δ1(q1, (i, o)) = q1' and δ2(q2, (i, o)) = q2'.

We can summarize this discussion more formally as follows. We presume a function Compile which, given a rule, returns the WFST computing that rule. The WFST for a single leaf L is thus defined as follows, where φ_T is the input symbol for the entire tree, ψ_L is the output expression defined at L, P_L represents the path traversed from the root node to L, p is an individual branch on that path, and λ_p and ρ_p are the expressions for λ and ρ defined at p:

Rule_L = Compile(φ_T → ψ_L / ⋂_{p∈P_L} λ_p __ ⋂_{p∈P_L} ρ_p)

The transducer for an entire tree T is defined as:

Rule_T = ⋂_{L∈T} Rule_L

Finally, the transducer for a forest F of trees is just:

Rule_F = ⋂_{T∈F} Rule_T

5 Empirical Verification of the Method

The algorithm just described has been empirically verified on the Resource Management (RM) continuous speech recognition task (Price et al., 1988).
Following somewhat the discussion in (Pereira et al., 1994; Pereira and Riley, 1996), we can represent the speech recognition task as the problem of finding the best path in the com- position of a grammar (language model) G, the transitive-closure of a dictionary D mapping be-" tween words and their phonemic representation, a model of phone realization (I), and a weighted lattice representing the acoustic observations A. Thus: BestPath(G o D* o ¢ o A) (1) The transducer ¢ fo= ~e~ RuleT can be con- structed out of the r of 40 trees, one for each phoneme, trained on the TIMIT database. The size of the trees range from 1 to 23 leaf nodes, with a totM of 291 leaves for the entire forest. The model was tested on 300 sentences from the RM task containing 2560 word tokens, and approximately 10,500 phonemes. A version of the model of recognition given in expression (1), where q~ is a transducer computed from the trees, was compared with a version where the trees were used directly following a method described in (Ljolje and Riley, 1992). The phonetic realizations and their weights were identical for both methods, thus verifying the correctness of the compilation algo- rithm described here. The sizes of the compiled transducers can be quite large; in fact they were sufficiently large that instead of constructing ¢b beforehand, we inter- sected the 40 individual transducers with the lat- tice D* at runtime. Table 4 gives sizes for the entire set of phone trees: tree sizes are listed in terms of number of rules (terminal nodes) and raw size in bytes; transducer sizes are listed in terms of number of states and arcs. Note that the entire alphabet comprises 215 symbol pairs. Also given in Table 4 are the compilation times for the indi- vidual trees on a Silicon Graphics R4400 machine running at 150 MHz with 1024 Mbytes of memory. The times are somewhat slow for the larger trees, but still acceptable for off-line compilation. While the sizes of the resulting transducers seem at first glance to be unfavorable, it is im- portant to bear in mind that size is not the only consideration in deciding upon a particular repre- sentation. WFSTs possess several nice properties that are not shared by trees, or handwritten rule- sets for that matter. In particular, once compiled into a WFST, a tree can be used in the same way as a WFST derived from any other source, such as a lexicon or a language model; a compiled WFST can be used directly in a speech recognition model such as that of (Pereira and Riley, 1996) or in a speech synthesis text-analysis model such as that of (Sproat, 1996). Use of a tree directly requires a special-purpose interpreter, which is much less flexible. It should also be borne in mind that the size explosion evident in Table 4 also characterizes rules that are compiled from hand-built rewrite rules (Kaplan and Kay, 1994; Mohri and Sproat, 1996). For example, the text-analysis ruleset for 220 the Bell Labs German text-to-speech (TTS) sys- tem (see (Sproat, 1996; Mohri and Sproat, 1996)) contains sets of rules for the pronunciation of var- ious orthographic symbols. The ruleset for <a>, for example, contains 25 ordered rewrite rules. Over an alphabet of 194 symbols, this compiles, using the algorithm of (Mohri and Sproat, 1996), into a transducer containing 213,408 arcs and 1,927 states. This is 72% as many arcs and 48% as many states as the transducer for/ah/in Ta- ble 4. 
The size explosion is not quite as great here, but the resulting transducer is still large compared to the original rule file, which only requires 1428 bytes of storage. Again, the advantages of rep- resenting the rules as a transducer outweigh the problems of size. 5 6 Future Applications We have presented a practical algorithm for con- verting decision trees inferred from data into weighted finite-state transducers that directly im- plement the models implicit in the trees, and we have empirically verified that the algorithm is cor- rect. Several interesting areas of application come to mind. In addition to speech recognition, where we hope to apply the phonetic realization models described above to the much larger North Amer- ican Business task (Paul and Baker, 1992), there are also applications to TTS where, for example, the decision trees for prosodic phrase-boundary prediction discussed in (Wang and Hirschberg, 1992) can be compiled into transducers and used directly in the WFST-based model of text analysis used in the multi-lingual version of the Bell Lab- oratories TTS system, described in (Sproat, 1995; Sproat, 1996). 7 Acknowledgments The authors wish to thank Fernando Pereira, Mehryar Mohri and two anonymous referees for useful comments. References Leo Breiman, Jerome H. Friedman, Richard A. Olshen, and Charles J. Stone. 1984. Clas- 5Having said this, we note that obviously one would like to decrease the size of the resulting transducers if that is possible. We are currently investigating ways to avoid precompiling the transducers beforehand, but rather to construct 'on the fly', only those portions of the transducers that are necessary for a particular intersection. ARPABET phone • nodes size of tree (bytes) # arcs # states time (sec) zh 1 47 215 1 0.3 jh 2 146 675 6 0.8 aw 2 149 1,720 8 1 f 2 119 669 6 0.9 ng 2 150 645 3 0.8 oy 2 159 1,720 8 1 uh 2 126 645 3 0.9 p 3 252 6,426 90 4 ay 3 228 4,467 38 2 m 3 257 2,711 27 1 ow 3 236 3,010 14 3 sh 3 230 694 8 1 v 3 230 685 8 1 b 4 354 3,978 33 2 ch 4 353 3,010 25 4 th 4 373 1,351 11 2 dh 5 496 1,290 6 3 ey 5 480 11,510 96 27 g 6 427 372,339 3,000 21 k 6 500 6,013 85 9 aa 6 693 18,441 106 15 ah 7 855 40,135 273 110 y 7 712 9,245 43 12 ao 8 1,099 85,439 841 21 eh 8 960 16,731 167 13 er 8 894 101,765 821 31 w 8 680 118,154 1,147 51 hh 9 968 17,459 160 10 1 9 947 320,266 3,152 31 uw 9 1,318 44,868 552 28 z 9 1,045 1,987 33 5 s 10 1,060 175,901 2,032 25 ae 11 1,598 582,445 4,152 231 iy 11 1,196 695,255 9,625 103 d 12 1,414 36,067 389 38 n 16 1,899 518,066 3,827 256 r 16 1,901 131,903 680 69 ih 17 2,748 108,970 669 71 t 22 2,990 1,542,612 8,382 628 ax 23 4,281 295,954 3,966 77 Table 4: Sizes of transducers corresponding to each of the individual phone trees. 221 sification and Regression Trees. Wadsworth & Brooks, Pacific Grove CA. William Fisher, Victor Zue, D. Bernstein, and David Pallet. 1987. An acoustic-phonetic data base. Journal of the Acoustical Society of America, 91, December. Supplement 1. Daniel Gildea and Daniel Jurafsky. 1995. Au- tomatic induction of finite state transducers for simple phonological rules. In 33rd Annual Meeting of the Association for Computational Linguistics, pages 9-15, Morristown, NJ. As- sociation for ComputationM Linguistics. C. Douglas Johnson. 1972. Formal Aspects of Phonological Description. Mouton, Mouton, The Hague. Ronald Kaplan and Martin Kay. 1994. Regular models of phonological rule systems. Compu- tational Linguistics, 20:331-378. Kimmo Koskenniemi. 1983. 
Two-Level Mor- phology: a General Computational Model for Word-Form Recognition and Production. Ph.D. thesis, University of Helsinki, Helsinki. Andrej Ljolje and Michael D. Riley. 1992. Op- timal speech recognition using phone recogni- tion and lexical access. In Proceedings of IC- SLP, pages 313-316, Banff, Canada, October. David Magerman. 1995. Statistical decision-tree models for parsing. In 33rd Annual Meeting of the Association for Computational Linguis- tics, pages 276-283, Morristown, NJ. Associ- ation for Computational Linguistics. Mehryar Mohri and Richard Sproat. 1996. An ef- ficient compiler for weighted rewrite rules. In 34rd Annual Meeting of the Association for Computational Linguistics, Morristown, NJ. Association for Computational Linguistics. Jos~ Oncina, Pedro Garela, and Enrique Vidal. 1993. Learning subsequential transducers for pattern recognition tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15:448-458. Douglas Paul and Janet Baker. 1992. The design for the Wall Street Journal-based CSR corpus. In Proceedings of International Conference on Spoken Language Processing, Banff, Alberta. ICSLP. Fernando Pereira and Michael Riley. 1996. Speech recognition by composition of weighted finite automata. CMP-LG archive paper 9603001. Fernando Pereira, Michael Riley, and Richard Sproat. 1994. Weighted rational transduc- tions and their application to human lan- guage processing. In ARPA Workshop on Human Language Technology, pages 249-254. 222 Advanced Research Projects Agency, March 8-11. Patty Price, William Fisher, Jared Bernstein, and David Pallett. 1988. The DARPA 1000-word Resource Management Database for contin- uous speech recognition. In Proceedings of ICASSP88, volume 1, pages 651-654, New York. ICASSP. Michael Riley. 1989. Some applications of tree- based modelling to speech and language. In Proceedings of the Speech and Natural Lan- guage Workshop, pages 339-352, Cape Cod MA, October. DARPA, Morgan Kaufmann. Michael Riley. 1991. A statistical model for gener- ating pronunciation networks. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, pages Sl1.1.- Sll.4. ICASSP91, October. Richard Sproat. 1995. A finite-state architecture for tokenization and grapheme-to-phoneme conversion for multilingual text analysis. In Susan Armstrong and Evelyne Tzoukermann, editors, Proceedings of the EACL SIGDAT Workshop, pages 65-72, Dublin, Ireland. As- sociation for Computational Linguistics. Richard Sproat. 1996. Multilingual text analy- sis for text-to-speech synthesis. In Proceed- ings of the ECAI-96 Workshop on Extended Finite State Models of Language, Budapest, Hungary. European Conference on Artificial Intelligence. Michelle Wang and Julia Hirschberg. 1992. Au- tomatic classification of intonational phrase boundaries. Computer Speech and Language, 6:175-196. | 1996 | 29 |
Noun-Phrase Analysis in Unrestricted Text for Information Retrieval David A. Evans, Chengxiang Zhai Laboratory for Computational Linguistics Carnegie Mellon Univeristy Pittsburgh, PA 15213 [email protected], [email protected] Abstract Information retrieval is an important ap- plication area of natural-language pro- cessing where one encounters the gen- uine challenge of processing large quanti- ties of unrestricted natural-language text. This paper reports on the application of a few simple, yet robust and efficient noun- phrase analysis techniques to create bet- ter indexing phrases for information re- trieval. In particular, we describe a hy- brid approach to the extraction of mean- ingful (continuous or discontinuous) sub- compounds from complex noun phrases using both corpus statistics and linguistic heuristics. Results of experiments show that indexing based on such extracted sub- compounds improves both recall and pre- cision in an information retrieval system. The noun-phrase analysis techniques are also potentially useful for book indexing and automatic thesaurus extraction. 1 Introduction 1.1 Information Retrieval Information retrieval (IR) is an important applica- tion area of naturaManguage processing (NLP). 1 The IR (or perhaps more accurately "text retrieval") task may be characterized as the problem of select- ing a subset of documents (from a document col- lection) whose content is relevant to the informa- tion need of a user as expressed by a query. The document collections involved in IR are often gi- gabytes of unrestricted natural-language text. A user's query may be expressed in a controlled lan- guage (e.g., a boolean expression of keywords) or, more desirably, a natural language, such as English. A typical IR system works as follows. The doc- uments to be retrieved are processed to extract in- dexing terms or content carriers, which are usually (Evans, 1990; Evans et al., 1993; Smeaton, 1992; Lewis & Sparck Jones, 1996) single words or (less typically) phrases. The index- ing terms provide a description of the document's content. Weights are often assigned to terms to in- dicate how well they describe the document. A (natural-language) query is processed in a similar way to extract query terms. Query terms are then matched against the indexing terms of a document to determine the relevance of each document to the quer3a The ultimate goal of an IR system is to increase both precision, the proportion of retrieved docu- ments that are relevant, as well as recall, the propor- tion of relevant document that are retrieved. How- ever, the real challenge is to understand and rep- resent appropriately the content of a document and quer~ so that the relevance decision can be made ef- ficiently, without degrading precision and recall. A typical solution to the problem of making relevance decisions efficient is to require exact matching of in- dexing terms and query terms, with an evaluation of the 'hits' based on a scoring metric. Thus, for instance, in vector-space models of relevance rank- ing, both the indexing terms of a document and the query terms are treated as vectors (with individual term weights) and the similarity between the two vectors is given by a cosine-distance measure, es- sentially the angle between any two vectors? 1.2 Natural-Language Processing for IR One can regard almost any IR system as perform- ing an NLP task: text is 'parsed" for terms and terms are used to express 'meaning'--to capture document content. 
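As a point of reference for the discussion that follows, the word-level indexing and vector-space matching just outlined can be sketched in a few lines. The sketch is purely illustrative (simple whitespace tokenization and raw term-frequency weights); it is not the indexing scheme of any particular system discussed in this paper.

```python
import math
from collections import Counter

def index_terms(text):
    """Word-based 'parsing': extract word-like strings as indexing terms."""
    return Counter(text.lower().split())

def cosine(doc_terms, query_terms):
    """Cosine of the angle between the document and query term vectors."""
    shared = set(doc_terms) & set(query_terms)
    dot = sum(doc_terms[t] * query_terms[t] for t in shared)
    norm_d = math.sqrt(sum(w * w for w in doc_terms.values()))
    norm_q = math.sqrt(sum(w * w for w in query_terms.values()))
    return dot / (norm_d * norm_q) if norm_d and norm_q else 0.0

# Word-level indexing cannot distinguish "junior college" from
# "college junior": both produce identical term vectors.
doc = index_terms("junior college")
query = index_terms("college junior")
print(cosine(doc, query))  # 1.0
```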
Clearly, most traditional IR sys- tems do not attempt to find structure in the natural- language text in the 'parsing' process; they merely extract word-like strings to use in indexing. Ide- ally, however, extracted structure would directly re- flect the encoded linguistic relations among terms-- captuing the conceptual content of the text better than simple word-strings. There are several prerequisites for effective NLP in an IR application, including the following. 2 (Salton & McGill, 1983) 17 1. Ability to process large amounts of text The amount of text in the databases accessed by modem IR systems is typically measured in gi- gabytes. This requires that the NLP used must be extraordinarily efficient in both its time and space requirements. It would be impractical to use a parser with the speed of one or two sentences per second. 2. Ability to process unrestricted text The text database for an IR task is generally unrestricted natural-language text possibly en- compassing many different domains and top- ics. A parser must be able to manage the many kinds of problems one sees in natural-language corpora, including the processing of unknown words, proper names, and unrecognized struc- tures. Often more is required, as when spelling, transcription, or OCR errors occur. Thus, the NLP used must be especially robust. 3. Need for shallow understanding While the large amount of unrestricted text makes NLP more difficult for IR, the fact that a deep and complete understanding of the text may not be necessary for IR makes NLP for IR relatively easier than other NLP tasks such as machine translation. The goal of an IR system is essentially to classify documents (as relevant or irrelevant) vis-a-vis a query. Thus, it may suffice to have a shallow and partial represen- tation of the content of documents. Information retrieval thus poses the genuine chal- lenge of processing large volumes of unrestricted natural-language text but not necessarily at a deep level. 1.3 Our Work This paper reports on our evaluation of the use of simple, yet robust and efficient noun-phrase analy- sis techniques to enhance phrase-based IR. In par- ticular, we explored an extension of the ~phrase- based indexing in the CLARIT TM system ° using a hybrid approach to the extraction of meaning- ful (continuous or discontinuous) subcompounds from complex noun phrases exploiting both corpus- statistics and linguistic heuristics. Using such sub- compounds rather than whole noun phrases as in- dexing terms helps a phrase-based IR system solve the phrase normalization problem, that is, the prob- lem of matching syntactically different, but semanti- cally similar phrases. The results of our experiments show that both recall and precision are improved by using extracted subcompounds for indexing. 2 Phrase-Based Indexing The selection of appropriate indexing terms is criti- cal to the improvement of both precision and recall in an IR task. The ideal indexing terms would di- rectly represent the concepts in a document. Since 'concepts' are difficult to represent and extract (as well as to define), concept-based indexing is an elusive goal. Virtually all commercial IR systems (with the exception of the CLARIT system) index only on "words', since the identification of words in texts is typically easier and more efficient than the identification of more complex structures. How- ever, single words are rarely specific enough to sup- port accurate discrimination and their groupings are often accidental. 
An often cited example is the contrast between "junior college" and "college ju- nior". Word-based indexing cannot distinguish the phrases, though their meanings are quite different. Phrase-based indexing, on the other hand, as a step toward the ideal of concept-based indexing, can ad- dress such a case directly. Indeed, it is interesting to note that the use of phrases as index terms has increased dramat- ically among the systems that participate in the TREC evaluations. ~ Even relatively traditional word-based systems are exploring the use of multi- word terms by supplementing words with sta- tistical phrases--selected high frequency adjacent word pairs (bigrams). And a few systems, such as CLARIT--which uses simplex noun phrases, attested subphrases, and contained words as in- dex terms--and New York University's TREC systemS--which uses "head-modifier pairs" de- rived from identified noun phrases--have demon- strated the practicality and effectiveness of thor- ough NLP in IR tasks. The experiences of the CLAR1T system are in- structive. By using selective NLP to identify sim- plex NPs, CLARIT generates phrases, subphrases, and individual words to use in indexing documents and queries. Such a first-order analysis of the lin- guistic structures in texts approximates concepts and affords us alternative methods for calculating the fit between documents and queries. In particu- lar, we can choose to treat some phrasal structures as atomic units and others as additional informa- tion about (or representations of) content. There are immediate effects in improving precision: 1. Phrases can replace individual indexing words. For example, if both "dog" and "hot" are used for indexing, they will match any query in which both words occur. But if only the phrase "hot dog" is used as an index term, then it will only match the same phrase, not any of the in- dividual words. 3(Evans et al., 1991; Evans et al., 1993; Evans et al., 1995; Evans et al., 1996) 4 (Harman, 1995; Harman, 1996) 5 (Strzalkowski, 1994) 18 2. Phrases can supplement word-level matches. For example, if only the individual words "ju- nior" and "college" are used for indexing, both "junior college" and "college junior" will match a query with the phrase "junior college" equally well. But if we also use the phrase "junior col- lege" for indexing, then "junior college" will match better than "college junior", even though the latter also will receive some credit as a match at the word level. We can see, then, that it is desirable to distinquish-- and, if possible, extract--two kinds of phrases: those that behave as lexical atoms and those that re- flect more general linguistic relations. Lexical atoms help us by obviating the possibility of extraneous word matches that have nothing to do with true relevance. We do not want "hot" or "dog" to match on "hot dog". In essence, we want to eliminate the effect of the independence assumption at the word level by creating new words--the lexical atoms--in which the individual word dependencies are explicit (structural). More general phrases help us by adding detail. Indeed, all possible phrases (or paraphrases) of ac- tual content in a document are potentially valuable in indexing. In practice, of course, the indexing term space has to be limited, so it is necessary to se- lect a subset of phrases for indexing. 
Short phrases (often nominal compounds) are preferred over long complex phrases, because short phrases have bet- ter chances for matching short phrases in queries and will still match longer phrases owing to the short phrases they have in common. Using only short phrases also helps solve the phrase normal- ization problem of matching syntactically different long phrases (when they share similar meaning). 6 Thus, lexical atoms and small nominal com- pounds should make good indexing phrases. While the CLARIT system does index at the level of phrases and subphrases, it does not currently index on lexical atoms or on the small compounds that can be derived from complex NPs, in particular, reflecting cross-simplex NP dependency relations. Thus, for example, under normal CLARIT process- ing the phrase "the quality of surface of treated stainless steel strip "7 would yield index terms such as "treated stainless steel strip", "treated stainless steel", "stainless steel strip", and "stainless steel" (as a phrase, not lexical atom), along with all the relevant single-word terms in the phrase. But the process would not identify "stainless steel" as a po- tential lexical atom or find terms such as "surface quality", "strip surface", and "treated strip". To achieve more complete (and accurate) phrase- based indexing, we propose to use the following 6 (Smeaton, 1992) ZThis is an actual example from a U.S. patent document. four kinds of phrases as indexing terms: 1. Lexical atoms (e.g., "hot dog" or 2. 3. 4. perhaps "stainless steel" in the example above) Head modifier pairs (e.g., "treated strip" and "steel strip" in the example above) Subcompounds (e.g., "stainless steel strip" in the example above) Cross-preposition modification pairs (e.g., "surface quality" in the example above) In effect, we aim to augment CLARIT indexing with lexical atoms and phrases capturing additional (dis- continuous) modification relations than those that can be found within simplex NPs. It is clear that a certain level of robust and effi- cient noun-phrase analysis is needed to extract the above four kinds of small compounds from a large unrestricted corpus. In fact, the set of small com- pounds extracted from a noun phrase can be re- garded as a weak representation of the meaning of the noun phrase, since each meaningful small com- pound captures a part of the meaning of the noun phrase. In this sense, extraction of such small com- pounds is a step toward a shallow interpretation of noun phrases. Such weak interpretation is use- ful for tasks like information retrieval, document classification, and thesaurus extraction, and indeed forms the basis in the CLARIT system for automated thesaurus discovery. 3 Methodology Our task is to parse text into NPs, analyze the noun phrases, and extract the four kinds of small com- pounds given above. Our emphasis is on robust and efficient NLP techniques to support large-scale applications. For our purposes, we need to be able to identify all simplex and complex NPs in a text. Complex NPs are defined as a sequence of simplex NPs that are associated with one another via prepositional phrases. We do not consider simplex NPs joined by relative clauses. Our approach to NLP involves a hybrid use of corpus statistics supplemented by linguistic heuris- tics. We assume that there is no training data (mak- ing the approach more practically useful) and, thus, rely only on statistical information in the document database itself. 
This is different from many current statistical NLP techniques that require a training corpus. The volume of data we see in IR tasks also makes it impractical to use sophisticated statistical computations.

The use of linguistic heuristics can assist statistical analysis in several ways. First, it can focus the use of statistics by helping to eliminate irrelevant structures from consideration. For example, syntactic category analysis can filter out impossible word modification pairs, such as [adjective, adjective] and [noun, adjective]. Second, it may improve the reliability of statistical decisions. For example, the counting of bigrams that occur only within noun phrases is more reliable for lexical atom discovery than the counting of all possible bigrams that occur in the corpus. In addition, syntactic category analysis is also helpful in adjusting cutoff parameters for statistics. For example, one useful heuristic is that we should use a higher threshold of reliability (evidence) for accepting the pair [adjective, noun] as a lexical atom than for the pair [noun, noun]: a noun-noun pair is much more likely to be a lexical atom than an adjective-noun one.

The general process of phrase generation is illustrated in Figure 1. We used the CLARIT NLP module as a preprocessor to produce NPs with syntactic categories attached to words. We did not attempt to utilize CLARIT complex-NP generation or subphrase analysis, since we wanted to focus on the specific techniques for subphrase discovery that we describe in this paper.

[Figure 1: General Processing for Phrase Generation -- raw text is passed through the CLARIT NP extractor; the resulting NPs are structured by the NP parser, which also yields lexical atoms; the subcompound generator then uses attested terms to produce meaningful subcompounds.]

After preprocessing, the system works in two stages--parsing and generation. In the parsing stage, each simplex noun phrase in the corpus is parsed. In the generation stage, the structured noun phrase is used to generate candidates for all four kinds of small compounds, which are further tested for occurrence (validity) in the corpus.

Parsing of simplex noun phrases is done in multiple phases. At each phase, noun phrases are partially parsed, then the partially parsed structures are used as input to start another phase of partial parsing. Each phase of partial parsing is completed by concatenating the most reliable modification pairs together to form a single unit. The reliability of a modification pair is determined by a score based on frequency statistics and category analysis and is further tested via local optimum phrase analysis (described below). Lexical atoms are discovered at the same time, during simplex noun phrase parsing.

Phrase generation is quite simple. Once the structure of a noun phrase (with marked lexical atoms) is known, the four kinds of small compounds can be easily produced. Lexical atoms are already available. Head-modifier pairs can be extracted based on the modification relations implied by the structure. Subcompounds are just the substructures of the NP. Cross-preposition pairs are generated by enumerating all possible pairs of the heads of each simplex NP within a complex NP in backward order.8

8 (Schwarz, 1990) reports a similar strategy.

To validate discontinuous compounds such as non-sequential head-modifier pairs and cross-preposition pairs, we use a standard technique of CLARIT processing, viz., we test any nominated compounds against the corpus itself. If we find independently attested (whole) simplex NPs that match the candidate compounds, we accept the candidates as index terms. Thus for the NP "the quality of surface of treated stainless steel strip", the head-modifier pairs "treated strip", "stainless steel", "stainless strip", and "steel strip", and the cross-preposition pairs "strip surface", "surface quality", and "strip quality", would be generated as index terms only if we found independent evidence of such phrases in the corpus in the form of free-standing simplex NPs.
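To make the generation-and-validation step concrete, the following sketch (a schematic illustration, not the CLARIT implementation; the function names and the toy data are invented) shows how cross-preposition pairs might be enumerated from the heads of the simplex NPs of a complex NP and then filtered against attested simplex NPs:

```python
# Illustrative sketch only; 'attested' stands for the set of free-standing
# simplex NPs actually observed in the corpus.

def cross_preposition_pairs(simplex_heads):
    """Pair the heads of the simplex NPs of a complex NP in backward order,
    e.g. ['quality', 'surface', 'strip'] (from "the quality of surface of
    ... strip") yields (strip, surface), (strip, quality), (surface, quality)."""
    pairs = []
    for i in range(len(simplex_heads) - 1, 0, -1):
        for j in range(i - 1, -1, -1):
            pairs.append((simplex_heads[i], simplex_heads[j]))
    return pairs

def validate(candidates, attested):
    """Keep only candidate compounds that occur as free-standing simplex NPs."""
    return [c for c in candidates if " ".join(c) in attested]

attested = {"strip surface", "surface quality", "stainless steel strip"}
print(validate(cross_preposition_pairs(["quality", "surface", "strip"]), attested))
# -> [('strip', 'surface'), ('surface', 'quality')]
```

In this toy example "strip quality" is generated as a candidate but discarded because it is never attested as a free-standing simplex NP.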
3.1 Lexical Atom Discovery

A lexical atom is a semantically coherent phrase unit. Lexical atoms may be found among proper names, idioms, and many noun-noun compounds. Usually they are two-word phrases, but sometimes they can consist of three or even more words, as in the case of proper names and technical terms. Examples of lexical atoms (in general English) are "hot dog", "tear gas", "part of speech", and "von Neumann".

However, recognition of lexical atoms in free text is difficult. In particular, the relevant lexical atoms for a corpus of text will reflect the various discourse domains encompassed by the text. In a collection of medical documents, for example, "Wilson's disease" (an actual rheumatological disorder) may be used as a lexical atom, whereas in a collection of general news stories, "Wilson's disease" (reference to the disease that Wilson has) may not be a lexical atom. Note that in the case of the medical usage, we would commonly find "Wilson's disease" as a bigram and we would not find, for example, "Wilson's severe disease" as a phrase, though the latter might well occur in the general news corpus. This example serves to illustrate the essential observation that motivates our heuristics for identifying lexical atoms in a corpus: (1) words in lexical atoms have strong association, and thus tend to co-occur as a phrase, and (2) when the words in a lexical atom co-occur in a noun phrase, they are never or rarely separated.

The detection of lexical atoms, like the parsing of simplex noun phrases, is also done in multiple phases. At each phase, only two adjacent units are considered. So, initially, only two-word lexical atoms can be detected. But, once a pair is determined to be a lexical atom, it will behave exactly like a single word in subsequent processing, so, in later phases, atoms with more than two words can be detected.

Suppose the pair to test is [W1, W2]. The first heuristic is implemented by requiring the frequency of the pair to be higher than the frequency of any other pair that is formed by either word with other words in common contexts (within a simplex noun phrase). The intuition behind the test is that (1) in general, the high frequency of a bigram in a simplex noun phrase indicates strong association and (2) we want to avoid the case where [W1, W2] has a high frequency, but [W1, W2, W] (or [W, W1, W2]) has an even higher frequency, which implies that W2 (or W1) has a stronger association with W than with W1 (or W2, respectively). More precisely, we require the following:

F(W1, W2) > MaxLDF(W1, W2) and F(W1, W2) > MaxRDF(W1, W2)

where

MaxLDF(W1, W2) = Max_W(Min(F(W, W1), DF(W, W2)))
MaxRDF(W1, W2) = Max_W(Min(DF(W1, W), F(W2, W)))

W is any context word in a noun phrase, and F(X, Y) and DF(X, Y) are the continuous and discontinuous frequencies of [X, Y], respectively, within a simplex noun phrase, i.e., the frequencies of the patterns [...X, Y...] and [...X, ..., Y...], respectively.
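A minimal sketch of this first test, assuming the within-NP bigram counts F and DF have already been collected (the data structures and the explicit vocabulary argument are simplifications introduced here for illustration):

```python
from collections import Counter

F = Counter()   # F[(x, y)]: x immediately followed by y inside a simplex NP
DF = Counter()  # DF[(x, y)]: x ... y (separated) inside a simplex NP

def max_ldf(w1, w2, vocab):
    # MaxLDF(W1, W2) = max over context words W of min(F(W, W1), DF(W, W2))
    return max((min(F[(w, w1)], DF[(w, w2)]) for w in vocab), default=0)

def max_rdf(w1, w2, vocab):
    # MaxRDF(W1, W2) = max over context words W of min(DF(W1, W), F(W2, W))
    return max((min(DF[(w1, w)], F[(w2, w)]) for w in vocab), default=0)

def passes_first_test(w1, w2, vocab):
    # The adjacent pair must beat every competing pair formed by either word
    # with a shared context word.
    f = F[(w1, w2)]
    return f > max_ldf(w1, w2, vocab) and f > max_rdf(w1, w2, vocab)
```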
The second heuristic requires that we record all cases where two words occur in simplex NPs and compare the number of times the words occur as a strictly adjacent pair with the number of times they are separated. The second heuristic is simply implemented by requiring that F(W1, W2) be much higher than DF(W1, W2) (where 'higher' is determined by some threshold).

Syntactic category analysis also helps filter out impossible lexical atoms and establish the threshold for passing the second test. Only the following category combinations are allowed for lexical atoms: [noun, noun], [noun, lexatom], [lexatom, noun], [adjective, noun], and [adjective, lexatom], where "lexatom" is the category for a detected lexical atom. For combinations other than [noun, noun], the threshold for passing the second test is high.

In practice, the process effectively nominates phrases that are true atomic concepts (in a particular domain of discourse) or are being used so consistently as unit concepts that they can be safely taken to be lexical atoms. For example, the lexical atoms extracted by this process from the CACM corpus (about 1 MB) include "operating system", "data structure", "decision table", "data base", "real time", "natural language", "on line", "least squares", "numerical integration", and "finite state automaton", among others.

3.2 Bottom-Up Association-Based Parsing

Extended simplex noun-phrase parsing as developed in the CLARIT system, which we exploit in our process, works in multiple phases. At each phase, the corpus is parsed using the most specific (i.e., recently created) lexicon of lexical atoms. New lexical atoms (results) are added to the lexicon and are reused as input to start another phase of parsing until a complete parse is obtained for all the noun phrases.

The idea of association-based parsing is that by grouping words together (based on association) many times, we will eventually discover the most restrictive (and informative) structure of a noun phrase. For example, if we have evidence from the corpus that "high performance" is a more reliable association and "general purpose" a less reliable one, then the noun phrase "general purpose high performance computer" (an actual example from the CACM corpus) would undergo the following grouping process:

general purpose high performance computer
=> general purpose [high=performance] computer
=> [general=purpose] [high=performance] computer
=> [general=purpose] [[high=performance]=computer]
=> [[general=purpose]=[[high=performance]=computer]]
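The grouping step itself can be pictured as repeatedly merging the adjacent pair with the strongest association. The sketch below is a deliberately simplified, single-phrase illustration (not the CLARIT implementation): the real procedure works over the whole corpus in phases, and a merged pair subsequently behaves like a single word.

```python
def parse_np(units, score):
    """Greedily bracket an NP by merging the adjacent pair with the lowest
    (i.e. strongest) association score until one structure remains.
    'score' must accept merged units (tuples) as well as single words."""
    units = list(units)
    while len(units) > 1:
        i = min(range(len(units) - 1), key=lambda k: score(units[k], units[k + 1]))
        units[i:i + 2] = [(units[i], units[i + 1])]
    return units[0]

# Under suitable pair scores this reproduces the bracketing shown above for
# "general purpose high performance computer".
```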
Note the following effects of the formulas: When /;'(W1,W2) increases, S(W1,W2) de- creases; When DF(W1, W2) increases, S(Wx, W2) de- creases; When AvgLDF(W~, W2) or AvgRDF(W~, W2) increases, S(W1, W2) increases; and When F(Wx)- F(W1,W2) or F(W2)- F(W1, W2) increases, S(W1, W2) decreases. S(W1 W2)= I+LDF(W,,W2)+RDF(W1,W=) A(W1,W2) XlxF(W1,W2)+DF(W1,W,~) X Min(F(W, W1),DF(W,W',)) AvgLDF(Wa, W2) = ~-..,WeLD ILD[ 5-" Min( F( W2,W),D F( W1,W)) AvgRDF(W1, W2) = ~-..,WCRD IRDI A(W1, W2 ) = ~ F(W1)+F(W2)--2×F(WI,W2)+X2 Where • F(W) is frequency of word W • F(W1, W2) is frequency of adjacent bigram [W1,W2] (i.e ..... W1 W2 ...) • DF(W1, W2) is frequency of discontinuous bigram [W1,W21 (i.e ..... W1...W2...) • LD is all left dependents, i.e., {W]min(F(W, Wl), DF(W, W2)) ~ 0} • RD is all right dependents, i.e., {WJmin( D F(W1, W), F(W2, W) ) ¢ 0} • ),1 is the parameter indicating the relative contribu- tion of F(W1,W2) to the score (e.g., 5 in the actual experiment) • A2 is the parameter to control the contribution of word frequency (e.g., 1000 in the actual experiment) Figure 2: Formulas for Scoring The association score (based principally on fre- quency) can sometimes be unreliable. For example, if the phrase "computer aided design" occurs fre- quently in a corpus, "aided design" may be judged a good association pair, even though "computer aided" might be a better pair. A problem may arise when processing a phrase such as "program aided design": if "program aided" does not occur fre- quently in the corpus and we use frequency as the principal statistic, we may (incorrectly) be led to parse the phrase as "[program (aided design)]". One solution to such a problem is to recompute the bigram occurrence statistics after making each round of preferred associations. Thus, using the ex- ample above, if we first make the association "com- puter aided" everywhere it occurs, many instances of "aided design" will be removed from the corpus. Upon recalculation of the (free) bigram statistics, "aided design" will be demoted in value and the false evidence for "aided design" as a preferred as- sociation in some contexts will be eliminated. The actual implementation of such a scheme re- quires multiple passes over the corpus to generate phrases. The first phrases chosen must always be the most reliable. To aid us in making such decisions we have developed a metric for scoring preferred associations in their local NP contexts. To establish a preference metric, we use two statis- tics: (1) the frequency of the pair in the corpus, F(W1, W2), and (2) the number of the times that the pair is locally dominant in any NP in which the pair occurs. A pair is locally dominant in an NP iff it has a higher association score than either of the pairs that can be formed from contiguous other words in the NP. For example, in an NP with the se- quence [X, Y, g], we compare S(X, Y) with S(Y, g); whichever is higher is locally dominant. The prefer- ence score (PS) for a pair is determined by the ratio of its local dominance count (LDC)--the total num- ber of cases in which the pair is locally dominant--to its frequency: LDC(WI 1W2) PS(W1, W2) = r(Wl,W~) By definition all two-word NIPs score their pairs as locally dominant. In general, in each processing phase we make only those associations in the corpus where a pair's PS is above a specified threshold. 
4 Experiment

We tested the phrase extraction system (PES) by using it to index documents in an actual retrieval task. In particular, we substituted the PES for the default NLP module in the CLARIT system and then indexed a large corpus using the terms nominated by the PES, essentially the extracted small compounds and single words (but not words within a lexical atom). All other normal CLARIT processing--weighting of terms, division of documents into subdocuments (passages), vector-space modeling, etc.--was used in its default mode. As a baseline for comparison, we used standard CLARIT processing of the same corpus, with the NLP module set to return full NPs and their contained words (and no further subphrase analysis).10

10 Note that the CLARIT process used as a baseline does not reflect optimum CLARIT performance, e.g., as obtained in actual TREC evaluations, since we did not use a variety of standard CLARIT techniques that significantly improve performance, such as automatic query expansion, distractor space generation, subterm indexing, or differential query-term weighting. Cf. (Evans et al., 1996) for details.

The corpus used is a 240-megabyte collection of Associated Press newswire stories from 1989 (AP89), taken from the set of TREC corpora. There are about 3-million simplex NPs in the corpus and about 1.5-million complex NPs. For evaluation, we used TREC queries 51-100,11 each of which is a relatively long description of an information need. Queries were processed by the PES and normal CLARIT NLP modules, respectively, to generate query terms, which were then used for CLARIT retrieval.

11 (Harman, 1993)

To quantify the effects of PES processing, we used the standard IR evaluation measures of recall and precision. Recall measures how many of the relevant documents have been actually retrieved. Precision measures how many of the retrieved documents are indeed relevant. For example, if the total number of relevant documents is N and the system returns M documents of which K are relevant, then Recall = K/N and Precision = K/M. We used the judged-relevant documents from the TREC evaluations as the gold standard in scoring the performance of the two processes.
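As a trivial illustration (a sketch, not part of the evaluation software), the two measures can be computed directly from the sets of retrieved and judged-relevant documents:

```python
def recall_precision(retrieved, relevant):
    # Recall = K/N, Precision = K/M, where K = relevant documents retrieved,
    # N = total relevant documents, M = total retrieved documents.
    k = len(set(retrieved) & set(relevant))
    return k / len(relevant), k / len(retrieved)
```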
5 Results

The results of the experiment are given in Tables 1, 2, and 3. In general, we see improvement in both recall and precision.

Recall improves slightly (about 1%), as shown in Table 1. While the actual improvement is not significant for the run of fifty queries, the increase in absolute numbers of relevant documents returned indicates that the small compounds supported better matches in some cases.

CLARIT     Retrieved-Rel   Total-Rel   Recall
Baseline   2,668           3,304       80.8%
PES        2,695           3,304       81.6%

Table 1: Recall Results

Interpolated precision improves significantly, as shown in Table 2. The general improvement in precision indicates that small compounds provide more accurate (and effective) indexing terms than full NPs.

Recall   Baseline   PES      Rel. Improvement
0.00     0.6819     0.7099    4%
0.10     0.5535     0.5730    3.5%
0.20     0.4626     0.4927    6.5%
0.30     0.4098     0.4329    5.6%
0.40     0.3524     0.3782    7.0%
0.50     0.3289     0.3317    0.5%
0.60     0.2999     0.3026    0.9%
0.70     0.2481     0.2458   -0.9%
0.80     0.1860     0.1966    5.7%
0.90     0.1190     0.1448   21.7%
1.00     0.0688     0.0653   -5.0%

Table 2: Interpolated Precision Results

Precision improves at various returned-document levels, as well, as shown in Table 3. Initial precision, in particular, improves significantly. This suggests that the PES could be used to support other IR enhancements, such as automatic feedback of the top-returned documents to expand the initial query for a second retrieval step.12

12 (Evans et al., 1995; Evans et al., 1996)

Doc-Level   Baseline   PES      Rel. Improvement
5 docs      0.4255     0.4809   13%
10 docs     0.4170     0.4426    6%
15 docs     0.3943     0.4227    7%
20 docs     0.3819     0.3957    4%
30 docs     0.3539     0.3603    2%
100 docs    0.2526     0.2553    1%
200 docs    0.1770     0.1844    4%
500 docs    0.0973     0.0994    2%
1000 docs   0.0568     0.0573    1%

Table 3: Precision at Various Document Levels

The PES, which was not optimized for processing, required approximately 3.5 hours per 20-megabyte subset of AP89 on a 133-MHz DEC alpha processor.13 Most processing time (more than 2 of every 3.5 hours) was spent on simplex NP parsing. Such speed might be acceptable in some, smaller-scale IR applications, but it is considerably slower than the baseline speed of CLARIT noun-phrase identification (viz., 200 megabytes per hour on a 100-MIPS processor).

13 Note that the machine was not dedicated to the PES processing; other processes were running simultaneously.

6 Conclusions

The notion of association-based parsing dates at least from (Marcus, 1980) and has been explored again recently by a number of researchers.14 The method we have developed differs from previous work in that it uses linguistic heuristics and locality scoring along with corpus statistics to generate phrase associations.

The experiment contrasting the PES with baseline processing in a commercial IR system demonstrates a direct, positive effect of the use of lexical atoms, subphrases, and other phrase associations across simplex NPs. We believe the use of NP-substructure analysis can lead to more effective information management, including more precise IR, text summarization, and concept clustering. Our future work will explore such applications of the techniques we have described in this paper.

7 Acknowledgements

We received helpful comments from Bob Carpenter, Christopher Manning, Xiang Tong, and Steve Handerson, who also provided us with a hash table manager that made the implementation easier. The evaluation of the experimental results would have been impossible without the help of Robert Lefferts and Nataša Milić-Frayling at CLARITECH Corporation. Finally, we thank the anonymous reviewers for their useful comments.

References

David A. Evans. 1990. Concept management in text via natural-language processing: The CLARIT approach. In: Working Notes of the 1990 AAAI Symposium on "Text-Based Intelligent Systems", Stanford University, March 27-29, 1990, 93-95.

David A. Evans, Kimberly Ginther-Webster, Mary Hart, Robert G. Lefferts, Ira A. Monarch. 1991. Automatic indexing using selective NLP and first-order thesauri. In: A. Lichnerowicz (ed.), Intelligent Text and Image Handling. Proceedings of a Conference, RIAO '91. Amsterdam, NL: Elsevier, pp. 624-644.

David A. Evans, Robert G. Lefferts, Gregory Grefenstette, Steven K. Handerson, William R. Hersh, and Armar A. Archbold. 1993. CLARIT TREC design, experiments, and results. In: Donna K.
Harman (ed.), The First Text REtrieval Conference (TREC-1). NIST Special Publication 500-207. Washington, DC: U.S. Government Printing Office, pp. 251-286; 494-501. David A. Evans, and Robert G. Lefferts. 1995. CLARIT- TREC experiments Information Processing and Manage- ment, Vol. 31, No. 3, 385-395. David A. Evans, Nata~a Mili4-Frayling, Robert G. Lef- ferts. 1996. CLARIT TREC-4 experiments. In: Donna 14 (Liberman et al., 1992; Pustejovsky et al., 1993; Resnik et al., 1993; Lauer, 1995) K. Harman (ed.), The Fourth Text REtrieval Conference (TREC-4). NIST Special Publication. Washington, DC: U.S. Government Printing Office. Donna K. Harman, ed. 1993. The First Text REtrieval Conference (TREC-1) NIST Special Publication 500-207. Washington, DC: U.S. Government Printing Office. Donna K. Harman, ed. 1995. Overview of the Third Text REtrieval Conference (TREC-3 ), NIST Special Publication 500-225. Washington, DC: U.S. Government Printing Office. Donna K. Harman, ed. 1996. Overview of the Fourth Text REtrieval Conference (TREC-4), NIST Special Publica- tion. Washington, DC: U.S. Government Printing Of- fice. Mark Lauer. 1995. Corpus statistics meet with the noun compound: Some empirical results. In: Proceedings of the 33th Annual Meeting of the Association for Computa- tional Linguistics. David Lewis and K. Sparck Jones. 1996. Natural language processing for information retrieval. Communications of the ACM, January, Vol. 39, No. 1, 92-101. Mark Liberman and Richard Sproat. 1992. The stress and structure of modified noun phrases in English. In: I. Sag and A. Szabolcsi (eds.), Lexical Matters, CSLI Lec- ture Notes No. 24. Chicago, IL: University of Chicago Press, pp. 131-181. Mitchell Maucus. 1980. A Theory of Syntactic Recognition for Natural Language. Cambridge, MA: MIT Press. J. Pustejovsky, S. Bergler, and P. Anick. 1993. Lexical semantic techniques for corpus analysis. In: Compu- tational Linguistics, Vol. 19(2), Special Issue on Using Large Corpora II, pp. 331-358. P. Resnik, and M. Hearst. 1993. Structural Ambiguity and Conceptual Relations. In: Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives, June 22, Ohio State University, pp. 58-64. Gerard Salton and Michael McGill. 1983. Introduction to Modern Information Retrieval, New York, NY: McGraw- Hill. Christoph Schwarz. 1990. Content based text handling. Information Processing and Management, Vol. 26(2), pp. 219-226. Alan F. Smeaton. 1992. Progress in application of natu- ral language processing to information retrieval. The Computer Journal, Vol. 35, No. 3, pp. 268-278. T. Strzalkowski and J. Carballo. 1994. Recent develop- ments in natural language text retrieval. In: Donna K. Harman (ed.), The Second Text REtrieval Conference (TREC-2). NIST Special Publication 500-215. Washing- ton, DC: U.S. Government Printing Office, pp. 123-136. 24 | 1996 | 3 |
FAST PARSING USING PRUNING AND GRAMMAR SPECIALIZATION Manny Rayner and David Carter SRI International Suite 23, Millers Yard Cambridge CB2 1RQ United Kingdom manny@cam, sri. com, dmc~cam, sri. com Abstract We show how a general grammar may be automatically adapted for fast parsing of utterances from a specific domain by means of constituent pruning and grammar specialization based on explanation-based learning. These methods together give an order of magnitude increase in speed, and the coverage loss entailed by grammar spe- cialization is reduced to approximately half that reported in previous work. Experi- ments described here suggest that the loss of coverage has been reduced to the point where it no longer causes significant perfor- mance degradation in the context of a real application. 1 Introduction Suppose that we have a general grammar for En- glish, or some other natural language; by this, we mean a grammar which encodes most of the impor- tant constructions in the language, and which is in- tended to be applicable to a large range of different domains and applications. The basic question at- tacked in this paper is the following one: can such a grammar be concretely useful if we want to process input from a specific domain? In particular, how can a parser that uses a general grammar achieve a level of efficiency that is practically acceptable? The central problem is simple to state. By the very nature of its construction, a general grammar allows a great many theoretically valid analyses of almost any non-trivial sentence. However, in the context of a specific domain, most of these will be ex- tremely implausible, and can in practice be ignored. If we want efficient parsing, we want to be able to focus our search on only a small portion of the space of theoretically valid grammatical analyses. One possible solution is of course to dispense with the idea of using a general grammar, and sim- ply code a new grammar for each domain. Many people do this, but one cannot help feeling that something is being missed; intuitively, there are many domain-independent grammatical constraints, which one would prefer only to need to code once. In the last ten years, there have been a number of attempts to find ways to automatically adapt a general grammar and/or parser to the sub-language defined by a suitable training corpus. For exam- ple, (Briscoe and Carroll, 1993) train an LR parser based on a general grammar to be able to distin- guish between likely and unlikely sequences of pars- ing actions; (Andry et al., 1994) automatically infer sortal constraints, that can be used to rule out oth- erwise grammatical constituents; and (Grishman et al., 1984) describes methods that reduce the size of a general grammar to include only rules actually use- ful for parsing the training corpus. The work reported here is a logical continuation of two specific strands of research aimed in this gen- eral direction. The first is the popular idea of sta- tistical tagging e.g. (DeRose, 1988; Cutting et al., 1992; Church, 1988). Here, the basic idea is that a given small segment S of the input string may have several possible analyses; in particular, if S is a single word, it may potentially be any one of several parts of speech. However, if a substantial training corpus is available to provide reasonable es- timates of the relevant parameters, the immediate context surrounding S will usually make most of the locally possible analyses of S extremely implausible. 
In the specific case of part-of-speech tagging, it is well-known (DeMarcken, 1990) that a large propor- tion of the incorrect tags can be eliminated "safely"~ i.e. with very low risk of eliminating correct tags. In the present paper, the statistical tagging idea is generalized to a method called "constituent prun- ing"; this acts on local analyses of phrases normally 223 larger than single-word units. Constituent pruning is a bottom-up approach, and is complemented by a second, top-down, method based on Explanation-Based Learning (EBL; (Mitchell et al., 1986; van Harmelen and Bundy, 1988)). This part of the paper is essentially an exten- sion and generalization of the line of work described in (Rayner, 1988; Rayner and Samuelsson, 1990; Samuelsson and Rayner, 1991; Rayner and Samuels- son, 1994; Samuelsson, 1994b). Here, the basic idea is that grammar rules tend in any specific domain to combine much more frequently in some ways than in others. Given a sufficiently large corpus parsed by the original, general, grammar, it is possible to identify the common combinations of grammar rules and "chunk" them into "macro-rules". The result is a "specialized" grammar; this has a larger number of rules, but a simpler structure, allowing it in practice to be parsed very much more quickly using an LR- based method (Samuelsson, 1994a). The coverage of the specialized grammar is a strict subset of that of the original grammar; thus any analysis produced by the specialized grammar is guaranteed to be valid in the original one as well. The practical utility of the specialized grammar is largely determined by the loss of coverage incurred by the specialization pro- cess. The two methods, constituent pruning and gram- mar specialization, are combined as follows. The rules in the original, general, grammar are divided into two sets, called phrasal and non-phrasal respec- tively. Phrasal rules, the majority of which define non-recursive noun phrase constructions, are used as they are; non-phrasal rules are combined using EBL into chunks, forming a specialized grammar which is then compiled further into a set of LR- tables. Parsing proceeds by interleaving constituent creation and deletion. First, the lexicon and mor- phology rules are used to hypothesize word analyses. Constituent pruning then removes all sufficiently un- likely edges. Next, the phrasal rules are applied bottom-up, to find all possible phrasal edges, after which unlikely edges are again pruned. Finally, the specialized grammar is used to search for full parses. The scheme is fully implemented within a version of the Spoken Language Translator system (Rayner et al., 1993; Agniis et al., 1994), and is normally applied to input in the form of small lattices of hypotheses produced by a speech recognizer. The rest of the paper is structured as fol- lows. Section 2 describes the constituent pruning method. Section 3 describes the grammar special- ization method, focusing on how the current work extends and improves on previous results. Section 4 describes experiments where the constituent prun- ing/grammar specialization method was used on sets of previously unseen speech data. Section 5 con- cludes and sketches further directions for research, which we are presently in the process of investigat- ing. 2 Constituent Pruning Before both the phrasal and full parsing stages, the constituent table (henceforth, the chart) is pruned to remove edges that are relatively unlikely to con- tribute to correct analyses. 
For example, after the string "Show flight D L three one two" is lexically analysed, edges for "D" and "L" as individual characters are pruned because another edge, derived from a lexical entry for "D L" as an airline code, is deemed far more plausible. Similarly, edges for "one" as a determiner and as a noun are pruned because, when flanked by two other numbers, "one" is far more likely to function as a number. Phrasal parsing then creates a number of new edges, including one for "flight D L three one two" as a noun phrase. This edge is deemed far more likely to serve as the basis for a correct full parse than any of the edges spanning substrings of this phrase; those edges, too, are therefore pruned. As a result, full parsing is very quick, and only one analysis (the correct one) is produced for the sentence. In the ab- sence of pruning, processing takes over eight times as long and produces 37 analyses in total. 2.1 The pruning algorithm Our algorithm estimates the probability of correct- ness of each edge: that is, the probability that the edge will contribute to the correct full analysis of the sentence (assuming there is one), given certain lex- ical and/or syntactic information about it. Values on each criterion (selection of pieces of information) are derived from training corpora by maximum like- lihood estimation followed by smoothing. That is, our estimate for the probability that an edge with property P is correct is (modulo smoothing) simply the number of times edges with property P occur in correct analyses in training divided by the number of times such edges are created during the analysis process in training. The current criteria are: • The left bigram score: the probability of correct- ness of an edge considering only the following data about it: - its tag (corresponding to its major category symbol plus, for a few categories, some ad- 224 ditional distinctions derived from feature values); - for a lexical edge, its word or semantic word class (words with similar distribu- tions, such as city names, are grouped into classes to overcome data sparseness); or for a phrasal edge, the name of the final (top- most) grammar rule that was used to create it; - the tag of a neighbouring edge immediately to its left. If there are several left neigh- bours, the one giving the highest probabil- ity is used. • The right bigram score: as above, but consider- ing right neighbours. • The unigram score: the probability of correct- ness of an edge considering only the tree of grammar rules, with words or word classes at the leaves, that gave rise to it. For a lexical edge, this reduces to its word or word class, and its tag. Other criteria, such as trigrams and finer-grained tags, are obviously worth investigating, and could be applied straightforwardly within the framework described here. The minimum score derived from any of the crite- ria applied is deemed initially to be the score of the constituent. That is, an assumption of full statis- tical dependence (Yarowsky, 1994), rather than the more common full independence, is made3 When llf events El, E2,..., E,~ are fully independent, then the joint probability P(E1 A ... A En) is the product of P(EI)...P(En), but if they are maximally dependent, it is the minimum of these values. 
Of course, neither assumption is any more than an approximation to the truth; but assuming dependence has the advantage that the estimate of the joint probability depends much less strongly on n, and so estimates for alternative joint events can be directly compared, without any possibly tricky normalization, even if they are composed of dif- ferent numbers of atomic events. This property is de- sirable: different (sub-)paths through a chart may span different numbers of edges, and one can imagine evalu- ation criteria which are only defined for some kinds of edge, or which often duplicate information supplied by other criteria. Taking minima means that the pruning of an edge results from it scoring poorly on one criterion, regardless of other, possibly good scores assigned to it by other criteria. This fits in with the fact that on the basis of local information alone it is not usually possibly to predict with confidence that a particular edge is highly likely to contribute to the correct analysis (since global factors will also be important) but it often is possible to spot highly unlikely edges. In other words, our training procedure yields far more probability estimates close to zero than close to one. recognizer output is being processed, however, the estimate from each criterion is in fact multiplied by a further estimate derived from the acoustic score of the edge: that is, the score assigned by the speech recognizer to the best-scoring sentence hypothesis containing the word or word string for the edge in question. Multiplication is used here because acous- tic and lexicosyntactic likelihoods for a word or con- stituent would appear to be more nearly fully inde- pendent than fully dependent, being based on very different kinds of information. Next, account is taken of the connectivity of the chart. Each vertex of the chart is labelled with the score of the best path through the chart that vis- its that vertex. In accordance with the dependence assumption, the score of a path is defined as the min- imum of the scores of its component edges. Then the score of each edge is recalculated to be the minimum of its existing score and the scores of its start and end vertices, on the grounds that a constituent, how- ever intrinsically plausible, is not worth preserving if it does not occur on any plausible paths. Finally, a pruning threshold is calculated as the score of the best path through the chart multiplied by a certain fraction. For the first pruning phase we use 1/20, and for the second, 1/150, although performance is not very sensitive to this. Any con- stituents scoring less than the threshold are pruned out. 2.2 Relation to other pruning methods As the example above suggests, judicious pruning of the chart at appropriate points can greatly re- strict the search space and speed up processing. Our method has points of similarity with some very re- cent work in Constraint Grammar 2 and is an alter- native to several other, related schemes. Firstly, a remarked earlier, it generalizes tagging: it not only adjudicates between possible labels for the same word, but can also use the existence of a constituent over one span of the chart as justifi- cation for pruning another constituent over another span, normally a subsumed one, as in the "D L" ex- ample. This is especially true in the second stage of pruning, when many constituents of different lengths have been created. 
Furthermore, it applies equally well to lattices, rather than strings, of words, and can take account of acoustic plausibility as well as syntactic considerations. Secondly, our method is related to beam search (Woods, 1985). In beam search, incomplete parses of an utterance are pruned or discarded when, on 2Ghrister Samuelsson, personal communication, 8th April 1996; see (Karlsson et al., 1995) for background. 225 some criterion, they are significantly less plausi- ble than other, competing parses. This pruning is fully interleaved with the parsing process. In con- trast, our pruning takes place only at certain points: currently before parsing begins, and between the phrasM and full parsing stages. Potentially, as with any generate-and-test algorithm, this can mean effi- ciency is reduced: some paths will be explored that could in principle be pruned earlier. However, as the results in section 4 below will show, this is not in practice a serious problem, because the second pruning phase greatly reduces the search space in preparation for the potentially inefficient full parsing phase. Our method has the advantage, compared to beam search, that there is no need for any particu- lar search order to be followed; when pruning takes place, all constituents that could have been found at the stage in question are guaranteed already to exist. Thirdly, our method is a generalization of the strategy employed by (McCord, 1993). McCord in- terleaved parsing with pruning in the same way as us, but only compared constituents over the same span and with the same major category. Our com- parisons are more global and therefore can result in more effective pruning. 3 Grammar specialization As described in Section 1 above, the non-phrasal grammar rules are subjected to two phases of pro- cessing. In the first, "EBL learning" phase, a parsed training corpus is used to identify "chunks" of rules, which are combined by the EBL algorithm into sin- gle macro-rules. In the second phase, the resulting set of "chunked" rules is converted into LR table form, using the method of (Samuelsson, 1994a). There are two main parameters that can be ad- justed in the EBL learning phase. Most simply, there is the size of the training corpus; a larger training corpus means a smaller loss of coverage due to gram- mar specialization. (Recall that grammar special- ization in general trades coverage for speed). Sec- ondly, there is the question of how to select the rule- chunks that will be turned into macro-rules. At one limit, the whole parse-tree for each training exam- ple is turned into a single rule, resulting in a special- ized grammar all of whose derivations are completely "flat". These grammars can be parsed extremely quickly, but the coverage loss is in practice unac- ceptably high, even with very large training corpora. At the opposite extreme, each rule-chunk consists of a single rule-application; this yields a specialized grammar identical to the original one. The challenge is to find an intermediate solution, which specializes the grammar non-triviMly without losing too much coverage. Several attempts to find good "chunking crite- ria" are described in the papers by Rayner and Samuelsson quoted above. In (Rayner and Samuels- son, 1994), a simple scheme is given, which creates rules corresponding to four possible units: full utter- ances, recursive NPs, PPs, and non-recursive NPs. 
A more elaborate scheme is given in (Samuelsson, 1994b), where the "chunking criteria" are learned automatically by an entropy-minimization method; the results, however, do not appear to improve on the earlier ones. In both cases, the coverage loss due to grammar specialization was about 10 to 12% using training corpora with about 5,000 examples. In practice, this is still unacceptably high for most applications. Our current scheme is an extension of the one from (Rayner and Samuelsson, 1994), where the rule- chunks are trees of non-phrasal rules whose roots and leaves are categories of the following possible types: full utterances, utterance units, imperative VPs, NPs, relative clauses, VP modifiers and PPs. The resulting specialized grammars are forced to be non-recursive, with derivations being a maximum of six levels deep. This is enforced by imposing the following dominance hierarchy between the possible categories: utterance > utterance_unit > imperative_VP > NP > {tel, VP_modifier} > PP The precise definition of the rule-chunking criteria is quite simple, and is reproduced in the appendix. Note that only the non-phrasal rules are used as input to the chunks from which the specialized gram- mar rules are constructed. This has two important advantages. Firstly, since all the phrasal rules are excluded from the speciMization process, the cov- erage loss associated with missing combinations of phrasal rules is eliminated. As the experiments in the next section show, the resulting improvement is quite substantial. Secondly, and possibly even more importantly, the number of specialized rules pro- duced by a given training corpus is approximately halved. The most immediate consequence is that much larger training corpora can be used before the specialized grammars produced become too large to be handled by the LR table compiler. If both phrasal and non-phrasal rules are used, we have been unable to compile tables for rules derived from training sets of over 6,000 examples (the process was killed after running for about six hours on a Sun Sparc 20/HS21, SpecINT92=131.2). Using only non-phrasal rules, compilation of the tables for a 15,000 example train- 226 ing set required less than two CPU-hours on the same machine. 4 Experiments This section describes a number of experiments car- ried out to test the utility of the theoretical ideas presented above. The basic corpus used was a set of 16,000 utterances from the Air Travel Planning (ATIS; (Hemphill et al., 1990)) domain. All of these utterances were available in text form; 15,000 of them were used for training, with 1,000 held out for test purposes. Care was taken to ensure not just that the utterances themselves, but also the speakers of the utterances were disjoint between test and train- ing data; as pointed out in (Rayner et al., 1994a), failure to observe these precautions can result in sub- stantial spurious improvements in test data results. The 16,000 sentence corpus was analysed by the SRI Core Language Engine (Alshawi (ed), 1992), us- ing a lexicon extended to cover the ATIS domain (Rayner, 1994). All possible grammatical analyses of each utterance were recorded, and an interactive tool was used to allow a human judge to identify the correct and incorrect readings of each utterance. The judge was a first-year undergraduate student with a good knowledge of linguistics but no prior experience with the system; the process of judging the corpus took about two and a half person-months. 
The input to the EBL-based grammar-specialization process was limited to readings of corpus utterances that had been judged correct. When utterances had more than one correct reading, a preference heuristic was used to select the most plausible one. Two sets of experiments were performed. In the first, increasingly large portions of the training set were used to train specialized grammars. The cov- erage loss due to grammar specialization was then measured on the 1,000 utterance test set. The ex- periment was carried out using both the chunking criteria from (Rayner and Samuelsson, 1994) (the "Old" scheme), and the chunking criteria described in Section 3 above (the "New" scheme). The results are presented in Table 1. The second set of experiments tested more di- rectly the effect of constituent pruning and gram- mar specialization on the Spoken Language Transla- tor's speed and coverage; in particular, coverage was measured on the real task of translating English into Swedish, rather than the artificial one of producing a correct QLF analysis. To this end, the first 500 test- set utterances were presented in the form of speech hypothesis lattices derived by aligning and conflat- ing the top five sentence strings produced by a ver- sion of the DECIPHER (TM) recognizer (Murveit Examples 100 250 500 1000 3000 5000 7000 11000 15000 Old scheme Rules Loss 100 47.8% 181 37.6% 281 27.6% 432 22.7% 839 14.9% 1101 11.2% 1292 10.4% 1550 9.8% 1819 8.7% New scheme Rules Loss 69 35.5% 126 21.8% 180 14.7% 249 10.8% 455 7.8% 585 6.6% 668 62% 808 5.8% 937 5.0% Table 1: EBL rules and EBL coverage number of training examples loss against et al., 1993). The lattices were analysed by four dif- ferent versions of the parser, exploring the different combinations of turning constituent pruning on or off, and specialized versus unspecialized grammars. The specialized grammar used the "New" scheme, and had been trained on the full training set. Ut- terances which took more than 90 CPU seconds to process were timed out and counted as failures. The four sets of outputs from the parser were then translated into Swedish by the SLT transfer and gen- eration mechanism (Agn~ et al., 1994). Finally, the four sets of candidate translations were pairwise compared in the cases where differing translations had been produced. We have found this to be an effective way of evaluating system performance. Al- though people differ widely in their judgements of whether a given translation can be regarded as "ac- ceptable", it is in most cases surprisingly easy to say which of two possible translations is preferable. The last two tables summarize the results. Table 2 gives the average processing times per input lattice for each type of processing (times measured run- ning SICStus Prolog 3#3 on a SUN Sparc 20/HS21), showing how the time is divided between the various processing phases. Table 3 shows the relative scores of the four parsing variants, measured according to the "preferable translation" criterion. 5 Conclusions and further directions Table 2 indicates that EBL and pruning each make processing about three times faster; the combination of both gives a factor of about nine. In fact, as the detailed breakdown shows, even this underestimates the effect on the main parsing phase: when both pruning and EBL are operating, processing times for other components (morphology, pruning and prefer- ences) become the dominant ones. 
                        E-/P-     E+/P-     E-/P+     E+/P+
    Morph/lex lookup    0.53      0.54      0.54      0.49
    Phrasal parsing     0.27      0.28      0.14      0.14
    Pruning             -         -         0.57      0.56
    Full parsing        12.42     2.61      3.04      0.26
    Preferences         3.63      1.57      1.27      0.41
    TOTAL

Table 2: Breakdown of average time spent on each processing phase for each type of processing (seconds per utterance)

              E-/P-     E+/P-     E-/P+     E+/P+
    E-/P-     -         12-24     25-63     24-65
    E+/P-     24-12     -         31-50     26-47
    E-/P+     63-25     50-31     -         5-8
    E+/P+     65-24     47-26     8-5       -

Table 3: Comparison between translation results on the four different analysis alternatives, measured on the 500-utterance test set. The entry for a given row and column holds two figures, showing respectively the number of examples where the "row" variant produced a better translation than the "column" variant and the number where it produced a worse one. Thus for example "EBL+/pruning+" was better than "EBL-/pruning-" on 65 examples, and worse on 24.

As we have so far expended little effort on optimizing these phases of processing, it is reasonable to expect substantial further gains to be possible. Even more interestingly, Table 3 shows that real system performance, in terms of producing a good translation, is significantly improved by pruning, and is not degraded by grammar specialization. (The slight improvement in coverage with EBL on is not statistically significant.) Our interpretation of these results is that the technical loss of grammar coverage due to the specialization and pruning processes is more than counterbalanced by two positive effects. Firstly, fewer utterances time out due to slow processing; secondly, the reduced space of possible analyses means that the problem of selecting between different possible analyses of a given utterance becomes easier. To sum up, the methods presented here demonstrate that it is possible to use the combined pruning and grammar specialization method to speed up the whole analysis phase by nearly an order of magnitude, without incurring any real penalty in the form of reduced coverage. We find this an exciting and significant result, and are further continuing our research in this area during the coming year. In the last two paragraphs we sketch some ongoing work. All the results presented above pertain to English only. The first topic we have been investigating is the application of the methods described here to processing of other languages. Preliminary experiments we have carried out on the Swedish version of the CLE (Gambäck and Rayner 1992) have been encouraging; using exactly the same pruning methods and EBL chunking criteria as for English, we obtain comparable speed-ups. The loss of coverage due to grammar specialization also appears comparable, though we have not yet had time to do the work needed to verify this properly. We intend to do so soon, and also to repeat the experiments on the French version of the CLE (Rayner, Carter and Bouillon, 1996). The second topic is a more radical departure, and can be viewed as an attempt to make interleaving of parsing and pruning the basic principle underlying the CLE's linguistic analysis process. Exploiting the "stratified" nature of the EBL-specialized grammar, we group the chunked rules by level, and apply them one level at a time, starting at the bottom. After each level, constituent pruning is used to eliminate unlikely constituents. The intent is to achieve a trainable robust parsing model, which can return a useful partial analysis when no single global analysis is found.
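The level-by-level regime just described can be sketched as a simple control loop. In the sketch below the chart representation, the rule-application step and the pruning score are assumptions made for illustration, not details of the CLE implementation.

    # Illustrative sketch of interleaved parsing and pruning over a "stratified"
    # EBL-specialized grammar: chunked rules are grouped by level (PPs at the
    # bottom, utterances at the top), and constituent pruning runs after each
    # level.  (rel and vp_modifier occupy the same level in the dominance
    # hierarchy; they are simply applied consecutively here.)

    LEVELS = ["pp", "rel", "vp_modifier", "np", "imperative_vp",
              "utterance_unit", "utterance"]

    def parse_by_levels(words, rules_by_level, apply_rules, score, beam=0.5):
        """Apply one level of chunked rules at a time, pruning after each level."""
        chart = [{"cat": "word", "span": (i, i + 1), "word": w}
                 for i, w in enumerate(words)]
        for level in LEVELS:
            chart.extend(apply_rules(chart, rules_by_level.get(level, [])))
            # Constituent pruning: keep only edges whose score is within `beam`
            # of the best score seen for the same category and span.
            best = {}
            for edge in chart:
                key = (edge["cat"], edge["span"])
                best[key] = max(best.get(key, float("-inf")), score(edge))
            chart = [e for e in chart
                     if score(e) >= best[(e["cat"], e["span"])] - beam]
        # A spanning utterance edge is a full analysis; otherwise the surviving
        # edges constitute a partial analysis.
        full = [e for e in chart
                if e["cat"] == "utterance" and e["span"] == (0, len(words))]
        return full or chart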
An initial implementation exists, and is currently being tested; preliminary results here are also very positive. We expect to be able to report on this work more fully in the near future. Acknowledgements The work reported in this paper was funded by Telia Research AB. We would like to thank Christer Samuelsson for making the LR compiler available to us, Martin Keegan for patiently judging the results of processing 16,000 ATIS utterances, and Steve Pul- man and Christer Samuelsson for helpful comments. References Agn~, M-S., Alshawi, H., Bretan, I., Carter, D.M. Ceder, K., Collins, M., Crouch, R., Digalakis, V., Ekholm, B., Gamb~ick, B., Kaja, J., Karlgren, J., Lyberg, B., Price, P., Pulman, S., Rayner, M., Samuelsson, C. and Svensson, T. 1994. Spoken 228 Language Translator: First Year Report. SRI technical report CRC-0433 Alshawi, H. (ed.) 1992. The Core Language Engine. MIT Press. Andry, F., M. Gawron, J. Dowding, and R. Moore. 1994. A Tool for Collecting Domain Depen- dent Sortal Constraints From Corpora. Proc. COLING-94, Kyoto. Briscoe, Ted, and John Carroll. 1993. Gener- alized Probabilistic LR Parsing of Natural Lan- guage (Corpora) with Unification-Based Gram- mars. Computational Linguistics, 19:1, pp. 25-60. Church, Ken. 1988. A stochastic parts program and noun phrase parser for unrestricted text. Proc. 1st ANLP, Austin, Tx., pp. 136-143. Cutting, D., J. Kupiec, J. Pedersen and P. Sibun. 1992. A Practical Part-of-Speech Tagger Proc. 3rd ANLP, Trento, Italy, pp. 133-140. DeMarcken, C.G. 1990. Parsing the LOB Corpus Proc. 28th ACL, Pittsburgh, Pa., pp. 243-251 DeRose, Steven. 1988. Grammatical Category Dis- ambiguation by Statistical Optimization. Compu- tational Linguistics 14, pp. 31-39 Gamb~ick, Bj6rn, and Manny Rayner. "The Swedish Core Language Engine". Proc. 3rd Nordic Con- ference on Text Comprehension in Man and Ma- chine, LinkSping, Sweden. Also SRI Technical Re- port CRC-025. Grishman, R., N. Nhan, E. Marsh and L. Hirschmann. 1984. Automated Determination of Sublanguage Usage. Proc. 22nd COLING, Stanford, pp. 96-100. van Harmelen, Frank, and Alan Bundy. 1988. Explanation-Based Generalization = Partial Eval- uation (Research Note) Artificial Intelligence 36, pp. 401-412. Hemphill, C.T., J.J. Godfrey and G.R. Doddington. 1990. The ATIS Spoken Language Systems pilot corpus. Proc. DARPA Speech and Natural Lan- guage Workshop, Hidden Valley, Pa., pp. 96-101. Karlsson, F., A. Voutilainen, J. Heikkil/i and A. Anttila (eds). 1995. Constraint Grammar. Mouton de Gruyer, Berlin, New York. McCord, M. 1993. Heuristics for Broad- Coverage Natural Language Parsing. Proc. 1st ARPA Workshop on Human Language Technol- ogy, Princeton, NJ. Morgan Kaufmann. aAll SRI Cambridge technical reports are available through WWW from http://www, cam. sri. corn Mitchell, T., R. Keller, and S. Kedar-Cabelli. 1986. Explanation-Based Generalization: a Unifying View. Machine Learning 1:1, pp. 47-80. Murveit, H., Butzberger, J., Digalakis, V. and Wein- traub, M. 1993. Large Vocabulary Dictation us- ing SRI's DECIPHER(TM) Speech Recognition System: Progressive Search Techniques. Proc. In- ter. Conf. on Acoust., Speech and Signal, Min- neapolis, Mn. Rayner, M. 1988. Applying Explanation-Based Generalization to Natural-Language Processing. Proc. the International Conference on Fifth Gen- eration Computer Systems, Kyoto, pp. 1267-1274. Rayner, M. 1994. Overview of English Linguistic Coverage. 
In (Agn/is et al., 1994) Rayner, M., Alshawi, H., Bretan, I., Carter, D.M., Digalakis, V., Gamb/ick, B., Kaja, J., Karlgren, J., Lyberg, B., Price, P., Pulman, S. and Samuels- son, C. 1993. A Speech to Speech Translation S~'s- tem Built From Standard Components. Proc.lst ARPA workshop on Human Language Technol- ogy, Princeton, NJ. Morgan Kaufmann. Also SRI Technical Report CRC-031. Rayner, M., D. Carter and P. Bouillon. 1996. Adapting the Core Language Engine to French and Spanish. Proc. NLP-IA, Moncton, New Brunswick. Also SRI Technical Report CRC-061. Rayner, M., D. Carter, V. Digalakis and P. Price. 1994. Combining Knowledge Sources to Re- order N-Best Speech Hypothesis Lists. Proc.2nd ARPA workshop on Human Language Technol- ogy, Princeton, NJ., pp. 217-221. Morgan Kauf- mann. Also SRI Technical Report CRC-044. Rayner. M., and C. Samuelsson. 1990. Using Explanation-Based Learning to Increase Perfor- mance in a Large NL Query System. Proc. DARPA Speech and Natural Language Workshop, June 1990, pp. 251-256. Morgan Kaufmann. Rayner, M., and C. Samuelsson. 1994. Corpus- Based Grammar Specialization for Fast Analysis. In (Agn/is et al., 1994) Samuelsson, C. 1994. Notes on LR Parser Design. Proc. COLING-94, Kyoto, pp. 386-390. Samuelsson, C. 1994. Grammar Specialization through Entropy Thresholds. Proc. ACL-94, Las Cruces, NM, pp. 188-195. Samuelsson, C., and M. Rayner. 1991. Quantitative Evaluation of Explanation-Based Learning as an Optimization Tool for a Large-Scale Natural Lan- guage System. Proc. 12th IJCAI, Sydney, pp. 609- 615. 229 Woods, W. 1985. Language Processing for Speech Understanding. Computer Speech Processing, W. Woods and F. Fallside (eds), Prentice-Hall Inter- national. Yarowsky, D. 1994. Decision Lists for Lexical Ambi- guity Resolution. Proc. ACL-94, Las Cruces, NM, pp. 88-95. Appendix: definition of the "New" chunking rules This appendix defines the "New" chunking rules re- ferred to in Sections 3 and 4. There are seven types of non-phrasal constituent in the specialised gram- mar. We start by describing each type of constituent through examples. Utterance: The top category. Utterance_unit: Utterance_ units are minimal syntactic units capable of standing on their own: for example, declarative clauses, questions, NPs and PPs. Utterances may consist of more than one utterance_unit. The following is an utterance containing two utterance_units: "[Flights to Boston on Monday] [please show me the cheapest ones.]" Imperatlve_VP: Since imperative verb phrases are very common in the corpus, we make them a category of their own in the specialised gram- mar. To generalise over possible addition of adverbials (in particular, "please" and "now"), we define the imperative_vp category so as to leave the adverbials outside. Thus the brack- eted portion of the following utterance is an imperative_vp: "That's fine now [give me the fares for those flights]" Non_phrasalANP: All NPs which are not pro- duced entirely by phrasal rules. The following are all non_phrasal~Ps: "Boston and Denver", "Flights on Sunday morning", "Cheapest fare from Boston to Denver", "The meal I'd get on that flight" Reh Relative clauses. VP..modifier: VPs appearing as NP postmodifiers. The bracketed portions of the following are VP_modifiers: "Delta flights [arriving after seven P M] .... All flights tomorrow [ordered by arrival time]" PP: The CLE grammar treats nominal temporal adverbials, sequences of PPs, and "A to B" constructions as PPs (cf (Rayner, 1994)). 
The following are examples of PPs: "Tomorrow af- ternoon", "From Boston to Dallas on Friday", "Denver to San Francisco Sunday" We can now present the precise criteria which de- termine the chunks of rules composed to form each type of constituent. For each type of constituent in the specialised grammar, the chunk is a subtree ex- tracted from the derivation tree of a training exam- ple (cf (Rayner and Samuelsson, 1994)); we specify the roots and leaves of the relevant subtrees. The term "phrasal tree" will be used to mean a deriva- tion tree all of whose rule-applications are phrasal rules. Utterance: The root of the chunk is the root of the original tree. The leaves are the nodes re- suiting from cutting at maximal subtrees for utterance_units, non_phrasal_ups pps, and maximal phrasal subtrees. Utterance_unit: The root is the root of a maximal subtree for a constituent of type utterance_unit. The leaves are the nodes re- sulting from cutting at maximal subtrees for imperative_vps, nps, and pps, and maximal phrasal subtrees. Imperatlve_VP: The root is the root of a maxi- mal subtree under an application of the S --~ VP rule whose root is not an application of an adverbial modification rule. The leaves are the nodes resulting from cutting at maximal sub- trees for non_phrasal_np, and pp, and maximal phrasal subtrees. Non_phrasal_NP: The root is the root of a max- imal non-phrasal subtree for a constituent of type np. The leaves are the nodes result- ing from cutting at maximal subtrees for re1, vp_modifier, and pp, and maximal phrasal subtrees. Reh The root is the root of a maximal subtree for a constituent of type re1. The leaves are the nodes resulting from cutting at maximal sub- trees for pp, and maximal phrasal subtrees. VP.anodifier: The root is the root of a vp subtree immediately dominated by an application of the NP --+ NP VP rule. The leaves are the nodes re- sulting from cutting at maximal subtrees for pp, and maximal phrasal subtrees. PP: The root is the root of a maximal non-phrasal subtree for a constituent of type pp. The leaves are the nodes resulting from cutting at maximal phrasal subtrees. 230 | 1996 | 30 |
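The "cutting at maximal subtrees" operation that all of these definitions rely on can be sketched as a small tree traversal. The node representation and the phrasal-rule flag in the sketch below are assumptions of the illustration, not the CLE's actual encoding.

    # Sketch of chunk extraction from a derivation tree: descend from a chosen
    # root and cut (turn into a chunk leaf) at any maximal subtree whose root
    # category is in `cut_categories`, or at any maximal phrasal subtree.
    # Nodes are assumed to be dicts {"cat": str, "phrasal": bool, "children": [...]},
    # where "phrasal" marks an application of a phrasal rule (taken to be True
    # for bare lexical leaves in this sketch).

    def is_phrasal_tree(node):
        """True if every rule application in this subtree is phrasal."""
        return node["phrasal"] and all(is_phrasal_tree(c) for c in node["children"])

    def extract_chunk(root, cut_categories):
        """Return (root category, leaf categories) of the chunk rooted at `root`."""
        leaves = []

        def descend(node, at_root=False):
            if not at_root and (node["cat"] in cut_categories or is_phrasal_tree(node)):
                leaves.append(node["cat"])    # cut: this node becomes a chunk leaf
                return
            for child in node["children"]:
                descend(child)

        descend(root, at_root=True)
        return root["cat"], leaves

    # For instance, a non_phrasal_np chunk cuts at rel, vp_modifier and pp
    # subtrees as well as at maximal phrasal subtrees:
    #   extract_chunk(np_tree, {"rel", "vp_modifier", "pp"})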
An Efficient Compiler for Weighted Rewrite Rules Mehryar Mohri AT&T Research 600 Mountain Avenue Murray Hill, 07974 NJ mohri@research, att. com Richard Sproat Bell Laboratories 700 Mountain Avenue Murray Hill, 07974 NJ rws@bell-labs, com Abstract Context-dependent rewrite rules are used in many areas of natural language and speech processing. Work in computa- tional phonology has demonstrated that, given certain conditions, such rewrite rules can be represented as finite-state transducers (FSTs). We describe a new algorithm for compiling rewrite rules into FSTs. We show the algorithm to be sim- pler and more efficient than existing al- gorithms. Further, many of our appli- cations demand the ability to compile weighted rules into weighted FSTs, trans- ducers generalized by providing transi- tions with weights. We have extended the algorithm to allow for this. 1. Motivation Rewrite rules are used in many areas of natural language and speech processing, including syntax, morphology, and phonology 1. In interesting ap- plications, the number of rules can be very large. It is then crucial to give a representation of these rules that leads to efficient programs. Finite-state transducers provide just such a compact representation (Mohri, 1994). They are used in various areas of natural language and speech processing because their increased compu- tational power enables one to build very large ma- chines to model interestingly complex linguistic phenomena. They also allow algebraic operations such as union, composition, and projection which are very useful in practice (Berstel, 1979; Eilen- berg, 1974 1976). And, as originally shown by Johnson (1972), rewrite rules can be modeled as 1 Parallel rewrite rules also have interesting applica- tions in biology. In addition to their formal language theory interest, systems such as those of Aristid Lin- denmayer provide rich mathematical models for bio- logical development (Rozenberg and Sa]omaa, 1980). 231 finite-state transducers, under the condition that no rule be allowed to apply any more than a finite number of times to its own output. Kaplan and Kay (1994), or equivalently Kart- tunen (1995), provide an algorithm for compiling rewrite rules into finite-state transducers, under the condition that they do not rewrite their non- contextual part 2. We here present a new algorithm for compiling such rewrite rules which is both sim- pler to understand and implement, and computa- tionally more efficient. Clarity is important since, as pointed out by Kaplan and Kay (1994), the rep- resentation of rewrite rules by finite-state trans- ducers involves many subtleties. Time and space efficiency of the compilation are also crucial. Us- ing naive algorithms can be very time consuming and lead to very large machines (Liberman, 1994). In some applications such as those related to speech processing, one needs to use weighted rewrite rules, namely rewrite rules to which weights are associated. These weights are then used at the final stage of applications to output the most probable analysis. Weighted rewrite rules can be compiled into weighted finite-state trans- ducers, namely transducers generalized by pro- viding transitions with a weighted output, under the same context condition. These transducers are very useful in speech processing (Pereira et al., 1994). We briefly describe how we have aug- mented our algorithm to handle the compilation of weighted rules into weighted finite-state trans- ducers. 
In order to set the stage for our own contribution, we start by reviewing salient aspects of the Kaplan and Kay algorithm.

2The general question of the decidability of the halting problem even for one-rule semi-Thue systems is still open. Robert McNaughton (1994) has recently made a positive conjecture about the class of the rules without self overlap.

Prologue ∘ Id(Obligatory(φ, <i, >)) ∘ Id(Rightcontext(ρ, <, >)) ∘ Replace ∘ Id(Leftcontext(λ, <, >)) ∘ Prologue⁻¹ (1)

Figure 1: Compilation of obligatory left-to-right rules, using the KK algorithm.

2. The KK Algorithm

The rewrite rules we consider here have the following general form:

φ → ψ / λ ___ ρ (2)

Such rules can be interpreted in the following way: φ is to be replaced by ψ whenever it is preceded by λ and followed by ρ. Thus, λ and ρ represent the left and right contexts of application of the rules. In general, φ, ψ, λ and ρ are all regular expressions over the alphabet of the rules. Several types of rules can be considered depending on their being obligatory or optional, and on their direction of application, from left to right, right to left or simultaneous application. Consider an obligatory rewrite rule of the form φ → ψ / λ ___ ρ, which we will assume applies left to right across the input string. Compilation of this rule in the algorithm of Kaplan and Kay (1994) (KK for short) involves composing together six transducers, see Figure 1. We use the notations of KK. In particular, Σ denotes the alphabet, < denotes the set of context labeled brackets {<a, <i, <c}, > the set {>a, >i, >c}, and 0 an additional character representing deleted material. Subscript symbols of an expression are symbols which are allowed to freely appear anywhere in the strings represented by that expression. Given a regular expression r, Id(r) is the identity transducer obtained from an automaton A representing r by adding output labels to A identical to its input labels. The first transducer, Prologue, freely introduces labeled brackets from the set {<a, <i, <c, >a, >i, >c} which are used by left and right context transducers. The last transducer, Prologue⁻¹, erases all such brackets. In such a short space, we can of course not hope to do justice to the KK algorithm, and the reader who is not familiar with it is urged to consult their paper. However, one point that we do need to stress is the following: while the construction of Prologue, Prologue⁻¹ and Replace is fairly direct, construction of the other transducers is more complex, with each being derived via the application of several levels of regular operations from the original expressions in the rules. This clearly appears from the explicit expressions that Kaplan and Kay give for these transducers. The construction of the three other transducers involves many operations including: two intersections of automata, two distinct subtractions, and nine complementations. Each subtraction involves an intersection and a complementation algorithm 3. So, in the whole, four intersections and eleven complementations need to be performed. Intersection and complementation are classical automata algorithms (Aho et al., 1974; Aho et al., 1986). The complexity of intersection is quadratic. But the classical complementation algorithm requires the input automaton to be deterministic.
Thus, each of these 11 operations re- quires first the determinization of the input. Such operations can be very costly in the case of the automata involved in the KK algorithm 4. In the following section we briefly describe a new algorithm for compiling rewrite rules. For rea- sons of space, we concentrate here on the com- pilation of left-to-right obligatory rewrite rules. However, our methods extend straightforwardly to other modes of application (optional, right-to-left, simultaneous, batch), or kinds of rules (two-level rules) discussed by Kaplan and Kay (1994). 3A subtraction can of course also be performed di- rectly by combining the two steps of intersection and complementation, but the corresponding algorithm has exactly the same cost as the total cost of the two operations performed consecutively. 4 One could hope to find a more efficient way of de- termining the complement of an automaton that would not require determinization. However, this problem is PSPACE-complete. Indeed, the regular expression non-universality problem is a subproblem of comple- mentation known to be PSPACE-complete (Garey and Johnson, 1979, page 174), (Stockmeyer and Meyer, 1973). This problem also known as the emptiness of complement problem has been extensively studied (Aho et al., 1974, page 410-419). 3. New Algorithm 3.1. Overview In contrast to the KK algorithm which introduces "brackets everywhere only to restrict their occur- rence subsequently, our algorithm introduces con- text symbols just when and where they are needed. Furthermore, the number of intermediate trans- ducers necessary in the construction of the rules is smaller than in the KK algorithm, and each of the transducers can be constructed more directly and efficiently from the primitive expressions of the rule, ~, ~, A, p. A transducer corresponding to the left-to- right obligatory rule ¢ --* ¢/A p can be ob- tained by composition of five transducers: r o f o replace o 11 o 12 (3) 1. The transducer r introduces in a string a marker > before every instance of p. For rea- sons that will become clear we will notate this as Z* p --~ E* > p. 2. The transducer f introduces markers <1 and <2 before each instance of ~ that is followed by >: u u {>})'{<1, <2 }5 >. In other words, this transducer/harks just those ~b that occur before p. 3. The replacement transducer replace replaces ~b with ~ in the context <1 ~b >, simultane- ously deleting > in all positions (Figure 2). Since >, <1, and <2 need to be ignored when determining an occurrence of ~b, there are loops over the transitions >: c, <1: ¢, <~: c at all states of ¢, or equivalently of the states of the cross product transducer ¢ × ~. 4. The transducer 11 admits only those strings in which occurrences of <1 are preceded by A and deletes <l at such occurrences: 5. The transducer 12 admits only those strings in which occurrences of <2 are not preceded by A and deletes <~ at such occurrences: 2*X <2-~ ~*~. Clearly the composition of these transducers leads to the desired result. The construction of the transducer replace is straightforward. In the fol- lowing, we show that the construction of the other four transducers is also very simple, and that it only requires the determinization of 3 automata and additional work linear (time and space) in the size of the determinized automata. 3.2. Markers Markers of TYPE 1 Let us start by considering the problem of con- structing what we shall call a TYPE I transducer, 233 Figure 2: Replacement transducer replace in the obligatory left-to-right case. 
which inserts a marker after all prefixes of a string that match a particular regular expression. Given a regular expression fl defined on the alphabet E, one can construct, using classical algorithms (Aho et al., 1986), a deterministic automaton a repre- senting E*fl. As with the KK algorithm, one can obtain from a a transducer X = Id(a) simply by assigning to each transition the same output label as the input label. We can easily transform X into a new transducer r such that it inserts an arbi- trary marker ~ after each occurrence of a pattern described by ~. To do so, we make final the non- final states of X and for any final state q of X we create a new state q~, a copy of q. Thus, q' has the same transitions as q, and qP is a final state. We then make q non-final, remove the transitions leaving q and add a transition from q to q' with input label the empty word c, and output ~. Fig- ures 3 and 4 illustrate the transformation of X into T. a:a cic Figure 3: Final state q of X with entering and leaving transitions. ata ctc Figure 4: States and transitions of r obtained by modifications of those of X. Proposition 1 Let ~ be a deterministic automa- ton representing E*/3, then the transducer r ob- tained as described above is a transducer post- marking occurrences of fl in a string ofF* by #. Proof. The proof is based on the observa- tion that a deterministic automaton representing E*/~ is necessarily complete 5. Notice that non- deterministic automata representing ~*j3 are not necessarily complete. Let q be a state of a and let u E ~* be a string reaching q6. Let v be a string described by the regular expression ft. Then, for any a E ~, uav is in ~*~. Hence, uav is accepted by the automaton a, and, since ~ is deterministic, there exists a transition labeled with a leaving q. Thus, one can read any string u E E* using the automaton a. Since by definition of a, the state reached when reading a prefix u ~ of u is final iff u ~ E ~*~, by construction, the transducer r in- serts the symbol # after the prefix u ~ iff u ~ ends with a pattern of ft. This ends the proof of the proposition, t3 Markers of TYPE 2 In some cases, one wishes to check that any occurrence of # in a string s is preceded (or fol- lowed) by an occurrence of a pattern of 8. We shall say that the corresponding transducers are of TYPE 2. They play the role of a filter. Here again, they can be defined from a deterministic au- tomaton representing E*B. Figure 5 illustrates the modifications to make from the automaton of fig- ure 3. The symbols # should only appear at final states and must be erased. The loop # : e added at final states of Id(c~) is enough for that purpose. All states of the transducer are then made final since any string conforming to this restriction is acceptable: cf. the transducer !1 for A above. #:E Figure 5: Filter transducer, TYPE 2. 5An automaton A is complete iff at any state q and for any element a of the alphabet ~ there exists at least one transition leaving q labeled with a. In the case of deterministic automata, the transition is unique. 6We assume all states of a accessible. This is true if a is obtained by determinization. 234 Markers of TYPE 3 In other cases, one wishes to check the reverse constraint, that is that occurrences of # in the string s are not preceded (or followed) by any oc- currence of a pattern of ft. The transformation then simply consists of adding a loop at each non- final state of Id(a), and of making all states final. 
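Both kinds of modification just described are small rewrites of the deterministic automaton for Σ*β. The sketch below illustrates the TYPE 1 and TYPE 3 cases, with the automaton represented as plain dictionaries; this encoding is an assumption of the illustration, not the paper's.

    # Sketch of the marker constructions over a complete DFA for Sigma* beta,
    # given as {"states": set, "finals": set, "start": q0, "alphabet": set,
    # "delta": {(state, symbol): state}}.

    def type1_marker(dfa, marker="#"):
        """TYPE 1: insert `marker` after every prefix ending in a pattern of beta.

        Non-final states become final; each final state q is split into q and a
        copy that inherits q's outgoing transitions and is reached from q by an
        epsilon:marker transition."""
        trans, finals = {}, set()
        for q in dfa["states"]:
            if q in dfa["finals"]:
                copy = (q, "copy")
                finals.add(copy)                   # the copy is final, q is not
                trans[(q, "")] = (copy, marker)    # epsilon input, marker output
                for a in dfa["alphabet"]:
                    trans[(copy, a)] = (dfa["delta"][(q, a)], a)
            else:
                finals.add(q)                      # former non-final states
                for a in dfa["alphabet"]:
                    trans[(q, a)] = (dfa["delta"][(q, a)], a)
        return {"start": dfa["start"], "finals": finals, "trans": trans}

    def type3_filter(dfa, marker="#"):
        """TYPE 3: accept only strings whose markers are not preceded by a
        pattern of beta, erasing those markers: add a marker:epsilon loop at
        every non-final state and make all states final."""
        trans = {(q, a): (dfa["delta"][(q, a)], a)
                 for q in dfa["states"] for a in dfa["alphabet"]}
        for q in dfa["states"] - dfa["finals"]:
            trans[(q, marker)] = (q, "")           # erase the marker in place
        return {"start": dfa["start"], "finals": set(dfa["states"]), "trans": trans}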
Thus, a state such as that of figure 6 is trans- a:a c:c Figure 6: Non-final state q of a. formed into that of figure 5. We shall say that the corresponding transducer is of TYPE 3: cf. the transducer 12 for ~. The construction of these transducers (TYPE 1-3) can be generalized in various ways. In par- ticular: • One can add several alternative markers {#1,'", #k} after each occurrence of a pat- tern of 8 in a string. The result is then an automaton with transitions labeled with, for instance, ~1,'" ", ~k after each pattern of fl: cf. transducer f for ¢ above. • Instead of inserting a symbol, one can delete a symbol which would be necessarily present after each occurrence of a pattern of 8. For any regular expression a, de- fine M arker( a, type, deletions, insertions) as the transducer of type type constructed as previously described from a deterministic automaton repre- senting a, insertions and deletions being, respec- tively, the set of insertions and deletions the trans- ducer makes. Proposition 2 For any regular expression a, Marker(a, type, deletions, insertions) can be constructed from a deterministic automaton rep- resenting a in linear time and space with respect to the size of this automaton. Proof. We proved in the previous proposition that the modifications do indeed lead to the desired transducer for TYPE 1. The proof for other cases is similar. That the construction is linear in space is clear since at most one additional transition and state is created for final or non-final states 7. The overall time complexity of the construction is lin- ear, since the construction of ld(a) is linear in the ~For TYPE 2 and TYPE 3, no state is added but only a transition per final or non-final state. r = [reverse(Marker(E*reverse(p), 1, {>},0))] f = [reverse(Marker((~ U {>})*reverse(C> >), 1, {<1, <u},0))] 11 = [Marker(N*)L 2,0, {<1})]<~:<2 12 = [Marker($*A,3,@, {<2})] Figure 7: Expressions of the r, f, ll, and 12 using Marker. (4) (5) (6) (7) number of transitions of a and that other modifi- cations consisting of adding new states and transi- tions and making states final or not are also linear. D We just showed that Marker(a,type, de- letions, insertions) can be constructed in a very efficient way. Figure 7 gives the expressions of the four transducers r, f, ll, and 12 using Marker. Thus, these transducers can be constructed very efficiently from deterministic automata repre- senting s ~*reverse(p), (~ O {>})* reverse(t> >), and E*,~. The construction of r and f requires two reverse operations. This is because these two transducers insert material before p or ¢. 4. Extension to Weighted Rules In many applications, in particular in areas re- lated to speech, one wishes not only to give all possible analyses of some input, but also to give some measure of how likely each of the analyses is. One can then generalize replacements by consid- ering extended regular expressions, namely, using the terminology of formal language theory, ratio- nal power series (Berstel and Reutenauer, 1988; Salomaa and Soittola, 1978). The rational power series we consider here are functions mapping ~* to ~+ U {oo) which can be described by regular expressions over the alphabet (T~+ U {co}) x ~. S = (4a)(2b)*(3b) is an example of rational power series. It defines a function in the following way: it associates a non-null num- ber only with the strings recognized by the regu- lar expression ab*b. This number is obtained by adding the coefficients involved in the recognition of the string. 
The value associated with abbb, for instance, is (S, abbb) = 4 + 2 + 2 + 3 = 11. In general, such extended regular expressions can be redundant. Some strings can be matched SAs in the KK algorithm we denote by ¢> the set of the strings described by ¢ containing possibly oc- currences of > at any position. In the same way, sub- scripts such as >:> for a transducer r indicate that loops by >:> are added at all states of r. We de- note by reverse(a) the regular expression describing exactly the reverse strings of a if a is a regular expres- sion, or the reverse transducer of a if a is a transducer. 235 in different ways with distinct coefficients. The value associated with those strings is then the min- imum of all possible results. S' = (2a)(3b)(4b) + (5a)(3b*) matches abb with the different weights 2+3+4 -- 9 and 5+3+3 = 11. The mini- mum of the two is the value associated with abb: (S', abb) = 9. Non-negative numbers in the defi- nition of these power series are often interpreted as the negative logarithm of probabilities. This explains our choice of the operations: addition of the weights along the string recognition and min, since we are only interested in that result which has the highest probability 9. Rewrite rules can be generalized by letting ¢ be a rational power series. The result of the ap- plication of a generalized rule to a string is then a set of weighted strings which can be represented by a weighted automaton. Consider for instance the following rule, which states that an abstract nasal, denoted N, is rewritten as m in the context of a following labial: Y ---* m/__[+labial] (8) z Now suppose that this is only probabilistically true, and that while ninety percent of the time N does indeed become m in this environment, about ten percent of the time in real speech it be- comes n. Converting from probabilities to weights, one would say that N becomes m with weight a = -log(0.9), and n with weight fl = -log(0.1), in the stated environment. One could represent this by the following rule: N --* am + fin/__[+labial] (9) We define Weighted finite-state transducers as transducers such that in addition to input and out- put labels, each transition is labeled with a weight. The result of the application of a weighted transducer to a string, or more generally to an automaton is a weighted automaton. The corre- sponding operation is similar to the unweighted case. However, the weight of the transducer and those of the string or automaton need to be combined too, here added, during composition (Pereira et al., 1994). 9Using the terminology of the theory of languages, the functions we consider here are power series de- fined on the tropical semiring (7~+U{oo}, min, +, (x), 0) (Kuich and Salomaa, 1986). \"_/Too p:p/O N:N/O~ N:m/a N:n/l~ :0~ N:N/O Figure 8: Transducer representing the rule 9. We have generalized the composition opera- tion to the weighted case by introducing this com- bination of weights. The algorithm we described in the previous sections can then also be used to compile weighted rewrite rules. As an example, the obligatory rule 9 can be represented by the weighted transducer of Fig- ure 8 10. The following theorem extends to the weighted case the assertion proved by Kaplan and Kay (1994). Theorem 1 A weighted rewrite rule of the type defined above that does not rewrite its non- contextual part can be represented by a weighted finite-state transducer. Proof. 
The construction we described in the pre- vious section also provides a constructive proof of this theorem in the unweighted case. In case ¢ is a power series, one simply needs to use in that construction a weighted finite-state trans- ducer representing ¢. By definition of composition of weighted transducers, or multiplication of power series, the weights are then used in a way consis- tent with the definition of the weighted context- dependent rules, o 5. Experiments In order to compare the performance of the M- gorithm presented here with KK, we timed both algorithms on the compilation of individual rules taken from the following set (k • [0, 10]): a --* b~ c ~ (10) a --* b~ c k (11) 1°We here use the symbol ~ to denote all letters different from b, rn, n, p, and N. 236 In other words we tested twenty two rules where the left context or the right context is varied in length from zero to ten occurrences of c. For our experiments, we used the alphabet of a realistic application, the text analyzer for the Bell Labora- tories German text-to-speech system consisting of 194 labels. All tests were run on a Silicon Graph- ics IRIS Indigo 4000, 100 MhZ IP20 Processor, 128 Mbytes RAM, running IRIX 5.2. Figure 9 shows the relative performance of the two algo- rithms for the left context: apparently the per- formance of both algorithms is roughly linear in the length of the left context, but KK has a worse constant, due to the larger number of operations involved. Figure 10 shows the equivalent data for the right context. At first glance the data looks similar to that for the left context, until one no- tices that in Figure 10 we have plotted the time on a log scale: the KK algorithm is hyperexponential. What is the reason for this performance degra- dation in the right context? The culprits turn out to be the two intersectands in the expression of Rightcontext(p, <, >) in Figure 1. Consider for example the righthand intersectand, namely ~0 > P>0~0- > ~0, which is the complement of ~0 > P>0~0- > ~0- As previously in- dicated, the complementation Mgorithm. requires determinization, and the determinization of au- tomata representing expressions of the form ~*a, where c~ is a regular expression, is often very ex- pensive, specially when the expression a is already complex, as in this case. Figure 11 plots the behavior of determiniza- tion on the expression Z~0 > P>0Z~0- > ~0 for each of the rules in the set a ~ b/__c k, (k e [0, 10]). On the horizontal axis is the num- ber of arcs of the non-deterministic input machine, and on the vertical axis the log of the number of arcs of the deterministic machine, i.e. the ma- chine result of the determinization algorithm with- out using any minimization. The perfect linearity indicates an exponential time and space behav- ior, and this in turn explains the observed differ- ence in performance. In contrast, the construction of the right context machine in our algorithm in- volves only the single determinization of the au- tomaton representing ~*p, and thus is much less expensive. The comparison just discussed involves a rather artificiM ruleset, but the differences in performance that we have highlighted show up in real applications. Consider two sets of pronun- ciation rules from the Bell Laboratories German text-to-speech system: the size of the alphabet for this ruleset is 194, as noted above. 
The first rule- set, consisting of pronunciation rules for the ortho- graphic vowel <5> contains twelve rules, and the second ruleset, which deals with the orthographic q, o 11 /" -----nl / .../'/" / m/'= i1~11 j 0 0 0 `~'-" 0 / 0 ~ 0 0__0/0 0 0 I I I I 2 4 6 $ L=tr~lm 011.~t Comxt Figure 9: Compilation times for rules of the form a ~ b/ck , (k E [0, 10]). 9" o o ./ / I: / N,* a~,~t.vn I / o/" /./ ii/11 ~0~0~0 ".''" 0 i J i i i 2 4 6 e 10 Figure 10: Compilation times for rules of the form a ~ b/ c k, (k E [0, 10]). vowel <a> contains twenty five rules. In the ac- tual application of the rule compiler to these rules, one compiles the individual rules in each ruleset one by one, and composes them together in the order written, compacts them after each composi- tion, and derives a single transducer for each set. When done off-line, these operations of compo- Table 1: Comparison in a real example. I Rulesll KK II New I time space time space (s) states arcs (s) states arcs <5> 62 412 50,475 47 394 47,491 <a> 284 1,939 215,721 240 1,927 213,408 sition and compaction dominate the time corre- sponding to the construction of the transducer for each individual rule. The difference between the two algorithms appears still clearly for these two sets of rules. Table 1 shows for each algorithm the times in seconds for the overall construction, and the number of states and arcs of the output transducers. 6. Conclusion We briefly described a new algorithm for compiling context-dependent rewrite rules into finite-state transducers. Several additional methods can be used to make this algorithm even more efficient. The automata determinizations needed for this algorithm are of a specific type. They repre- 237 sent expressions of the type ~*¢ where ¢ is a reg- ular expression. Given a deterministic automaton representing ¢, such determinizations can be per- formed in a more efficient way using failure func- tions (Mohri, 1995). Moreover, the corresponding determinization is independent of ~ which can be very large in some applications. It only depends on the alphabet of the automaton representing ¢. One can devise an on-the-fly implementation of the composition algorithm leading to the final transducer representing a rule. Only the neces- sary part of the intermediate transducers is then expanded for a given input (Pereira et al., 1994). The resulting transducer representing a rule is often subsequentiable or p-subsequentiable. It can then be determinized and minimized (Mohri, 1994). This both makes the use of the transducer time efficient and reduces its size. We also indicated an extension of the theory of rule-compilation to the case of weighted rules, which compile into weighted finite-state transduc- ers. Many algorithms used in the finite-state the- ory and in their applications to natural language processing can be extended in the same way. To date the main serious application of this compiler has been to developing text-analyzers for text-to-speech systems at Bell Laboratories (Sproat, 1996): partial to more-or-less complete analyzers have been built for Spanish, Italian, French, Romanian, German, Russian, Mandarin and Japanese. However, we hope to also be able to use the compiler in serious applications in speech 2 - co ! O / /° /° / /° / / / / I I t SOO 1;10 1120 II S~S in I:bsr S Figure 11: Number of arcs in the non- deterministic automaton r representing PS = $ $ E~0 > P>0E>0- > E>0 versus the log of the num- ber of arcs in the automaton obtained by deter- minization of r. 
recognition in the future. Acknowledgements We wish to thank several colleagues of AT&T/_Bell Labs, in particular Fernando Pereira and Michael Riley for stimulating discussions about this work and Bernd MSbius for providing the German pro- nunciation rules cited herein. References Alfred V. Aho, John E. Hopcroft, and Jeffrey D. Ullman. 1974. The design and analysis of computer algorithms. Addison Wesley: Read- ing, MA. Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman. 1986. Compilers, Principles, Techniques and Tools. Addison Wesley: Reading, MA. Jean Berstel and Christophe Reutenauer. 1988. Rational Series and Their Languages. Springer-Verlag: Berlin-New York. Jean Berstel. 1979. Transductions and Context- Free Languages. Teubner Studienbucher: Stuttgart. Samuel Eilenberg. 1974-1976. Automata, Lan- guages and Machines, volume A-B. Academic Press. 238 Michael R. Garey and David S. Johnson. 1979. Computers and Intractability. Freeman and Company, New York. C. Douglas Johnson. 1972. Formal Aspects of Phonological Description. Mouton, Mouton, The Hague. Ronald M. Kaplan and Martin Kay. 1994. Regu- lar models of phonological rule systems. Com- putational Linguistics, 20(3). Lauri Karttunen. 1995. The replace operator. In 33 rd Meeting of the Association for Compu- tational Linguistics (ACL 95), Proceedings of the Conference, MIT, Cambridge, Massachus- setts. ACL. Wener Kuich and Arto Salomaa. 1986. Semir- ings, Automata, Languages. Springer-Verlag: Berlin-New York. Mark Liberman. 1994. Commentary on kaplan and kay. Computational Linguistics, 20(3). Robert McNaughton. 1994. The uniform halt- ing problem for one-rule semi-thue systems. Technical Report 94-18, Department of Com- puter Science, Rensselaer Polytechnic Insti- tute, Troy, New York. Mehryar Mohri. 1994. Compact representations by finite-state transducers. In 32 nd Meeting of the Association for Computational Linguistics (ACL 94), Proceedings of the Conference, Las Cruces, New Mexico. ACL. Mehryar Mohri. 1995. Matching patterns of an automaton. Lecture Notes in Computer Sci- ence, 937. Fernando C. N. Pereira, Michael Riley, and Richard Sproat. 1994. Weighted rational transductions and their application to human language processing. In ARPA Workshop on Human Language Technology. Advanced Re- search Projects Agency. Grzegorz Rozenberg and Arto Salomaa. 1980. The Mathematical Theory of L Systems. Aca- demic Press, New York. Arto Salomaa and Matti Soittola. 1978. Automata- Theoretic Aspects of Formal Power Series. Springer-Verlag: Berlin-New York. Richard Sproat. 1996. Multilingual text analy- sis for text-to-speech synthesis. In Proceed- ings of the ECAI-96 Workshop on Extended Finite State Models of Language, Budapest, Hungary. European Conference on Artificial Intelligence. L. J. Stockmeyer and A. R. Meyer. 1973. Word problems requiring exponential time. In Pro- ceedings of the 5 th Annual ACM Sympo- sium on Theory of Computing. Association for Computing Machinery, New York, 1-9. | 1996 | 31 |
Efficient Tabular LR Parsing Mark-Jan Nederhof Faculty of Arts University of Groningen P.O. Box 716 9700 AS Groningen The Netherlands markj an@let, rug. nl Giorgio Satta Dipartimento di Elettronica ed Informatica Universit£ di Padova via Gradenigo, 6/A 1-35131 Padova Italy satt a@dei, unipd, it Abstract We give a new treatment of tabular LR parsing, which is an alternative to Tomita's generalized LR algorithm. The advantage is twofold. Firstly, our treatment is con- ceptually more attractive because it uses simpler concepts, such as grammar trans- formations and standard tabulation tech- niques also know as chart parsing. Second- ly, the static and dynamic complexity of parsing, both in space and time, is signifi- cantly reduced. 1 Introduction The efficiency of LR(k) parsing techniques (Sippu and Soisalon-Soininen, 1990) is very attractive from the perspective of natural language processing ap- plications. This has stimulated the computational linguistics community to develop extensions of these techniques to general context-free grammar parsing. The best-known example is generalized LR pars- ing, also known as Tomita's algorithm, described by Tomita (1986) and further investigated by, for ex- ample, Tomita (1991) and Nederhof (1994a). Des- pite appearances, the graph-structured stacks used to describe Tomita's algorithm differ very little from parse fables, or in other words, generalized LR pars- ing is one of the so called tabular parsing algorithms, among which also the CYK algorithm (Harrison, 1978) and Earley's algorithm (Earley, 1970) can be found. (Tabular parsing is also known as chart pars- ing.) In this paper we investigate the extension of LR parsing to general context-free grammars from a more general viewpoint: tabular algorithms can of- ten be described by the composition of two construc- tions. One example is given by Lang (1974) and Billot and Lang (1989): the construction of push- down automata from grammars and the simulation of these automata by means of tabulation yield dif- ferent tabular algorithms for different such construc- tions. Another example, on which our presentation is based, was first suggested by Leermakers (1989): a grammar is first transformed and then a standard tabular algorithm along with some filtering condi- tion is applied using the transformed grammar. In our case, the transformation and the subsequent ap- plication of the tabular algorithm result in a new form of tabular LR parsing. Our method is more efficient than Tomita's algo- rithm in two respects. First, reduce operations are implemented in an efficient way, by splitting them in- to several, more primitive, operations (a similar idea has been proposed by Kipps (1991) for Tomita's al- gorithm). Second, several paths in the computation that must be simulated separately by Tomita's algo- rithm are collapsed into a single computation path, using state minimization techniques. Experiments on practical grammars have indicated that there is a significant gain in efficiency, with regard to both space and time requirements. Our grammar transformation produces a so called cover for the input grammar, which together with the filtering condition fully captures the specifica- tion of the method, abstracting away from algorith- mic details such as data structures and control flow. Since this cover can be easily precomputed, imple- menting our LR parser simply amounts to running the standard tabular algorithm. 
This is very attrac- tive from an application-oriented perspective, since many actual systems for natural language processing are based on these kinds of parsing algorithm. The remainder of this paper is organized as fol- lows. In Section 2 some preliminaries are discussed. We review the notion of LR automaton in Section.3 and introduce the notion of 2LR automaton in Sec- tion 4. Then we specify our tabular LR method in Section 5, and provide an analysis of the algorithm in Section 6. Finally, some empirical results are giv- 239 en in Section 7, and further discussion of our method is provided in Section 8. 2 Definitions Throughout this paper we use standard formal lan- guage notation. We assume that the reader is famil- iar with context-free grammar parsing theory (Har- rison, 1978). A context-free grammar (CFG) is a 4-tuple G = (S, N, P, S), where S and N are two finite disjoint sets of terminal and nonterminal symbols, respec- tively, S E N is the start symbol, and P is a finite set of rules. Each rule has the form A ---* a with A E N and a E V*, where V denotes N U E. The size of G, written I G I, is defined as E(A--*a)EP [Aot I; by I a I we mean the length of a string of symbols a. We generally use symbols A,B,C,... to range over N, symbols a, b, c,... to range over S, symbols X, Y, Z to range over V, symbols ~, 8, 7,... to range over V*, and symbols v, w, z,... to range over S*. We write e to denote the empty string. A CFG is said to be in binary form if ~ E {e} U V t.J N 2 for all of its rules A --* c~. (The binary form does not limit the (weak) generative capaci- ty of context-free grammars (Harrison, 1978).) For technicM reasons, we sometimes use the augment- ed grammar associated with G, defined as G t = (St, N t, pt, St), where St, t> and <1 are fresh sym- bols, S t = SU {t>,<l}, N t = NU {S t } and pt = p U {S t ~ t>S<~}. A pushdown automaton (PDA) is a 5-tuple .4 = (Z, Q, T, qi,, q/in), where S, Q and T are finite sets of input symbols, stack symbols and transitions, re- spectively; qin E Q is the initiM stack symbol and q/i, E Q is the finM stack symbol. 1 Each transition has the form 61 ~-~ 62, where 61,82 E Q*, 1 < 161 l, 1 < 1621 < 2, and z = e or z = a. We generally use symbols q, r, s,... to range over Q, and the symbol 6 to range over Q*. Consider a fixed input string v E ~*. A config- uration of the automaton is a pair (6, w) consisting of a stack 6 E Q* and the remaining input w, which is a suffix of the input string v. The rightmost sym- bol of 6 represents the top of the stack. The initial configuration has the form (qi~, v), where the stack is formed by the initial stack symbol. The final con- figuration has the form (qi, q/i,, e), where the stack is formed by the final stack symbol stacked upon the initial stack symbol. ZWe dispense with the notion of state, traditionally incorporated in the definition of PDA. This does not affect the power of these devices, since states can be encoded within stack symbols and transitions. The application of a transition 81 ~-~ 82 is de- scribed as follows. If the top-most symbols of the stack are 61, then these symbols may be replaced by 62, provided that either z = e, or z = a and a is the first symbol of the remaining input. Furthermore, if z = a then a is removed from the remaining input. Formally, for a fixed PDA .4 we define the bina- ry relation t- on configurations as the least relation satisfying (881, w) ~- (662, w) if there is a transition 61 ~ 62, and (881, aw) t- (682, w) if there is a tran- sition 61 a 82. 
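These definitions can be encoded directly. In the sketch below a configuration is a (stack, remaining input) pair of tuples and a transition is a triple; this concrete representation, and the naive search strategy, are assumptions of the sketch rather than part of the formal definition.

    # Sketch of the PDA step relation |- and of acceptance.  A transition is
    # (theta1, z, theta2) with theta1, theta2 tuples of stack symbols and z
    # either "" (an epsilon move) or a single input symbol.

    def successors(config, transitions):
        """All configurations reachable from `config` in one |- step."""
        stack, rest = config
        result = []
        for theta1, z, theta2 in transitions:
            n = len(theta1)
            if n <= len(stack) and stack[len(stack) - n:] == theta1:
                if z == "":
                    result.append((stack[:len(stack) - n] + theta2, rest))
                elif rest and rest[0] == z:
                    result.append((stack[:len(stack) - n] + theta2, rest[1:]))
        return result

    def accepts(pda, word):
        """Search from (q_in, word) for the final configuration (q_in q_fin, empty).

        A plain search over configurations; termination is only guaranteed on
        small, well-behaved examples, since stacks may in general grow."""
        q_in, q_fin, transitions = pda
        start = ((q_in,), tuple(word))
        goal = ((q_in, q_fin), ())
        seen, agenda = {start}, [start]
        while agenda:
            config = agenda.pop()
            if config == goal:
                return True
            for nxt in successors(config, transitions):
                if nxt not in seen:
                    seen.add(nxt)
                    agenda.append(nxt)
        return False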
The recognition of a certain input v is obtained if starting from the initial configuration for that input we can reach the final configuration by repeated application of transitions, or, formally, if (qin, v) I"* (q~,, aria, e), where t-* denotes the re- flexive and transitive closure of b. By a computation of a PDA we mean a sequence (qi,,v) t- (61,wl) h... t- (6n,wn), n > 0. A PDA is called deterministic if for all possible configurations at most one transition is applicable. A PDA is said to be in binary form if, for all transitions 61 ~L~ 62, we have 1611 < 2. 3 Ll:t automata Let G = (S, N, P, S) be a CFG. We recall the no- tion of LR automaton, which is a particular kind of PDA. We make use of the augmented grammar G t = (st, N t, pt, S t) introduced in Section 2. Let !LR : {A ~ a • ~ I (A --~ aft) E pt}. We introduce the function closure from 2 I~'R to 2 ILR and the function goto from 2 ILR × V to 2 l~rt. For any q C ILK, closure(q) is the smallest set such that (i) q c closure(q); and (ii) (B --~ c~ • Aft) e closure(q) and (A ~ 7) e pt together imply (A --* • 7) E closure(q). We then define goto(q, X) = {A ---* ~X • fl I (A --* a • Xfl) E closure(q)}. We construct a finite set T~Lp ~ as the smallest collec- tion of sets satisfying the conditions: (i) {S t ~ t>. S<~} E ~'~Ll=t; and (ii) for every q E ~T~LR and X E V, we have goto(q, X) E 7~LR, provided goto(q, X) ~ 0. Two elements from ~Lt~ deserve special attention: qm = {S t --+ t> * S<~}, and q/in, which is defined to be the unique set in "~Ll:t containing (S t ~ t>S * <~); in other words, q/in = goto(q~n, S). 240 For A • N, an A-redex is a string qoqlq2"" "qm, m _> 0, of elements from T~Lrt, satisfying the follow- ing conditions: (i) (A ~ a .) • closure(q,,), for some a = X1X~. • • • Xm ; and (ii) goto(q~_l, Xk) = qk, for 1 < k < m. Note that in such an A-redex, (A --~ • X1Xg.... Xm) • closure(qo), and (A ~ X1...Xk * Xk+z'"Xm) E qk, for 0 < k < m. The LR automaton associated with G is now in- troduced. Definition 1 .ALR = (S, QLR, TLR, qin, q~n), where QLR "- ~'~LR, qin = {S t -'* t> • S<~}, qlin = goto(qin, S), and TLR contains: (i) q ~ q q', for every a • S and q, q~ • ~LR such that q' = goto(q, a); (ii) q5 ~-L q q', for every A • N, A-redex q~, and q' • TiLa such that q~ = goto(q, A). Transitions in (i) above are called shift, transitions in (ii) are called reduce. 4 2LR Automata The automata .At, rt defined in the previous section are deterministic only for a subset of the CFGs, called the LR(0) grammars (Sippu and Soisalon- Soininen, 1990), and behave nondeterministical- ly in the general case. When designing tabular methods that simulate nondeterministic computa- tions of ~4LR, two main difficulties are encountered: • A reduce transition in .ALrt is an elementary op- eration that removes from the stack a number of elements bounded by the size of the underly- ing grammar. Consequently, the time require- ment of tabular simulation of .AL~ computa- tions can be onerous, for reasons pointed out by Sheil (1976) and Kipps (1991). • The set 7~Lrt can be exponential in the size of the grammar (Johnson, 1991). If in such a case the computations of.ALR touch upon each state, then time and space requirements of tabular simulation are obviously onerous. The first issue above is solved here by re- casting .ALR in binary form. This is done by considering each reduce transition as a se- quence of "pop" operations which affect at most two stack symbols at a time. 
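The closure and goto functions defined above, and the collection R_LR built from them, admit a direct fixpoint implementation; the item representation (lhs, rhs, dot position) used in this sketch is an assumption of the illustration.

    # Sketch of closure, goto and the construction of R_LR over LR(0)-style
    # items.  A grammar rule is (lhs, rhs) with rhs a tuple of symbols; an item
    # A -> alpha . beta is (lhs, rhs, dot).

    def closure(items, rules):
        """Smallest item set containing `items` and closed under prediction."""
        result = set(items)
        changed = True
        while changed:
            changed = False
            for (lhs, rhs, dot) in list(result):
                if dot < len(rhs):
                    x = rhs[dot]                      # the symbol after the dot
                    for (a, gamma) in rules:
                        if a == x and (a, gamma, 0) not in result:
                            result.add((a, gamma, 0))
                            changed = True
        return result

    def goto(items, symbol, rules):
        """Items A -> alpha X . beta with A -> alpha . X beta in closure(items)."""
        return frozenset((lhs, rhs, dot + 1)
                         for (lhs, rhs, dot) in closure(items, rules)
                         if dot < len(rhs) and rhs[dot] == symbol)

    def lr_states(rules, start_item, symbols):
        """R_LR: all non-empty goto images reachable from {start_item}."""
        start = frozenset([start_item])
        states, agenda = {start}, [start]
        while agenda:
            q = agenda.pop()
            for x in symbols:
                q2 = goto(q, x, rules)
                if q2 and q2 not in states:
                    states.add(q2)
                    agenda.append(q2)
        return states

    # For the augmented grammar, start_item would be ("S'", (">", "S", "<"), 1),
    # with ">" and "<" standing in for the end markers, i.e. the item with the
    # dot placed just after the left end marker.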
(See also Lang (1974), Villemonte de la Clergerie (1993) and Nederhof (1994a), and for LR parsing specifically gipps (1991) and Leermakers (19925).) The follow- ing definition introduces this new kind of automaton. I ! Definition 2 A~R = (~, QLR' TLR., qin, q1~n), where q, LR ----- 7~LR U ILR, qin = {S t "* I> • S<2}, qJin = goto(qin, S) and TLR contains: (i) q ~ q q,, for every a • S and q, q' • 7~Lrt such that q' = goto(q, a); (ii) q A. q (A --* a .), for every q • TiLR and (A • ) • closure(q); (iii) q (A --* aX • ,8) ~ (A ~ a • X,8), for every q • ~LR and (A ~ aX . ,8) • q; (iv) q (A --* * c~) A, q q', for every q, q' • 7~LR and (A ~ ~) • pt such that q' = goto(q, A). Transitions in (i) above are again called shift, tran- sitions in (ii) are called initiate, those in (iii) are called gathering, and transitions in (iv) are called goto. The role of a reduce step in .ALR is taken over in .A£K by an initiate step, a number of gathering steps, and a goto step. Observe that these steps in- volve the new stack symbols (A --~ a • ,8) • ILI~ that are distinguishable from possible stack symbols {A .-* a • ,8} • '/'~LR- We now turn to the second above-mentioned prob- lem, regarding the size of set 7dgR. The problem is in part solved here as follows. The number of states in 7~Lrt is considerably reduced by identify- ing two states if they become identical after items A --~ cr • fl from ILrt have been simplified to only the suffix of the right-hand side ,8. This is rem- iniscent of techniques of state minimization for fi- nite automata (Booth, 1967), as they have been ap- plied before to LR parsing, e.g., by Pager (1970) and Nederhof and Sarbo (1993). Let G t be the augmented grammar associated with a CFG G, and let I2LI~ -- {fl I (A ---, a,8) e pt}. We define variants of the closure and 9oto func- tions from the previous section as follows. For any set q C I2Lt~, closurel(q) is the smallest collection of sets such that (i) q C elosure'(q); and (ii) (Aft) e closure' (q) and (A ---* 7) • pt together imply (7) • closure'(q). Also, we define goto'(q, x) = {,8 I (x,8) ~ closure'(q)}. We now construct a finite set T~2Lrt as the smallest set satisfying the conditions: 241 (i) {S<l} 6 7~2LR; and (ii) for every q 6 T~2LI:t and X • V, we have goto'(q, X) • T~2LR, provided goto'(q, X) # @. As stack symbols, we take the elements from I2LR and a subset of elements from (V × ~2Lrt): Q2LR = {(X,q) I 3q'[goto'(q',X) = q]} U I2LR In a stack symbol of the form (X, q), the X serves to record the grammar symbol that has been recog- nized last, cf. the symbols that formerly were found immediately before the dots. The 2LK automaton associated with G can now be introduced. Z T ' ' Definition 3 .A2LR ---~ ( , Q2LR, 2LR, qin, qfin), where Q LR is as defined above, = (C>, q~. = (S, goto'({S.~}, S)), and T2LR contains: (i) (X,q) ~ (X,q) (a,q'), for every a • Z and (X, q), (a, q') • Q2Lrt such that q' = goto'(q, a); (ii) (X,q) ~+ (X,q)(e), for every (X,q) • Q2LR such that e • closure'(q); (iii) (Z,q)(~) ~ (Zg), for every (X,q) • Q2LR and 19 • q; (iv) (X,q) (o~) ~ (X,q) (A,q'), for every (X,q), (A,q') • Q2LR and (A ---~ c~) • pt such that q' = goto'(q, A). Note that in the case of a reduce/reduce conflict with two grammar rules sharing some suffix in the right-hand side, the gathering steps of A2Lrt will treat both rules simultaneously, until the parts of the right-hand sides are reached where the two rules differ. (See Leermakers (1992a) for a similar sharing of computation for common suffixes.) 
An interesting fact is that the automaton .A2LR is very similar to the automaton .ALR constructed for a grammar transformed by the transformation rtwo given by Nederhof and Satta (1994). 2 5 The algorithm This section presents a tabular LR parser, which is the main result of this paper. The parser is derived from the 2LR automata introduced in the previous section. Following the general approach presented by Leermakers (1989), we simulate computations of 2For the earliest mention of this transformation, we have encountered pointers to Schauerte (1973). Regret- tably, we have as yet not been able to get hold of a copy of this paper. these devices using a tabular method, a grammar transformation and a filtering function. We make use of a tabular parsing algorithm which is basically an asynchronous version of the CYK al- gorithm, as presented by Harrison (1978), extended to productions of the forms A ---* B and A ~ and with a left-to-right filtering condition. The al- gorithm uses a parse table consisting in a 0-indexed square array U. The indices represent positions in the input string. We define Ui to be Uk<i Uk,i. Computation of the entries of U is moderated by a filtering process. This process makes use of a function pred from 2 N to 2 N, specific to a certain context-free grammar. We have a certain nontermi- nal Ainit which is initially inserted in U0,0 in order to start the recognition process. We are now ready to give a formal specification of the tabular algorithm. Algorithm 1 Let G = (~,N,P,S) be a CFG in binary form, let pred be a function from 2 N to 2 N, let Ai,,t be the distinguished element from N, and let v = ala2. "'an 6 ~* be an input string. We compute the least (n+ 1) x (n+ 1) table U such that Ainit 6 U0,0 and (i) A 6 Uj_ 1,j if (A ~ aj) 6 P, A 6 pred(Uj_l); (ii) A 6 Uj,j if (A --+ e) 6 P, A E pred(Uj); (iii) A 6 Ui,j if B 6 Ui,~, C 6 Uk,j, (A ---. BC) 6 P, A 6 pred(Ui); (iv) A 6 Uij if B 6 Uij, (A ~ B) 6 P, A 6 pred(UO. The string has been accepted when S 6 U0,,. We now specify a grammar transformation, based on the definition of .A2LR. Definition 4 Let A2LR = (S, Q2LR, T2LR, ' qin,q~,) be the 2L1% automaton associated with a CFG G. The 2LR cover associated with G is the CFG C2I r (G) = ( Q2Lr , P2I rt, where the rules in P2LR are given by: (i) (a,q') --* a, for every (X, q) ~-~ (X, q) (a, q') E T2LR; (ii) (e) ~ ¢, for every (X, q) ~-* (X, q) (e) 6 T2LR; (iii) (X~) ~ (X, q) (~), for every (X, q) (~) ~-* (X~) 6 T2LR; 242 (iv) (A,q') --, (a), for every (X, q) (or) ~-~ (X, q) (A, q') E T2La. Observe that there is a direct, one-to-one correspon- dence between transitions of.A2La and productions of C2LR(G). The accompanying function pred is defined as fol- lows (q, q', q" range over the stack elements): pred(v) = {q I q'q" ~-~ q E T2La} U {q ] q' E r, q' ~*q'qET~La} U {q I q'Er, q'q"~-~q'qET2La}. The above definition implies that only the tabular equivalents of the shift, initiate and goto transitions are subject to actual filtering; the simulation of the gathering transitions does not depend on elements in r. Finally, the distinguished nonterminal from the cover used to initialize the table is qin'l Thus we start with (t>, {S<l)) E U0,0. The 2LR cover introduces spurious ambiguity: where some grammar G would allow a certain num- ber of parses to be found for a certain input, the grammar C2Lrt(G) in general allows more parses. This problem is in part solved by the filtering func- tion pred. 
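Before turning to how the remaining spurious ambiguity is removed, Algorithm 1 itself can be made concrete. The following minimal Python sketch computes the table U as a naive fixpoint rather than with the asynchronous, agenda-driven control of the paper; the binary-form grammar is assumed to be given with its productions grouped by shape, pred is the filtering function, and all identifiers are choices of this sketch.

```python
# Minimal sketch of Algorithm 1 (tabular recognition with left-to-right
# filtering), computed as a naive fixpoint.  Assumed inputs: `lexical`
# contains pairs (A, a) for rules A -> a, `empty` the nonterminals with
# A -> eps, `unit` pairs (A, B), `binary` triples (A, B, C); `pred` is the
# filtering function and `a_init` the distinguished nonterminal in U[0,0].

def recognize(words, lexical, empty, unit, binary, pred, a_init, goal):
    n = len(words)
    U = {(i, j): set() for i in range(n + 1) for j in range(i, n + 1)}
    U[0, 0].add(a_init)

    def column(i):                     # U_i = union of U[k, i] for k <= i
        return set().union(*(U[k, i] for k in range(i + 1)))

    changed = True
    while changed:                     # iterate until no entry grows
        changed = False
        for j in range(n + 1):
            for i in range(j, -1, -1):
                allowed = pred(column(i))
                new = set()
                if i == j - 1:         # clause (i): A -> a_j
                    new |= {a for (a, w) in lexical if w == words[j - 1]}
                if i == j:             # clause (ii): A -> eps
                    new |= set(empty)
                for k in range(i, j + 1):          # clause (iii): A -> B C
                    new |= {a for (a, b, c) in binary
                            if b in U[i, k] and c in U[k, j]}
                new |= {a for (a, b) in unit if b in U[i, j]}   # clause (iv)
                new = (new & allowed) - U[i, j]
                if new:
                    U[i, j] |= new
                    changed = True
    return goal in U[0, n]
```

When the grammar is the 2LR cover of Definition 4, the table is initialized with the distinguished nonterminal q'_in, and acceptance corresponds to finding q'_fin in U[0, n], the entry from which the parse trees are computed next.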
The remaining spurious ambiguity is avoided by a particular way of constructing the parse trees, described in what follows. After Algorithm 1 has recognized a given in- put, the set of all parse trees can be computed as tree(q~n, O, n) where the function tree, which deter- mines sets of either parse trees or lists of parse trees for entries in U, is recursively defined by: (i) tree((a, q'), i, j) is the set {a}. This set contains a single parse tree Consisting of a single node labelled a. (ii) tree(e, i, i) is the set {c}. This set consists of an empty list of trees. (iii) tree(Xl?,i,j) is the union of the sets T. k (x~),i,j, where i < k < j, (8) E Uk,j, and there is at least one (X, q) E Ui,k and (X~) ---* (X, q) (8) in C2La(G), for some q. For each such k, select one such q. We define 7:, ~ = {t.ts I t E ( X fl ),i,j tree((X,q),i,k) A ts E tree(fl, k,j)}. Each t. ts is a list of trees, with head t and tail ts. (iv) tree( ( A, q'), i, j) is the union of the sets T. a where (~) E Uij is such that ( A,ql ),i,j ' (A, q') ---* (c~) in C2La(G). We define T ~ - (a,q'),i,j {glue(A, ts) l ts E tree(c~,i,j)}. The function glue constructs a tree from a fresh root node labelled A and the trees in list ts as immediate subtrees. We emphasize that in the third clause above, one should not consider more than one q for given k in order to prevent spurious ambiguity. (In fact, for fixed X, i, k and for different q such that (X, q) E Ui,k, tvee((X, q),i, k) yields the exact same set of trees.) With this proviso, the degree of ambiguity, i.e. the number of parses found by the algorithm for any input, is reduced to exactly that of the source grammar. A practical implementation would construct the parse trees on-the-fly, attaching them to the table entries, allowing packing and sharing of subtrees (cf. the literature on parse forests (Tomita, 1986; Ell- lot and Lang, 1989)). Our algorithm actually only needs one (packed) subtree for several ( X, q) E Ui,k with fixed X,i,k but different q. The resulting parse forests would then be optimally compact, con- trary to some other LR-based tabular algorithms, as pointed out by Rekers (1992), Nederhof (1993) and Nederhof (1994b). 6 Analysis of the algorithm In this section, we investigate how the steps per- formed by Algorithm 1 (applied to the 2LR cover) relate to those performed by .A2LR, for the same in- put. We define a subrelation ~+ of t -+ as: (6, uw) ~+ (66',w) if and only if (6, uw) = (6, zlz2".'zmw) t- (88l,z2..-zmw) ~- ... ~ (68re,w) = (86',w), for some m > 1, where I~kl > 0 for all k, 1 < k < m. Informally, we have (6, uw) ~+ (6~', w) if configura- tion (~8', w) can be reached from (6, uw) without the bottom-most part 8 of the intermediate stacks being affected by any of the transitions; furthermore, at least one element is pushed on top of 6. The following characterization relates the automa- ton .A2Lrt and Algorithm 1 applied to the 2LR cover. Symbol q E Q~Lrt is eventually added to Uij if and only if for some 6: (q;n,al...an) ~-* (di, ai+l...an) ~+ (~q, aj+l...an). In words, q is found in entry Ui,j if and only if, at input position j, the automaton would push some element q on top of some lower-part of the stack that remains unaffected while the input from i to j is being read. The above characterization, whose proof is not re- ported here, is the justification for calling the result- ing algorithm tabular LR parsing. In particular, for a grammar for which .A2Lrt is deterministic, i.e. 
for an LR(0) grammar, the number of steps performed 243 by J42LR and the number of steps performed by the above algorithm are exactly the same. In the case of grammars which are not LR(0), the tabular LR algo- rithm is more efficient than for example a backtrack realisation of -A2LR. For determining the order of the time complex- ity of our algorithm, we look at the most expen- sive step, which is the computation of an element (Xfl) E Ui,j from two elements (X, q) e Ui,k and (t3) E Uk,j, through (X, q) (fl) ,--% (Xfl) E T2LR. In a straightforward realisation of the algorithm, this step can be applied O(IT2LRI" Iv 13) times (once for each i, k,j and each transition), each step taking a constant amount of time. We conclude that the time complexity of our algorithm is O([ T2LR] • IV [Z). As far as space requirements are concerned, each set Ui,j or Ui contains at most I O2w.RI elements. (One may assume an auxiliary table storing each Ui.) This results in a space complexity O(I Q2LRI" Iv 12). The entries in the table represent single stack ele- ments, as opposed to pairs of stack elements follow- ing Lang (1974) and Leermakers (1989). This has been investigated before by Nederhof (1994a, p. 25) and Villemonte de la Clergerie (1993, p. 155). 7 Empirical results We have performed some experiments with Algo- rithm 1 applied to ,A2L R and .A ~ for 4 practical LR, context-free grammars. For ,4 ~ LR a cover was used analogous to the one in Definition 4; the filtering function remains the same. The first grammar generates a subset of the pro- gramming language ALGOL 68 (van Wijngaarden and others, 1975). The second and third grammars generate a fragment of Dutch, and are referred to as the CORRie grammar (Vosse, 1994) and the Deltra grammar (Schoorl and Belder, 1990), respectively. These grammars were stripped of their arguments in order to convert them into context-free grammars. The fourth grammar, referred to as the Alvey gram- mar (Carroll, 1993), generates a fragment of English and was automatically generated from a unification- based grammar. The test sentences have been obtained by au- tomatic generation from the grammars, using the Grammar Workbench (Nederhof and Koster, 1992), which uses a random generator to select rules; there- fore these sentences do not necessarily represent in- put typical of the applications for which the gram- mars were written. Table 1 summarizes the test ma- terial. Our implementation is merely a prototype, which means that absolute duration of the parsing process G =(Z,N,P,S) ALGOL 68 ~ CORRie Deltra Alvey Table 1: The test material: the four grammars and some of their dimensions, and the average length of the test sentences (20 sentences of various length for each grammar). 4 LR A2LR G space ] time space ] time ALGOL 68 327 375 234 343 CORRie 7548 28028 5131 22414 Deltra 11772 94824 6526 70333 Alvey 599 1147 354 747 Table 2: Dynamic requirements: average space and time per sentence. is little indicative of the actual efficiency of more sophisticated implementations. Therefore, our mea- surements have been restricted to implementation- independent quantities, viz. the number of elements stored in the parse table and the number of elemen- tary steps performed by the algorithm. In a practical implementation, such quantities will strongly influ- ence the space and time complexity, although they do not represent the only determining factors. Fur- thermore, all optimizations of the time and space efficiency have been left out of consideration. 
Table 2 presents the costs of parsing the test sen- tences. The first and third columns give the number of entries stored in table U, the second and fourth columns give the number of elementary steps that were performed. An elementary step consists of the derivation of ! one element in QLR or Q2LR from one or two other elements. The elements that are used in the filter- ing process are counted individually. We give an example for the case of .A~R. Suppose we derive an element q~ E Ui,j from an element (A -. • c~) E Ui,j, warranted by two elements ql,q2 E Ui, ql ~ q2, through pred, in the presence of ql (A --* • c~) ql q' e T~.~ and q2 (A ---* • c~) ~-~ q2 q' E T~R. We then count two parsing steps, one for ql and one for q2. Table 2 shows that there is a significant gain in space and time efficiency when moving from ,4~a to 244 G ALGOL 68 CORRie Deltra Alvey .A ! LR [T~LR[ I [Q[a[ [ [T~R[ 434 1,217 13,844 600 1,741 22,129 856 2,785 54,932 3,712 8,784 1,862,492 ,A2LR In2LRI [ [O2La[ [ IT2Lrd 109 724 12,387 185 821 15,569 260 1,089 37,510 753 3,065 537,852 Table 3: Static requirements. ,A2LR. Apart from the dynamic costs of parsing, we have also measured some quantities relevant to the con- struction and storage of the two types of tabular LR parser. These data are given in Table 3. We see that the number of states is strongly re- duced with regard to traditional LR parsing. In the case of the Alvey grammar, moving from [T~LR [ to ]T~2LR[ amounts to a reduction to 20.3 %. Whereas time- and space-efficient computation of T~LR for this grammar is a serious problem, computation of T~2La will not be difficult on any modern computer. Al- so significant is the reduction from [T~R [ to [T2LR[, especially for the larger grammars. These quanti- ties correlate with the amount of storage needed for naive representation of the respective automata. 8 Discussion Our treatment of tabular LR parsing has two impor- tant advantages over the one by Tomita: * It is conceptually simpler, because we make use of simple concepts such as a grammar trans- formation and the well-understood CYK al- gorithm, instead of a complicated mechanism working on graph-structured stacks. • Our algorithm requires fewer LR states. This leads to faster parser generation, to smaller parsers, and to reduced time and space com- plexity of parsing itself. The conceptual simplicity of our formulation of tabular LR parsing allows comparison with other tabular parsing techniques, such as Earley's algo- rithm (Earley, 1970) and tabular left-corner pars- ing (Nederhof, 1993), based on implementation- independent criteria. This is in contrast to experi- ments reported before (e.g. by Shann (1991)), which treated tabular LR parsing differently from the other techniques. The reduced time and space complexities reported in the previous section pertain to the tabular real- isation of two parsing techniques, expressed by the automata A~, R and A2La. The tabular realisation of the former automata is very close to a variant of Tomita's algorithm by Kipps (1991). The objective of our experiments was to show that the automata ~4~La provide a better basis than .A~a for tabular LR parsing with regard to space and time complexity. Parsing algorithms that are not based on the LR technique have however been left out of con- sideration, and so were techniques for unification grammars and techniques incorporating finite-state processes. 
3 Theoretical considerations (Leermakers, 1989; Schabes, 1991; Nederhof, 1994b) have suggested that for natural language parsing, LR-based techniques may not necessarily be superior to other parsing techniques, although convincing empirical data to this effect has never been shown. This issue is dif- ficult to resolve because so much of the relative ef- ficiency of the different parsing techniques depends on particular grammars and particular input, as well as on particular implementations of the techniques. We hope the conceptual framework presented in this paper may at least partly alleviate this problem. Acknowledgements The first author is supported by the Dutch Organiza- tion for Scientific Research (NWO), under grant 305- 00-802. Part of the present research was done while the second author was visiting the Center for Lan- guage and Speech Processing, Johns Hopkins Uni- versity, Baltimore, MD. We received kind help from John Carroll, Job Honig, Kees Koster, Theo Vosse and Hans de Vreught in finding the grammars mentioned in this paper. Generous help with locating relevant litera- ture was provided by Anton Nijholt, Rockford Ross, and Arnd Ruflmann. 3As remarked before by Nederhof (1993), the algo- rithms by Schabes (1991) and Leermakers (1989) are not really related to LR parsing, although some notation used in these papers suggests otherwise. 245 References Billot, S. and B. Lang. 1989. The structure of shared forests in ambiguous parsing. In 27th An- nual Meeting of the ACL, pages 143-151. Booth, T.L. 1967. Sequential Machines and Au- tomata Theory. Wiley, New York. Carroll, J.A. 1993. Practical unification-based pars- ing of natural language. Technical Report No. 314, University of Cambridge, Computer Labora- tory, England. PhD thesis. Earley, J. 1970. An efficient context-free parsing al- gorithm. Communications of the ACM, 13(2):94- 102. Harrison, M.A. 1978. Introduction to Formal Lan- guage Theory. Addison-Wesley. Johnson, M. 1991. The computational complexi- ty of GLR parsing. In Tomita (1991), chapter 3, pages 35-42. Kipps, J.R. 1991. GLR parsing in time O(n3). In Tomita (1991), chapter 4, pages 43-59. Lang, B. 1974. Deterministic techniques for ef- ficient non-deterministic parsers. In Automata, Languages and Programming, 2nd Colloquium, LNCS 14, pages 255-269, Saarbrficken. Springer- Verlag. Leermakers, R. 1989. How to cover a grammar. In 27th Annual Meeting of the ACL, pages 135-142. Leermakers, R. 1992a. A recursive ascent Earley parser. Information Processing Letters, 41(2):87- 91. Leermakers, R. 1992b. Recursive ascent parsing: from Earley to Marcus. Theoretical Computer Science, 104:299-312. Nederhof, M.J. 1993. Generalized left-corner pars- ing. In Sixth Conference of the European Chapter of the ACL, pages 305-314. Nederhof, M.J. 1994a. Linguistic Parsing and Pro- gram Transformations. Ph.D. thesis, University of Nijmegen. Nederhof, M.J. 1994b. An optimal tabular parsing algorithm. In 32nd Annual Meeting of the ACL, pages 117-124. Nederhof, M.J. and K. Koster. 1992. A customized grammar workbench. In J. Aarts, P. de Haan, and N. Oostdijk, editors, English Language Cor- pora: Design, Analysis and Exploitation, Papers from the thirteenth International Conference on English Language Research on Computerized Cor- pora, pages 163-179, Nijmegen. Rodopi. Nederhof, M.J. and J.J. Sarbo. 1993. Increasing the applicability of LR parsing. In Third Interna- tional Workshop on Parsing Technologies, pages 187-201. Nederhof, M.J. and G. Satta. 1994. 
An extended theory of head-driven parsing. In 32nd Annual Meeting of the ACL, pages 210-217.
Pager, D. 1970. A solution to an open problem by Knuth. Information and Control, 17:462-473.
Rekers, J. 1992. Parser Generation for Interactive Environments. Ph.D. thesis, University of Amsterdam.
Schabes, Y. 1991. Polynomial time and space shift-reduce parsing of arbitrary context-free grammars. In 29th Annual Meeting of the ACL, pages 106-113.
Schauerte, R. 1973. Transformationen von LR(k)-Grammatiken. Diplomarbeit, Universität Göttingen, Abteilung Informatik.
Schoorl, J.J. and S. Belder. 1990. Computational linguistics at Delft: A status report. Report WTM/TT 90-09, Delft University of Technology, Applied Linguistics Unit.
Shann, P. 1991. Experiments with GLR and chart parsing. In Tomita (1991), chapter 2, pages 17-34.
Sheil, B.A. 1976. Observations on context-free parsing. Statistical Methods in Linguistics, pages 71-109.
Sippu, S. and E. Soisalon-Soininen. 1990. Parsing Theory, Vol. II: LR(k) and LL(k) Parsing. Springer-Verlag.
Tomita, M. 1986. Efficient Parsing for Natural Language. Kluwer Academic Publishers.
Tomita, M., editor. 1991. Generalized LR Parsing. Kluwer Academic Publishers.
van Wijngaarden, A. et al. 1975. Revised report on the algorithmic language ALGOL 68. Acta Informatica, 5:1-236.
Villemonte de la Clergerie, E. 1993. Automates à Piles et Programmation Dynamique -- DyALog: Une application à la Programmation en Logique. Ph.D. thesis, Université Paris VII.
Vosse, T.G. 1994. The Word Connection. Ph.D. thesis, University of Leiden.
246
Magic for Filter Optimization in Dynamic Bottom-up Processing Guido Minnen* SFB 340, University of Tfibingen Kleine Wilhelmstrafle. 113 D-72074 Tiibingen, Germany e-mail : minnen~sf s. nphil, uni-tuebingen, de Abstract Off-line compilation of logic grammars us- ing Magic allows an incorporation of fil- tering into the logic underlying the gram- mar. The explicit definite clause charac- terization of filtering resulting from Magic compilation allows processor independent and logically clean optimizations of dy- namic bottom-up processing with respect to goal-directedness. Two filter optimizations based on the program transformation tech- nique of Unfolding are discussed which are of practical and theoretical interest. 1 Introduction In natural language processing filtering is used to weed out those search paths that are redundant, i.e., are not going to be used in the proof tree corre- sponding to the natural language expression to be generated or parsed. Filter optimization often com- prises an extension of a specific processing strategy such that it exploits specific knowledge about gram- mars and/or the computational task(s) that one is using them for. At the same time it often remains unclear how these optimizations relate to each other and what they actually mean. In this paper I show how starting from a definite clause characterization of filtering derived automatically from a logic gram- mar using Magic compilation, filter optimizations can be performed in a processor independent and logically clean fashion. Magic (templates) is a general compilation tech- nique for efficient bottom-up evaluation of logic pro- grams developed in the deductive database commu- nity (Ramakrishnan et al., 1992). Given a logic pro- gram, Magic produces a new program in which the filtering as normally resulting from top-down eval- uation is explicitly characterized through, so-called, *url: http://www.sfs.nphil.uni-tuebingen/'minnen magic predicates, which produce variable bindings for filtering when evaluated bottom-up. The origi- nal rules of the program are extended such that these bindings can be made effective. As a result of the definite clause characterization of filtering, Magic brings filtering into the logic un- derlying the grammar. I discuss two filter optimiza- tions. These optimizations are direction indepen- dent in the sense that they are useful for both gen- eration and parsing. For expository reasons, though, they are presented merely on the basis of examples of generation. Magic compilation does not limit the informa- tion that can be used for filtering. This can lead to nontermination as the tree fragments enumer- ated in bottom-up evaluation of magic compiled grammars are connected (Johnson, forthcoming). More specifically, 'magic generation' falls prey to non-termination in the face of head recursion, i.e., the generation analog of left recursion in parsing. This necessitates a dynamic processing strategy, i.e., memoization, extended with an abstraction function like, e.g., restriction (Shieber, 1985), to weaken fil- tering and a subsumption check to discard redun- dant results. It is shown that for a large class of grammars the subsumption check which often influ- ences processing efficiency rather dramatically can be eliminated through fine-tuning of the magic pred- icates derived for a particular grammar after apply- ing an abstraction function in an off-line fashion. Unfolding can be used to eliminate superfluous fil- tering steps. 
Given an off-line optimization of the order in which the right-hand side categories in the rules of a logic grammar are processed (Minnen et al., 1996) the resulting processing behavior can be considered a generalization of the head corner gen- eration approach (Shieber et al., 1990): Without the need to rely on notions such as semantic head and chain rule, a head corner behavior can be mimicked in a strict bottom-up fashion. 247 2 Definite Clause Characterization of Filtering Many approaches focus on exploiting specific knowl- edge about grammars and/or the computational task(s) that one is using them for by making filter- ing explicit and extending the processing strategy such that this information can be made effective. In generation, examples of such extended process- ing strategies are head corner generation with its semantic linking (Shieber et al., 1990) or bottom-up (Earley) generation with a semantic filter (Shieber, 1988). Even though these approaches often accom- plish considerable improvements with respect to ef- ficiency or termination behavior, it remains unclear how these optimizations relate to each other and what comprises the logic behind these specialized forms of filtering. By bringing filtering into the logic underlying the grammar it is possible to show in a perspicuous and logically clean way how and why fil- tering can be optimized in a particular fashion and how various approaches relate to each other. 2.1 Magic Compilation Magic makes filtering explicit through characterizing it as definite clauses. Intuitively understood, filter- ing is reversed as binding information that normally becomes available as a result of top-down evaluation is derived by bottom-up evaluation of the definite clause characterization of filtering. The following is the basic Magic algorithm taken from Ramakrishnan et al. (1992). Let P be a program and q(E) a query on the program. We construct a new program ping. Initially ping is empty. 1. Create a new predicate magic_p for each predicate p in P. The arity is that of p. 2. For each rule in P, add the modified version of the rule to p-~9. If rule r has head, say, p({), the modified ver- sion is obtained by adding the literal magic_p(t) to the body. 3. For each rule r in P with head, say, p({), and for each literal q~(~) in its body, add a magic rule to ping. The head is magic_qi(~). The body con- tains the literal magic_p(t), and all the literals that precede qi in the rule. 4. Create a seed fact magic_q(5) from the query. To illustrate the algorithm I zoom in on the applica- tion of the above algorithm to one particular gram- mar rule. Suppose the original grammar rule looks as follows: s (P0, P, VForm, SSem) : - vp(Pl ,P,VForm, [CSem] ,SSem), np (P0,PI, CSem). Step 2 of the algorithm results in the following mod- ified version of the original grammar rule: s (P0, P,VForm, SSem) : - magic_s (P0,P,VForm, SSem) , vp(Pl ,P ,VForm, [CSem] , SSem) , np (P0, PI, CSem). A magic literal is added to the right-hand side of the rule which 'guards' the application of the rule. This does not change the semantics of the original grammar as it merely serves as a way to incorpo- rate the relevant bindings derived with the magic predicates to avoid redundant applications of a rule. Corresponding to the first right-hand side literal in the original rule step 3 derives the following magic rule: magic_vp (Pl, P, VForm, [CSem] , SSem) : - magic_s (P0, P, VForm, SSem) . 
It is used to derive from the guard for the original rule a guard for the rules defining the first right-hand side literal. The second right-hand side literal in the original rule leads to the following magic rule: magic_up (P0, P1, CSem) : - magi c_s (P0, P, VForm, SSem) , vp(Pl,P,VForm, [CSem] ,SSem) . Finally, step 4 of the algorithm ensures that a seed is created. Assuming that the original rule is defining the start category, the query corresponding to the generation of the s "John buys Mary a book" leads to the following seed: magic_s (P0 ,P, finite ,buys (john, a (book) ,mary) ). The seed constitutes a representation of the initial bindings provided by the query that is used by the magic predicates to derive guards. Note that the creation of the seed can be delayed until run-time, i.e., the grammar does not need to be recompiled for every possible query. 2.2 Example Magic compilation is illustrated on the basis of the simple logic grammar extract in figure 1. This gram- mar has been optimized automatically for generation (Minnen et al., 1996): The right-hand sides of the rules are reordered such that a simple left-to-right evaluation order constitutes the optimal evaluation order. With this grammar a simple top-down gen- eration strategy does not terminate as a result of the head recursion in rule 3. It is necessary to use 248 (1) sentence(P0,P,decl(SSem)):- s(P0,P,finite,SSem). (2) s(P0,P,VForm,SSem):- vp(P1,P,VForm,[CSem],SSem). np(P0,PI,CSem), (3) vp(P0,P,VForm,Args,SSem):- vp(PO,Pl,VForm,[CSemIArgs],SSem), np(Pl,P,CSem). (4) vp(PO,P,VForm,Args,SSem):- v(PO,P,VForm,Args,SSem). (5) np(P0,P,NPSem) :- pn (P0, P, NPSem) (6) np(P0,P,NPSem) :- det (P0 ,PI ,NSem, NPSem), n(Pl ,P, NSem). (7) det ( [alP], P,NSem, a (NSem)). (8) v( [buyslP] ,P, finite, [I ,D,S] ,buys (S ,D, I) ). (9) pn([mary[P] ,P,mary) (10)n ( [bookIP] ,P,book). Figure 1: Simple head-recursive grammar. memoization extended with an abstraction function and a subsumption check. Strict bottom-up gener- ation is not attractive either as it is extremely in- efficient: One is forced to generate all possible nat- ural language expressions licensed by the grammar and subsequently check them against the start cate- gory. It is possible to make the process more efficient through excluding specific lexical entries with a se- mantic filter. The use of such a semantic filter in bottom-up evaluation requires the grammar to obey the semantic monotonicity constraint in order to en- sure completeness(Shieber, 1988) (see below). The 'magic-compiled grammar' in figure 2 is the result of applying the algorithm in the previous sec- tion to the head-recursive example grammar and subsequently performing two optimizations (Beeri and Ramakrishnan, 1991): All (calls to) magic pred- icates corresponding to lexical entries are removed. Furthermore, data-flow analysis is used to fine-tune the magic predicates for the specific processing task at hand, i.e., generation3 Given a user-specified abstract query, i.e., a specification of the intended input (Beeri and Ramakrishnan, 1991) those argu- ments which are not bound and which therefore serve no filtering purpose are removed. The modi- fied versions of the original rules in the grammar are adapted accordingly. The effect of taking data-flow into account can be observed by comparing the rules for mag±c_vp and mag±c_np in the previous section with rule 12 and 14 in figure 2, respectively. Figure 3 shows the results from generation of the sentence "John buys Mary a book". 
In the case of this example the seed looks as follows: magic_sentence (decl (buys (john, a (book) ,mary) ) ). The ]acts, i.e., passive edges/items, in figure 3 re- sulted from semi-naive bottom-up evaluation (Ra- IFor expository reasons some data-flow information that does restrict processing is not taken into account. E.g., the fact that the vp literal in rule 2 is always called with a one-element list is ignored here, but see section 3.1. makrishnan et al., 1992) which constitutes a dy- namic bottom-up evaluation, where repeated deriva- tion of facts from the same earlier derived facts (as in naive evaluation; Bancilhon, 1985) is blocked. (Ac- tive edges are not memoized.) The figure 2 consist of two tree structures (connected through dotted lines) of which the left one corresponds to the filtering part of the derivation. The filtering tree is reversed and derives magic facts starting from the seed in a bottom-up fashion. The tree on the right is the proof tree for the example sentence which is built up as a result of unifying in the derived magic facts when applying a particular rule. E.g., in order to derive fact 13, magic fact 2 is unified with the magic literal in the modified version of rule 2 (in addition to the facts 12 and 10). This, however, is not represented in order to keep the figure clear. Dotted lines are used to represent when 'normal' facts are combined with magic facts to derive new magic facts. As can be reconstructed from the numbering of the facts in figure 3 the resulting processing behav- ior is identical to the behavior that would result from Earley generation as in Gerdemann (1991) ex- cept that the different filtering steps are performed in a bottom-up fashion. In order to obtain a gen- erator similar to the bottom-up generator as de- scribed in Shieber (1988) the compilation process can be modified such that only lexical entries are extended with magic literals. Just like in case of Shieber's bottom-up generator, bottom-up evalua- tion of magic-compiled grammars produced with this Magic variant is only guaranteed to be complete in case the original grammar obeys the semantic mono- tonicity constraint. ~The numbering of the facts corresponds to the order in which they are derived. A number of lexical entries have been added to the example grammar. The facts cor- responding to lexical entries are ignored. For expository reasons the phonology and semantics of lexical entries (except for vs) are abbreviated by the first letter. Fur- thermore the fact corresponding to the vp "buys Mary a book John" is not included. 249 (1) sentence (P0 ,P ,decl (SSem)) : - magic_sentence (decl (SSem)), s (P0, P, finite, SSem). (2) s(P0,P,VForm,SSem) :- magic_s (VForm, SSem), vp(P1 ,P,VForm, [CSem] ,SSem), np (P0 ,PI, CSem). (3) vp(P0,P,VForm,hrgs,SSem) :- magic_vp (VForm, SSem), vp(P0,PI ,VForm, [CSem]hrgs] ,SSem), np (Pl, P, CSem). (4) vp(PO,P,VForm,Args,SSem) :- magic_vp (VForm, SSem), v (P0,P,VForm, Args, SSem) . (5) np(P0,P,NPSem) :- magic_np (NPSem) , pn (P0, P, NPSem). (6) np(P0,P,NPSem) :- magic_np (NPSem), det (P0 ,PI ,NSem,NPSem), n (PI,P,NSem). (7) det ( [aiP] , P, NSem, a (NSem)) . (8) v ([buyslP] ,P,finite, [I,D,S] ,buys (S,D,I)). (9) pn([mary[P] ,P,mary) (i0) n( [booklP] ,P,book). (I I) magic_s (finite, SSem) : - magic_sentence (decl (SSem)) . (12) magic_vp(VForm,SSem) :- magic_s (VForm, SSem) . (13) magic_vp(VForm,SSem) :- magic_vp (VForm, SSem). (14) magic_np(CSem) :- magic_s (VForm, SSem) , vp(Pl ,P,VForm, [CSem] ,SSem). 
(15) magic_np(CSem) :- magic_vp (VForm, SSem) , vp (P0,Pl,VForm, [CSemlArgs] , SSem) . Figure 2: Magic compiled version 1 of the grammar in figure 1. 'FILTERING TREE' 'PROOF TREE' 11 magic.rip(j) \ • " ~ " " . • • • • • 8.magic-n~" " • li.sentence(~,buys,m,a,b[A],A,decl(buys(j,a(b),m))). ~-m~'magic-vp(fir*it~,buys(j,a(b),mi)." " " • , 13.s(~,buys,m,a,blA],A,finite,buys(j,a(b),m)). \ 3.maglc-*vp(finite,buys (j,a(b),ml)."" "~],A,finite,[jl,buys(j,a(b),m)). 2 rn gic (finite,b~s (j,a(b),rn)).. / / " • "~.vi(,buy.s:m,Ai,A tinct ,[.~.~a(b),m)). I 12 np([jlA ] Aj) 4 vp([buyslA ] A finlte,[m,a(b) 3] buys(j a(b) m)) 6 np([mIA],A m) 9 np([a blA ] A a(b)) 1.magic-sentence(decl(buys(j,a(b),m))). Figure 3: 'Connecting up' facts resulting from semi-naive generation of the sentence "John buys Mary a book" with the magic-compiled grammar from figure 2. 250 3 Filter Optimization through Program Transformation As a result of characterizing filtering by a definite clause representation Magic brings filtering inside of the logic underlying the grammar. This allows it to be optimized in a processor independent and logi- cally clean fashion. I discuss two possible filter opti- mizations based on a program transformation tech- nique called unfolding (Tamaki and Sato, 1984) also referred to as partial execution, e.g., in Pereira and Shieber (1987). 3.1 Subsumption Checking Just like top-down evaluation of the original gram- mar bottom-up evaluation of its magic compiled ver- sion falls prey to non-termination in the face of head recursion. It is however possible to eliminate the subsumption check through fine-tuning the magic predicates derived for a particular grammar in an off-line fashion. In order to illustrate how the magic predicates can be adapted such that the subsump- tion check can be eliminated it is necessary to take a closer look at the relation between the magic pred- icates and the facts they derive. In figure 4 the re- lation between the magic predicates for the example grammar is represented by an unfolding tree (Pet- torossi and Proietti, 1994). This, however, is not an ordinary unfolding tree as it is constructed on the basis of an abstract seed, i.e., a seed adorned with a specification of which arguments are to be con- sidered bound. Note that an abstract seed can be derived from the user-specified abstract query. Only the magic part of the abstract unfolding tree is rep- resented. ABSTRACT SEED L ...4- magie_sentenee(SSem),... ...4-- magic_s finite,SSem),... -.4- magic_vp (VForm,SSem),... ...+-- magic_np(CSem),... Figure 4: Abstract unfolding tree representing the relation between the magic predicates in the compiled grammar. The abstract unfolding tree in figure 4 clearly shows why there exists the need for subsumption checking: Rule 13 in figure 2 produces infinitely many magic_vp facts. This 'cyclic' magic rule is de- rived from the head-recursive vp rule in the example grammar. There is however no reason to keep this rule in the magic-compiled grammar. It influences neither the efficiency of processing with the gram- mar nor the completeness of the evaluation process. 3.1.1 Off-line Abstraction Finding these types of cycles in the magic part of the compiled grammar is in general undecidable. It is possible though to 'trim' the magic predicates by applying an abstraction function. As a result of the explicit representation of filtering we do not need to postpone abstraction until run-time, but can trim the magic predicates off-line. 
One can consider this as bringing abstraction into the logic as the definite clause representation of filtering is weakened such that only a mild form of connectedness results which does not affect completeness (Shieber, 1985). Con- sider the following magic rule: magic_vp(VForm, [CgemlArgs] , SSem) :- magic_vp (VForm, Args, SSem) . This is the rule that is derived from the head- recursive vp rule when the partially specified sub- categorization list is considered as filtering informa- tion (cf., fn. 1). The rule builds up infinitely large subcategorization lists of which eventually only one is to be matched against the subcategorization list of, e.g., the lexical entry for "buys". Though this rule is not cyclic, it becomes cyclic upon off-line ab- straction: magic_vp (VForm, [CSem I_3 , SSem) : - magic_vp (VForm, [CSem2l_] , SSem) . Through trimming this magic rule, e.g., given a bounded term depth (Sato and Tamaki, 1984) or a restrictor (Shieber, 1985), constructing an abstract unfolding tree reveals the fact that a cycle results from the magic rule. This information can then be used to discard the culprit. 3.1.2 Indexing Removing the direct or indirect cycles from the magic part of the compiled grammar does eliminate the necessity of subsumption checking in many cases. However, consider the magic rules 14 and 15 in fig- ure 2. Rule 15 is more general than rule 14. Without subsumption checking this leads to spurious ambigu- ity: Both rules produce a magic fact with which a subject np can be built. A possible solution to this problem is to couple magic rules with the modified version of the original grammar rule that instigated it. To accomplish this I propose a technique that can be considered the off-line variant of an index- 251 ing technique described in Gerdemann (1991). 3 The indexing technique is illustrated on the basis of the running example: Rule 14 in figure 1 is coupled to the modified version of the original s rule that insti- gated it, i.e., rule 2. Both rules receive an index: s (PO, P, VForm, SSem) : - magic _s (P0, P, VForm, SSem), vp(P1 ,P,VForm, [CSem], SSem), np (P0,P1 ,CSem, index_l). magic_rip (CSem, index_l) : - magi c_s (P0, P, VForm, SSem), vp (P1, P, VForm, [CSem], SSem). The modified versions of the rules defining nps are adapted such that they percolate up the index of the guarding magic fact that licensed its application. This is illustrated on the basis of the adapted version of rule 14: np (P0, P, NPSem, INDEX) : - magic_rip (NPSem, INDEX), pn (P0, P, NPSem). As is illustrated in section 3.3 this allows the avoid- ance of spurious ambiguities in the absence of sub- sumption check in case of the example grammar. 3.2 Redundant Filtering Steps Unfolding can also be used to collapse filtering steps. As becomes apparent upon closer investigation of the abstract unfolding tree in figure 4 the magic predi- cates magic_sentence, magic_s and magic_vp pro- vide virtually identical variable bindings to guard bottom-up application of the modified versions of the original grammar rules. Unfolding can be used to reduce the number of magic facts that are produced during processing. E.g., in figure 2 the magic_s rule: magic_s (finite, SSem) : - magic_sentence (decl (SSem)) . can be eliminated by unfolding the magic_s literal in the modified s rule: s(PO,P,VFOP~,SSem):- magic_s(VFORM,SSem), vp(P1,P,VF01~,,[CSem],SSem), np(P0,P1,CSem). 
This results in the following new rule which uses the seed for filtering directly without the need for an intermediate filtering step: 3This technique resembles an extension of Magic called Counting (Beeri and Ramakrishnan, 1991). How- ever, Counting is more refined as it allows to distinguish between different levels of recursion and serves entirely different purposes. s(P0,P,finite,SSem):- magic_sentence(decl(SSem)), vp(P1,P,finite,[CSem],SSem), np(P0,P1,CSem). Note that the unfolding of the magic_s literal leads to the instantiation of the argument VFORM to finite. As a result of the fact that there are no other magic_s literals in the remainder of the magic-compiled grammar the magic_s rule can be discarded. This filter optimization is reminiscent of comput- ing the deterministic closure over the magic part of a compiled grammar (DSrre, 1993) at compile time. Performing this optimization throughout the magic part of the grammar in figure 2 not only leads to a more succinct grammar, but brings about a different processing behavior. Generation with the resulting grammar can be compared best with head corner generation (Shieber et al., 1990) (see next section). 3.3 Example After cycle removal, incorporating relevant indexing and the collapsing of redundant magic predicates the magic-compiled grammar from figure 2 looks as dis- played in figure 5. Figure 6 shows the chart resulting from generation of the sentence "John buys Mary a book" .4 The seed is identical to the one used for the example in the previous section. The facts in the chart resulted from not-so-naive bottom-up evalu- ation: semi-naive evaluation without subsumption checking (Ramakrishnan et al., 1992). The result- ing processing behavior is similar to the behavior that would result from head corner generation ex- cept that the different filtering steps are performed in a bottom-up fashion. The head corner approach jumps top-down from pivot to pivot in order to sat- isfy its assumptions concerning the flow of seman- tic information, i.e., semantic chaining, and subse- quently generates starting from the semantic head in a bottom-up fashion. In the example, the seed is used without any delay to apply the base case of the vp-procedure, thereby jumping over all intermediate chain and non-chain rules. In this respect the initial reordering of rule 2 which led to rule 2 in the final grammar in figure 5 is crucial (see section 4). 4 Dependency Constraint on Grammar To which extent it is useful to collapse magic predi- cates using unfolding depends on whether the gram- mar has been optimized through reordering the 4In addition to the conventions already described re- garding figure 3, indices are abbreviated. 252 (i) sentence(P0,P,decl(SSem)):- magic_sentence(dec1(SSem)), s(P0,P,finite,SSem). (2) s(P0,P,finite,SSem):- magic_sentence(decl(SSem)), vp(Pl,P,finite,[CSem],SSem), np(P0,PI,CSem, index_l). (3) vp(P0,P,finite,Args,SSem):- magic_sentence(decl(SSem)), vp(P0,Pl,finite,[CSem)Args],SSem), np(Pi,P,CSem,index_2), (4) vp(P0,P,finite,Args,SSem):- magic_sentence(decl(SSem)), v(P0,P,finite,Args,SSem). (5) np(P0,P,NPSem, INDEX):- magic_np(NPSem, INDEX), pn(P0,P,NPSem). (6) np(P0,P,NPSem,INDEX) :- magic_up (NPSem, INDEX), det (P0,PI ,NSem,NPSem), n(Pl ,P,NSem). (7) det([aIP],P,NSem,a(NSem)). (8) v([buyslP],P,finite, [I,D,S] ,buys(S,D,I)). (9) pn ( [marylP], P ,mary) (10) n([booklP] ,P,book). (14) magic_np(CSem, index_l) :- magic_sentence (decl (SSem)), vp (PI,P, finite, [CSem], SSem). 
(15) magic_np (CSem, index_2) : - magic_sentence (decl (SSem)), vp (P0,PI, finite, [CSemlArgs], SSem). 11.magic_np(j,i_l). ° 6.magic.np(a(b i ,i..2). " _2) Figure 5: Magic compiled version 2 of the grammar in figure 1. lS.sentence(~,buys,m,a,bIA],A,decl(buys(j,a(b),m))). I .... 13.s([j,buys,m,a,blA],A,finite,buys(j,a(b),m)). • . . , . . . " , . , . , , , • ~,A,finite,[j],buys(j,a(b),m)). ll.nP(~mA],Aj,iA). 2.vp([buyslA],A,finite,[m,a(b)j],buys(j,a(b),m)). 4.np([mlA],A,m,i-2). 7.np([a,bIA],A,a(b),i-2 ). 1.magic_sentence(decl(buys(j,a(b),m))). Figure 6: 'Connecting up' facts resulting from not-so-naive generation of the sentence "John buys Mary a book" with the magic-compiled grammar from figure 5. right-hand sides of the rules in the grammar as dis- cussed in section 3.3. If the s rule in the running example is not optimized, the resulting processing behavior would not have fallen out so nicely: In this case it leads either to an intermediate filtering step for the non-chaining sentence rule or to the addi- tion of the literal corresponding to the subject np to all chain and non-chain rules along the path to the semantic head. Even when cycles are removed from the magic part of a compiled grammar and indexing is used to avoid spurious ambiguities as discussed in the previous sec- tion, subsumption checking can not always be elim- inated. The grammar must be finitely ambiguous, i.e., fulfill the off-line parsability constraint (Shieber, 1989). Furthermore, the grammar is required to obey what I refer to as the dependency constraint: When a particular right-hand side literal can not be evaluated deterministically, the results of its evalu- ation must uniquely determine the remainder of the right-hand side of the rule in which it appears. Fig- ure 7 gives a schematic example of a grammar that does not obey the dependency constraint. Given (1) cat_l(...):- magic_cat_l(Filter), cat_2(Filter,Dependency .... ), cat_3(Dependency). (2) magic_cat_3(Filter):- magic_cat_l(Filter), cat_2(Filter,Dependency,...). (3) cat_2(property_l,property_2 .... ). (4) cat_2(property_l,property_2 .... ). Figure 7: Abstract example grammar not obeying the dependency constraint. 253 a derived fact or seed magic_cat_l(property_l) bottom-up evaluation of the abstract grammar in figure 7 leads to spurious ambiguity. There are two possible solutions for cat_2 as a result of the fact that the filtering resulting from the magic literal in rule 1 is too unspecific. This is not problematic as long as this nondeterminism will eventually disap- pear, e.g., by combining these solutions with the so- lutions to cat_3. The problem arises as a result of the fact that these solutions lead to identical filters for the evaluation of the cat_~ literal, i.e., the solu- tions to cat_2 do not uniquely determine cat_3. Also with respect to the dependency constraint an optimization of the rules in the grammar is impor- tant. Through reordering the right-hand sides of the rules in the grammar the amount of nondeterminism can be drastically reduced as shown in Minnen et al. (1996). This way of following the intended semantic dependencies the dependency constraint is satisfied automatically for a large class of grammars. 5 Concluding Remarks Magic evaluation constitutes an interesting combi- nation of the advantages of top-down and bottom- up evaluation. It allows bottom-up filtering that achieves a goai-directedness which corresponds to dynamic top-down evaluation with abstraction and subsumption checking. 
For a large class of grammars in effect identical operations can be performed off- line thereby allowing for more efficient processing. Furthermore, it enables a reduction of the number of edges that need to be stored through unfolding magic predicates. 6 Acknowledgments The presented research was sponsored by Teilprojekt B4 "From Constraints to Rules: Efficient Compila- tion of HPSG Grammars" of the Sonderforschungs- bereich 340 of the Deutsche Forschungsgemeinschaft. The author wishes to thank Dale Gerdemann, Mark Johnson, Thilo G6tz and the anonymous reviewers for valuable comments and discussion. Of course, the author is responsible for all remaining errors. References Francois Bancilhon. 1985. Naive Evaluation of Re- cursively Defined Relations. In Brodie and My- lopoulos, editors, On Knowledge Base Manage- ment Systems - Integrating Database and AI Sys- tems. Springer-Verlag. Catriel Beeri and Raghu Ramakrishnan. 1991. On the Power of Magic. Journal of Logic Program- ming 10. Jochen DSrre. 1993. Generalizing Earley De- duction for Constraint-based Grammars. DSrre and Dorna, editors, Computational Aspects of Constraint-Based Linguistic Description I, DYANA-2, Deliverable R1.2.A. Dale Gerdemann. 1991. Parsing and Generation of Unification Grammars. Ph.D. thesis, University of Illinois, USA. Mark Johnson. forthcoming. Constraint-based Nat- ural Language Parsing. Brown University, Rich- mond, USA. Draft of 6 August 1995. Guido Minnen, Dale Gerdemann, and Erhard Hin- richs. 1996. Direct Automated Inversion of Logic Grammars. New Generation Computing 14. Fernando Pereira and Stuart Shieber. 1987. Pro- log and Natui'al Language Analysis. CSLI Lecture Notes, No. 10. Center for the Study of Language and Information, Chicago, USA. Alberto Pettorossi and Maurizio Proietti. 1994. Transformations of Logic Programs: Foundations and Techniques. Journal of Logic Programming 19/2o. Raghu Ramakrishnan, Divesh Srivastava, and S. Su- darshan. 1992. Efficient Bottom-up Evaluation of Logic Programs. In Vandewalle, editor, The State of the Art in Computer Systems and Software En- gineering. Kluwer Academic Publishers. Taisuke Sato and Hisao Tamaki. 1984. Enumeration of Success Patterns in Logic Programs. Theoreti- cal Computer Sience 34. Stuart Shieber, Gertjan van Noord, Robert Moore, and Fernando Pereira. 1990. Semantic Head- driven Generation. Computational Linguistics 16. Stuart Shieber. 1985. Using Restriction to Extend Parsing Algorithms for Complex Feature-based Formalisms. In Proceedings of the 23rd Annual Meeting Association for Computational Linguis- tics, Chicago, USA. Stuart Shieber. 1988. A Uniform Architecture for Parsing and Generation. In Proceedings of the 12th Conference on Computational Linguis- tics, Budapest, Hungary. Stuart Shieber. 1989. Parsing and Type Inference for Natural and Computer Languages. Ph.D. the- sis, Stanford University, USA. Hisao Tamaki and Taisuke Sato. 1984. Unfold/Fold Transformation of Logic Programs. In Proceed- ings of the 2nd International Conference on Logic Programming, Uppsala, Sweden. 254 | 1996 | 33 |
Efficient Transformation-Based Parsing Giorgio Satta Eric Brill Dipartimento di Elettronica ed Informatica Department of Computer Science Universit£ di Padova Johns Hopkins University via Gradenigo, 6/A Baltimore, MD 21218-2694 2-35131 Padova, Italy brill©cs, jhu. edu satta@dei, unipd, it Abstract In transformation-based parsing, a finite sequence of tree rewriting rules are checked for application to an input structure. Since in practice only a small percentage of rules are applied to any particular structure, the naive parsing algorithm is rather ineffi- cient. We exploit this sparseness in rule applications to derive an algorithm two to three orders of magnitude faster than the standard parsing algorithm. 1 Introduction The idea of using transformational rules in natu- ral language analysis dates back at least to Chore- sky, who attempted to define a set of transfor- mations that would apply to a word sequence to map it from deep structure to surface structure (see (Chomsky, 1965)). Transformations have also been used in much of generative phonology to cap- ture contextual variants in pronunciation, start- ing with (Chomsky and Halle, 1968). More re- cently, transformations have been applied to a di- verse set of problems, including part of speech tagging, pronunciation network creation, preposi- tional phrase attachment disambiguation, and pars- ing, under the paradigm of transformation-based error-driven learning (see (Brill, 1993; Brill, 1995) and (Brill and Resnik, 1994)). In this paradigm, rules can be learned automatically from a training corpus, instead of being written by hand. Transformation-based systems are typically deter- ministic. Each rule in an ordered list of rules is ap- plied once wherever it can apply, then is discarded, and the next rule is processed until the last rule in the list has been processed. Since for each rule the application algorithm must check for a matching at all possible sites to see whether the rule can apply, these systems run in O(rrpn) time, where 7r is the number of rules, p is the cost of a single rule match- ing, and n is the size of the input structure. While this results in fast processing, it is possible to create much faster systems. In (Roche and Schabes, 1995), a method is described for converting a list of trans- formations that operates on strings into a determin- istic finite state transducer, resulting in an optimal tagger in the sense that tagging requires only one state transition per word, giving a linear time tag- ger whose run-time is independent of the number and size of rules. In this paper we consider transformation-based parsing, introduced in (Brill, 1993), and we im- prove upon the O(Trpn) time upper bound.. In transformation-based parsing, an ordered sequence of tree-rewriting rules (tree transformations) are ap- plied to an initial parse structure for an input sen- tence, to derive the final parse structure. We observe that in most transformation-based parsers, only a small percentage of rules are actually applied, for any particular input sentence. For example, in an application of the transformation-based parser de- scribed in (Brill, 1993), 7r = 300 rules were learned, to be applied at each node of the initial parse struc- ture, but the average number of rules that are suc- cessfully applied at each node is only about one. So a lot of time is spent testing whether the conditions are met for applying a transformation and finding out that they are not met. 
This paper presents an original algorithm for transformation-based parsing working in O(ptlog(t)) time, where t is the total number of rules applied for an input sentence. Since in practical cases t is smaller than n and we can neglect the log(n) factor, we have achieved a time improvement of a factor of r. We emphasize that rr can be several hundreds large in actual systems where transformations are lexicalized. Our result is achieved by preprocessing the trans- formation list, deriving a finite state, determiflistic tree automaton. The algorithm then exploits the au- tomaton in a way that obviates the need for checking the conditions of a rule when that rule will not apply, thereby greatly improving parsing run-time over the straightforward parsing algorithm. In a sense, our algorithm spends time only with rules that can be applied, as if it knew in advance which rules cannot be applied during the parsing process. The remainder of this paper is organized as fol- 255 lows. In Section 2 we introduce some preliminaries, and in Section 3 we provide a representation of trans- formations that uses finite state, deterministic tree automata. Our algorithm is then specified in Sec- tion 4. Finally, in Section 5 we discuss related work in the existing literature. 2 Preliminaries We review in the following subsections some termi- nology that is used throughout this paper. 2.1 Trees We consider ordered trees whose nodes are assigned labels over some finite alphabet E; this set is denoted as ET. Let T E S T. A node of T is called leftmost if it does not have any left sibling ( a root node is a leftmost node). The height of T is the length of a longest path from the root to one of its leaves (a tree composed of a single node has height zero). We define I TI as the number of nodes in T. A tree T E y]T is denoted as A if it consists of a single leaf node labeled by A, and as A(T1,T2,... ,Ta), d >_ 1, if T has root labeled by A with d (ordered) children denoted by T1,...,Td. Sometimes in the examples we draw trees in the usual way, indicating each node with its label. What follows is standard terminology from the tree pattern matching literature, with the simplifi- cation that we do not use variable terms. See (Hoff- mann and O'Donnell, 1982) for general definitions. Let n be a node of T. We say that a tree S matches T at n if there exists a one-to-one mapping from the nodes of S to the nodes of T, such that the follow- ing conditions are all satisfied: (i) if n' maps to n", then n ~ and n I~ have the same label; (ii) the root of S maps to n; and (iii) if n ~ maps to n" and n ~ is not a leaf in S, then n ~ and n" have the same degree and the i-th child of n ~ maps to the i-th child of n% We say that T and S are equivalent if they match each other at the respective root nodes. In what follows trees that are equivalent are not treated as the same object. We say that a tree T' is a subtree of T at n if there exists a tree S that matches T at n, and T ~ consists of the nodes of T that are matched by some node of S and the arcs of T between two such nodes. We also say that T' is matched by S at n. In addition, T' is a prefix of T if n is the root of T; T' is the suffix of T at n if T' contains all nodes of T dominated by n. Example 1 Let T -- B(D, C(B(D, B), C)) and let n be the second child of T's root. S -- C(B,C) matches T at n. S' = B(D, C(B), C)) is a prefix orS and S" = C(B(D, B), C) is the suffix of T at n. [] We now introduce a tree replacement operator that will be used throughout the paper. 
Let S be a subtree of T and let S / be a tree having the same number of leaves as S. Let nl, n2, •.., nz and n~,n~,...,n~, 1 > 1, be all the leaves from left to B D C_ B_ E I E B E B C B f D E Figure 1: From left to right, top to bottom: tree T with subtree S indicated using underlined labels at its nodes; tree S' having the same number of leaves as S; tree T[S/S ~] obtained by "replacing" S with S ~. right of S and S', respectively. We write T[S/S'] to denote the tree obtained by embedding S ~ within T in place of S, through the following steps: (i) if the root of S is the i-th child of a node n] in T, the root of S I becomes the i-th child of n] ; and (ii) the (ordered) children of n~ in T, if any, become the children of n~, 1 < i < l. The root of T[S/S ~] is the root of T if node n] above exists, and is the root of S t otherwise. Example 2 Figure 1 depicts trees T, S I and T ~ in this order. A subtree S of T is also indicated using underlined labels at nodes of T. Note that S and S' have the same number of leaves. Then we have T' = T[S/S']. n 2.2 Tree automata Deterministic (bottom-up) tree automata were first introduced in (Thatcher, 1967) (called FRT there). The definition we propose here is a generalization of the canonical one to trees of any degree. Note that the transition function below is computed on a number of states that is independent of the de- gree of the input tree. Deterministic tree automata will be used later to implement the bottom-up tree pattern matching algorithm of (Hoffmann and O'- Donnell, 1982). Definition 1 A deterministic tree automaton (DTA) is a 5-tuple M = (Q, ~, ~, qo, F), where Q is a finite set of s~ates, ~ is a finite alphabet, qo E Q is the initial state, F C Q is the set of final states and 6 is a transition function mapping Q~ × E into O. Informally, a DTA M walks through a tree T by vis- iting its nodes in post-order, one node at a time. Every time a node is read, the current state of the device is computed on the basis of the states 256 reached upon reading the immediate left sibling and the rightmost child of the current node, if any. In this way the decision of the DTA is affected not only by the portion of the tree below the currently read node, but also by each subtree rooted in a left sib- ling of the current node. This is formally stated in what follows. Let T E ~T and let n be one of its nodes, labeled by a. The state reached by M upon reading n is recursively specified as: 6(T,n) = ~(X,X',a), (1) where X -- q0 if n is a leftmost node, X -- 6(T, n') if n' is the immediate left sibling of n; and X' -- q0 if n is a leaf node, X' = 6(T, n") if n" is the rightmost child of n. The tree language recognized by M is the set L(M) = {T [ ~(T, n) E F, T E E T, n the root of T}. (2) Example 3 Consider the infinite set L = {B(A, C), B(A, B(A, C)), B(A, B(A, B(A, C))),...} consisting of all right-branching trees with internal nodes labeled by B and with strings A'~C, n > 1 as their yields. Let M = (Q, {A,B,C}, 6, qo, {qBc}) be a DTA specified as follows: Q = {q0, qA, qnc, q-i}; 6(qo, qo,A) = qA, 6(qA,qo, C) = 5(qA, qBC, B) = qBC and q-i is the value of all other entries of 5. It is not difficult to see that L(M) = L. 1:3 Observe that when we restrict to monadic trees, that is trees whose nodes have degree not greater than one, the above definitions correspond to the well known formalisms of deterministic finite state au- tomata, the associated extended transition function, and the regular languages. 
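As an illustration of Definition 1 and of the recursive function in equation (1), here is a small Python sketch of ours (not from the paper) that runs a DTA over a tree. The transition table is our own reconstruction of an automaton in the spirit of Example 3, accepting the right-branching trees B(A, C), B(A, B(A, C)), and so on, with every unlisted transition going to a dead state.

class Tree:
    def __init__(self, label, *children):
        self.label = label
        self.children = list(children)


def run_dta(tree, delta, q0, dead):
    """State reached at the root, following equation (1): the state at a
    node labelled a is delta(X, X', a), where X is q0 for a leftmost node
    and otherwise the state of the immediate left sibling, and X' is q0
    for a leaf and otherwise the state of the rightmost child."""
    def state(node, left_state):
        child_state = q0
        for child in node.children:              # post-order, left to right
            child_state = state(child, child_state)
        x_prime = child_state if node.children else q0
        return delta.get((left_state, x_prime, node.label), dead)
    return state(tree, q0)


# Our reconstruction of an automaton for the language of Example 3.
delta = {
    ("q0", "q0", "A"):  "qA",    # a leftmost leaf A
    ("qA", "q0", "C"):  "qBC",   # leaf C whose left sibling yielded qA
    ("qA", "qBC", "B"): "qBC",   # an inner B node
    ("q0", "qBC", "B"): "qBC",   # the root B, which has no left sibling
}
final = {"qBC"}

t = Tree("B", Tree("A"), Tree("B", Tree("A"), Tree("C")))   # B(A, B(A, C))
print(run_dta(t, delta, "q0", "q_dead") in final)            # True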
2.3 Transformation-based parsing Transformation-based parsing was first introduced in (Brill, 1993). Informally, a transformation-based parser assigns to an input sentence an initial parse structure, in some uniform way. Then the parser iteratively checks an ordered sequence of tree trans- formations for application to the initial parse tree, in order to derive the final parse structure. This results in a deterministic, linear time parser. In order to present our algorithm, we abstract away from the assignment of the initial parse to the input, and introduce below the notion of transformation- based tree rewriting system. The formulation we give here is inspired by (Kaptan and Kay, 1994) and (Roche and Schabes, 1995). The relationship between transformation-based tree rewriting sys- tems and standard term-rewriting systems will be discussed in the final section. Definition 2 A transformation-based tree rewriting system (TTS) is a pair G = (E,R), where ~ is a finite alphabet and R = (ri,r2,...,r~), 7r >_ 1, is a finite sequence of tree rewriting rules having the form Q --+ Q', with Q, Q' E ~T and such that Q and Q' have the same number of leaves. If r = (Q ~ Q'), we write lhs(r) for Q and rhs(r) for Q'. We also write lhs(R) for {lhs(r) I r E R}. (Recall that we regard lhs(r/) and lhs(rj), i # j, as different objects, even if these trees are equivalent.) We define [r I = Ilhs(r) l + I rhs(r) I. The notion of transformation associated with a TTS G = (E, R) is now introduced. Let C, C' E E T. For any node n of C and any rule r = (Q ~ Q') of G, we write C ~ C' (3) if Q does not match C at n and C = C'; or if Q matches C at n and C' = C[S/Q'], where S is the subtree of T matched by Q at n and Q'c is a fresh copy of Q'. Let <nl,n2,...,ntl, t > 1, be the post- ordered sequence of all nodes of C. We write C ~ C' (4) r,n • if Ci-i ~ Ci, 1 < i <_ t, Co = C and Ct = C'. Finally, we define the translation induced by G on Ea, as the map M(G) = {(C,C') I C E y]T, Ci_I~:~C i for 1 <i< ~r, Co =C, C~ =C'}. 3 Rule representation We develop here a representation of rule sequences that makes use of DTA and that is at the basis of the main result of this paper. Our technique im- proves the preprocessing phase of a bottom-up tree pattern matching algorithm presented in (Hoffmann and O'Donnell, 1982), as it will be discussed in the final section. Let G = (~,R) be a TTS, R = (ri,r2,...,r~). In what follows we construct a DTA that "detects" each subtree of an input tree that is equivalent to some tree in lhs(_R). We need to introduce some additional notation. Let N be the set of all nodes from the trees in lhs(R). Call Nr the set of all root nodes (in N), N,~ the set of all leftmost nodes, Nz the set of all leaf nodes, and Na the set of all nodes labeled by a E ~. For each q E 2 N, let right(q) = {n I n E N, n' E q, n has immediate left sibling n'} and let up(q) = {n [ n E N, n' E q, nhasrightmostchildn'}. Also, let q0 be a fresh symbol. Definition 3 G is associated with a DTA Aa = (2 N U {q0}, E, 6a, qo, F), where F = {q [ q E 2 N, (q f3 Nr) # 0} and 6G is specified as follows: (i) 5a(qo,qo,a) = No M Nm ANt; (it) dia(qo,q',a) = NaANmA(NtUup(q')), forq' # qo; (iii) diG(q, qo, a) = Na A Nz t] (Nr U right(q)), for q qo; (iv) 6a(q, q', a) = No M up(q') A (Nr U right(q)), for q • qo # q'. 257 Observe that each state of Ac simultaneously car- ries over the recognition of several suffixes of trees in lhs(/~). 
These processes are started whenever Ac reads a leftmost node n with the same label as a leftmost leaf node in some tree in lhs(R) (items (i) and (ii) in Definition 3). Note also that we do not require any matching of the left siblings when we match the root of a tree in lhs(R) (items (iii) and (iv)). B B A --~ a B A/'D B A c B -+ c c A A B B B C B A B A B Figure 2: From top to bottom: rules rl, r2 and r3 of G. Example 4 Let G = (E,R), where E = {A, B, C, D} and R = (rl,r2, r3). Rules ri are depicted in Figure 2. We write nij to denote the j-th node • in a post-order enumeration of the nodes of lhs(ri), 1 < i < 3 and 1 < j <__ 5. (Therefore n35 denotes the root node of lhs(r3) and n22 denotes the first child of the second child of the root node of lhs(r~).) If we consider only the useful states, that is those states that can be reached on an actual input, the DTA Ac --- (Q, E, 5, qo, F), is specified as follows: Q = {qi I 0 < i < I1}, where ql = {nll,n12, n22, n32}, q2 = {n21,n3x}, q3 = {n13, n23}, q4 = {n33}, q5 = {n14}, q6 = {n24}, q7 = {n34}, qs = {n15}, q9 -= {n35}, qlo = {n25}, qll = (b; F = {qs, qg, qlo}. The transition function 5, restricted to the useful states, is specified in Figure 3. Note that among the 215 + 1 possible states, only 12 are useful. [] 6(qo,qo,A) = ql 6(qo,qo,C) = q2 6(qa,qo, B) = q3 6(ql,qo,C) = q, 6(ql,qz,B) = qs 6(q2, q3,B) = qs ~(q~,q,, B) = q7 ~(qo, qs, B) = q~ 6(qo,q6,B) = q9 6(qo, qT,B) = qlo Figure 3: Transition function of G. For all (q, q~, a) E Q2× E not indicated above, 5(q, q', a) = qll- Although the number of states of Ac is exponen- tial in IN I, in practical cases most of these states are never reached by the automaton on an actual input, and can therefore be ignored. This happens whenever there are few pairs of suffix trees of trees in lhs(R) that share a common prefix tree but no tree in the pair matches the other at the root node. This is discussed at length in (Hoffmann and O'Don- nell, 1982), where an upper bound on the number of useful states is provided. The following lemma provides a characterization of Aa that will be used later. Lemma 1 Let n be a node ofT E ~T and let n ~ be the roof node of r E R. Tree lhs(r) matches Taf n if and only if n' E iG(T,n). Proof (outline). The statement can be shown by proving the following claim. Let m be a node in T and m t be a node in lhs(r). Call ml,...,m~ = m, k > 1, the ordered sequence of the left siblings of m, with m included, and call m~,..., m' k, -" m', k' > 1, the ordered sequence of the left siblings of m ~, with m' included. If m' ~ Nr, then the two following conditions are equivalent: * m' E iv(T, m); • k = k' and, for 1 < i < k, the suffix of lhs(r) at m~ matches T at mi. The claim can be shown by induction on the posi- tion of m ~ in a post-order enumeration of the nodes of lhs(r). The lemma then follows from the spec- ification of set F and the treatment of set N~ in items (iii) and (iv) in Definition 3. [] We also need a function mapping F x {1..(r + 1)} into {1..r} U {.1_}, specified as (min@ =_1_): next(q,i) = min{j [ i < j < 7r, lhs(rj) has root node in q}. (5) Assume that q E F is reached by AG upon reading a node n (in some tree). In the next section next(q, i) is used to select the index of the rule that should be next applied at node n, after the first i - 1 rules of R have been considered. 4 The algorithm We present a translation algorithm for TTS that can immediately be converted into a transformation- based parsing algorithm. 
We use all definitions in- troduced in the previous sections. To simplify the presentation, we first make the assumption that the order in which we apply several instances of the same rule to a given tree does not affect the outcome. Later we will deal with the general case. 4.1 Order-free case We start with an important property that is used by the algorithm below and that can be easily shown (see also (Hoffmann and O'Donnell, 1982)). Let G = (E, R) be a TTS and let ha be the maximum height 258 of a tree in lhs(R). Given trees T and S, S a subtree of T, we write local(T, S) to denote the set of all nodes of S and the first ha proper ancestors of the root of S' in T (when these nodes are defined). Lemma 2 Assume that lhs(r), r E R, matches a tree T at some node n. Let T ~'~ T' and lel S be the copy of rhs(r) used in the rewriting. For every node n' no~ included in local(T', S), we have ~a(T, n') = Oa(T',n'). [] We precede the specification of the method with an informal presentation. The following three data structures are used. An associative list state asso- ciates each node n of the rewritten input tree with the state reached by Aa upon reading n. If n is no longer a node of the rewritten input tree, state associates n with the emptyset. A set rule(i) is as- sociated with each rule ri, containing some of the nodes of the rewritten input tree at which lhs(ri) matches. A heap data structure H is also used to order the indices of the non-empty sets rule(i) ac- cording to the priority of the associated rules in the rule sequence. All the above data structures are up- dated by a procedure called update. To compute the translation M(G) we first visit the input tree with AG and initialize our data struc- tures in the following way. For each node n, state is assigned a state of AG as specified above. If rule ri must be applied first at n, n is added to rule(i) and H is updated. We then enter a main loop and re- trieve elements from the heap. When i is retrieved, rule ri is considered for application at each node n in rule(i). It is important to observe that, since some rewriting of the input tree might have occurred in between the time n has been inserted in rule(i) and the time i is retrieved from H, it could be that the current rule ri can no longer be applied at n. Information in state is used to detect these cases. Crucial to the efficiency of our algorithm, each time a rule is applied only a small portion of the current tree needs to be reread by AG, in order to update our data structures, as specified by Lemma 2 above. Finally, the main loop is exited when the heap is empty. Algorithml Let G - (~,R) be a TTS, R = (rl,r2,...,r~).and letT E ~ be an input tree. Let Aa = (2 ~ U {q0}, ~, ~a, q0, F) be the DTA as- sociated with G and ~G the reached state function. Let also i be an integer valued variable, state be an associative array, rule(i) be an initially empty set, for 1 < i < ~', and let H be a heap data structure. (n ---+ rule(i) adds n to rule(i); i ---* H inserts i in H; i ~-- H assigns to i the least element in H, ifH is not empty.) The algorithm is specified in Figure 4. [] Example 4 (continued) We describe a run of Al- gorithm 1 working with the sample TTS G = (E, R) previously specified (see Figure 2). 
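Before the sample run below (and the pseudocode given in Figure 4), the main loop of Algorithm 1 can be summarised in Python. This is our own sketch: reach_state, is_final, next_rule, match_root, rewrite and postorder are assumed helpers standing in, respectively, for the reached-state function of the automaton A_G, the final-state test, the function next of equation (5), the root-matching test of Lemma 1, the actual subtree replacement together with the computation of local(C, S'), and a post-order node enumeration.

import heapq

def translate(tree, rules, reach_state, is_final, next_rule,
              match_root, rewrite, postorder):
    """Sketch of the main loop of Algorithm 1 (assumed helpers):
      reach_state(C, n)  -- state reached by A_G at node n of tree C
      is_final(q)        -- q contains the root node of some lhs
      next_rule(q, j)    -- smallest rule index >= j whose lhs root is in q,
                            or None
      match_root(q, i)   -- the root node of lhs(rules[i]) belongs to q
      rewrite(C, n, i)   -- splice a fresh copy of rhs(rules[i]) over the
                            match at n; returns (new tree, nodes of the
                            removed subtree S, nodes of local(C, S'))
      postorder(C)       -- post-ordered node enumeration
    """
    state = {}                                        # node -> state of A_G
    pending = {i: set() for i in range(len(rules))}   # rule index -> nodes
    heap = []                                         # rule indices, by priority

    def update(old_nodes, new_nodes, j):
        for n in old_nodes:
            state[n] = None
        for n in new_nodes:
            q = reach_state(tree, n)
            state[n] = q
            if is_final(q):
                i = next_rule(q, j)
                if i is not None:
                    if not pending[i]:
                        heapq.heappush(heap, i)
                    pending[i].add(n)

    update([], list(postorder(tree)), 0)              # initial visit of T
    while heap:
        i = heapq.heappop(heap)
        for n in list(pending[i]):
            # earlier rewritings may have destroyed this match; the stored
            # state tells us whether lhs(rules[i]) still matches at n
            if state.get(n) is not None and match_root(state[n], i):
                tree, removed, to_reread = rewrite(tree, n, i)
                update(removed, to_reread, i + 1)
        pending[i].clear()
    return tree

Only the small set of nodes returned as local(C, S') is reread after each rewriting, which is what the complexity analysis of Theorem 1 relies on.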
proc update( oldset, newset, j) for each node n E oldset state(n) ~ O for each node n E newset do state(n) ~- gG(C, n) if state(n) • F and next(state(n), j) #.l_ then do if rule(next(state(n), j) ) = O then next(state(n), j) --~ Y n ~ rule(next(state(n), j)) od od main C+--T;i,-1 update(O, nodes of C, i) while H not empty do i~-H for each node n E rule(i) s.t. the root of lhs(ri) is in state(n) do S ~ the subtree of C matched by lhs(ri) at n S I *-- copy of rhs(ri) c ,-- c[s/s'] update(node~ of S, lo~al(C, S'), i + 1) od od return C. Figure 4: Translation algorithm computing M(G) for a TTS G. Let Ci E ~T, 1 < i < 3, be as depicted in Figure 5. We write mij to denote the j-th node in a post- order enumeration of the nodes of Ci, 1 < i < 3 and 1 < j < 7. Assume that CI is the input tree. After the first call to procedure update, we have state(m17) = qz0 = {n25} and state(m16) = qs = {nzh}; no other final state is associated with a node of C1. We also have that rule(l)= {m16}, rule(2) = {m17}, rule(3) = 0 and H contains indices 1 and 2. Index 1 is then retrieved from H and the only node in rule(l), i.e., mr6, is considered. Since the root of lhs(rz), i.e., node n15, belongs to q8, mz~ passes the test in the head of the for-statement in the main program. Then rz is applied to C1, yielding C2. Observe that mll = m21 and m17 -- m27; all the remaining nodes of C2 are fresh nodes. The next call to update, associated with the appli- cation of rl, updates the associative list state in such a way that state(m27) = q9 = {n35}, and no other final state is associated with a node of C2. Also, we now have rule(l) = {m16}, rule(2)= {m27} (recall that m17 = m27), rule(3) = {m27}, and H contains indices 2 and 3. Index 2 is next retrieved from H and node m27 is considered. However, at this point the root of lhs(r2), i.e., node n~5, does no longer belong to state(m27), indicating that r~ is no longer applicable to that node. The body of the for-statement in the 259 B C B A B B C B A C A D B A B A A D Figure 5: From left to right, top to bottom: trees C1, C2 and C3. In the sample TTS G we have (C1, C3) E M(G), since C1 ~=~ C~ ~=~ C2 ~=~ Ca. main program is not executed this time. Finally, index 3 is retrieved from H and node m27 is again considered, this time for the application of rule r3. Since the root of lhs(ra), i.e., node n35, be- longs to state(m27), r3 is applied to C2 at node m27, yielding C3. Data structures are again updated by a call to procedure update with the second param- eter equal to 4. Then state qs is associated with node m37, the root node of C3. Despite of the fact that qs E F, we now have next(qs, 4) = _k. There- fore rule rl is not considered for application to C3. Since H is now empty, the computation terminates returning C3. [] The results in Lemma 1 and Lemma 2 can be used to show that, in the main program, a node n passes the test in the head of the for-statement if and only if lhs(ri) matches C at n. The correctness of Algo- rithm 1 then follows from the definition of the heap data structure. We now turn to computational complexity issues. Let p = maxl<i<_~lril. For T e E T, let alsot(T) be the total number of rules that are successfully applied on a run of Algorithm i on input T, counting repetitions. Theorem 1 The running time of Algorithm 1 on input tree T is 0(I TI + pt(T) log(t(T))). Proof. We can implement our data structures in such a way that each of the primitive access oper- ations that are executed by the algorithm takes a constant amount of time. 
Consider each instance of the membership of a node n in a set rule(i) and represent it as a pair (n, i). We call active each pair (n, i) such that lhs(ri) matches C at n at the time i is retrieved from H. As already mentioned, these pairs pass the test in the head of the for-loop in the main program. The num- ber of active pairs is therefore t(T). All remaining pairs are called dead. Note that an active pair (n, i) can turn at most Ilhs(ri)I+hR active pairs into dead ones, through a call to the procedure update. Hence the total number of dead pairs must be O(pt(T)). We conclude that the number of pairs totally in- stantiated by the algorithm is O(pt(T)). It is easy to see that the number of pairs totMly instantiated by the algorithm is also a bound on the number of indices inserted in or retrieved from the heap. Then the time spent by the algorithm with the heap is O(pt(T) log(t(T))) (see for instance (Cor- men, Leiserson, and Rivest, 1990)). The first cMl to the procedure update in the main program takes time proportional to ]T[. All remaining operations of the algorithm will now be charged to some active pair. For each active pair, the body of the for-loop in the mMn program and the body of the update procedure are executed, taking an amount of time O(p). For each dead pair, only the test in the head of the for- loop is executed, taking a constant amount of time. This time is charged to the active node that turned the pair under consideration into a dead one. In this way each active node is charged an extra amount of time O(p). Every operation executed by the algorithm has been considered in the above analysis. We can then conclude that the running time of Algorithm 1 is O(ITI + pt(T) log(t(T))). 0 Let us compare the above result with the time performance of the standard algorithm for transformation-based parsing. The standard algo- rithm checks each rule in R for application to an initial parse tree T, trying to match the left-hand side of the current rule at each node of T. Using the notation of Theorem 1, the running time is then O(IrplTI). In practical applications, t(T) and ITI are very close (of the order of the length of the in- put string). Therefore we have achieved a time im- provement of a factor of ~r/log(t(T)). We empha- size that ~r might be several hundreds large if the learned transformations are lexicalized. Therefore we have improved the asymptotic time complexity of transformation-based parsing of a factor between two to three orders of magnitude. 4.2 Order-dependent parsing We consider here the general case for the TTS trans- lation problem, in which the order of application of several instances of rule r to a tree can affect the final result of the rewriting. In this case rule r is called critical. According to the definition of translation induced by a TTS, a critical rule should always be applied in post-order w.r.t, the nodes of the tree to be rewritten. The solution we propose here for critical rules is based on a preprocessing of the rule sequence of the system. We informally describe the technique presented below. Assume that a critical rule r is to be applied 260 at several matching nodes of a tree C. We partition the matching nodes into two sets. The first set con- tains all the nodes n at which the matching of lhs(r) overlaps with a second matching at a node n' dom- inated by n. All the remaining matching nodes are inserted in the second set. Then rule r is applied to the nodes of the second set. 
After that, the nodes in the first set are in turn partitioned according to the above criterion, and the process is iterated until all the matching nodes have been considered for ap- plication of r. This is more precisely stated in what follows. B B B c B C B C B C B C Figure 6: From left to right: trees Q and Qp. Node p of Q is indicated by underlying its label. We start with some additional notation. Let r = (Q ~ Q') be a tree-rewriting rule. Also, let p be a node of Q and let S be the suffix of Q at p. We say that p is periodic if (i) p is not the root of Q; and (ii) S matches Q at the root node. It is easy to see that the fact that lhs(r) has some periodic node is a necessary condition for r to be critical. Let the root of S be the i-th child of a node n/ in Q, and let Qc be acopyofQ. We write Qp to denote the tree obtained starting from Q by excising S and by letting the root of Qc be the new i-th child of hi. Finally, call nl the root of Qp and n2 the root of Q. Example 5 Figure 6 depicts trees Q and Qp. The periodic node p of Q under consideration is indicated by underlying its label. [] Let us assume that rule r is critical and that p is the only periodic node in Q. We add Qp to set lhs(R) and construct AG accordingly. Algorithm 1 should then be modified as follows. We call p-chain any sequence of one or more subtrees of C, all matched by Q, that partially overlap in C. Let n be a node of C and let q = state(n). Assume that n2 E q and call S the subtree of C at n matched by Q (S exists by Lemma 1). We distinguish two possible cases. Case 1: If nl E q, then we know that Q also matches some portion of C that overlaps with S (at the node matched by the periodic node p of Q). In this case S belongs to a p-chain consisting of at least two sub- trees and S is not the bottom-most subtree in the p-chain. Case 2: If nt ~ q, then we know that S is the bottom-most subtree in a p-chain. Let i be the index of rule r under consideration. We use an additional set chain(i). Each node n of C such that n~ 6 state(n) is then inserted in chain(i) if state(n) satisfies Case 1 above, and is inserted in rule(i) otherwise. Note that chain(i) is non-empty only in case rule(i) is such. Whenever i is retrieved from H, we process each node n in rule(i), as usual. But when we update our data structures with the procedure update, we also look for match- ings of lhs(ri) at nodes of C in chain(i). The overall effect of this is that each p-chain is considered in a bottom-up fashion in the application of r. This is compatible with the post-order application require- ment. The above technique can be applied for each peri- odic node in a critical rule, and for each critical rule of G. This only affects the size of AG, not the time requirements of Algorithm 1. In fact, the proposed preprocessing can at worst double ha. 5 Discussion In this section we relate our work with the existing literature and further discuss our result. There are several alternative ways in which one could see transformation-based rewriting systems. TTS's are closely related to a class of graph rewr.iting systems called neighbourhood-controlled embedding graph grammars (N CE grammars; see (J anssens and Rozenberg, 1982)). In fact our definition of the relation and of the underlying [/] operator has been inspired by similar definitions in the NCE formal- ism. 
Apart from the restriction to tree rewriting, the main difference between NCE grammars and TTS's is that in the latter formalism the productions are totally ordered, therefore there is no recursion. Ordered trees can also be seen as ground terms. If we extend the alphabet ~ with variable symbols, we can redefine the ~ relation through variable sub- stitution. In this way a TTS becomes a particular kind of term-rewriting system. The idea of imposing a total order on the rules of a term-rewriting system can be found in the literature, but in these cases all rules are reconsidered for application at each step in the rewriting, using their priority (see for in- stance the priority term-rewriting systems (Baeten, Bergstra, and Klop, 1987)). Therefore these systems allow recursion. There are cases in which a critical rule in a TTS does not give rise to order-dependency in rewriting. Methods for deciding the confluency property for a term-rewriting system with critical pairs (see (Dershowitz and Jouannaud, 1990) for def- initions and an overview) can also be used to detect the above cases for TTS. As already pointed out, the translation problem investigated here is closely related with the stan- dard tree pattern matching problem. Our automata AG (Definition 3) can be seen as an abstraction of the bottom-up tree pattern matching algorithm pre- sented in (Hoffmann and O'Donnell, 1982). While that result uses a representation of the pattern set 261 (our set lhs(R)) requiring an amount of space which is exponential in the degree of the pattern trees, as an improvement, our transition function does not de- pend on this parameter. However, in the worst case the space requirements of both algorithm are expo- nential in the number of nodes in lhs(R) (see the analysis in (Hoffmann and O'Donnell, 1982)). As already discussed in Section 3, the worst case condi- tion is hardly met in natural language applications. Polynomial space requirements can be guaranteed if one switches to top-down tree pattern matching algorithms. One such a method is reported in (Hoff- mann and O'Donnell, 1982), but in this case the running-time of Algorithm 1 cannot be maintained. Faster top-down matching algorithms have been re- ported in (Kosaraju, 1989) and (Dubiner, Galil, and Magen, 1994), but these methods seems impractical, due to very large hidden constants. A tree-based extension of the very fast algorithm described in (Roche and Schabes, 1995) is in prin- ciple possible for transformation-based parsing, but is likely to result in huge space requirements and seems impractical. The algorithm presented here might then be a good compromise between fast pars- ing and reasonable space requirements. When restricted to monadic trees, our automa- ton Ac comes down to the finite state device used in the well-known string pattern matching algorithm of Aho and Corasick (see (Aho and Corasick, 1975)), requiring linear space only. If space requirements are of primary importance or when the rule set is very large, our method can then be considered for string- based transformation rewriting as an alternative to the already mentioned method in (Roche and Sch- abes, 1995), which is faster but has more onerous space requirements. Acknowledgements The present research was done while the first author was visiting the Center for Language and Speech Processing, Johns Hopkins University, Baltimore, MD. The second author is also a member of the Cen- ter for Language and Speech Processing. This work was funded in part by NSF grant IRI-9502312. 
The authors are indebted with Alberto Apostolico, Rao Kosaraju, Fernando Pereira and Murat Saraclar for technical discussions on topics related to this paper. The authors whish to thank an anonymous referee for having pointed out important connections be- tween TTS and term-rewriting systems. References Aho, A. V. and M. Corasick. 1975. Efficient string matching: An aid to bibliographic search. Communications of the Association for Comput- ing Machinery, 18(6):333-340. Baeten, J., J. Bergstra, and 3. Klop. 1987. Prior- ity rewrite systems. In Proc. Second International Conference on Rewriting Techniques and Applica- tions, LNCS 256, pages 83-94, Berlin, Germany. Springer-Verlag. Brill, E. 1993. Automatic grammar induction and parsing free text: A transformation-based ap- proach. In Proceedings of the 31st Meeting of the Association of Computational Linguistics, Colum- bus, Oh. Brill, E. 1995. Transformation-based error-driven learning and natural language processing: A case study in part of speech tagging. Computational Linguistics. Brill, E, and P. Resnik. 1994. A transformation- based approach to prepositional phrase attach- ment disambiguation. In Proceedings of the Fifteenth International Conference on Computa- tional Linguistics (COLING-199~), Kyoto, Japan. Chomsky, N. 1965. Aspects of the Theory of Syntax. The MIT Press, Cambridge, MA. Chomsky, N. and M. Halle. 1968. The Sound Pat- tern of English. Harper and Row. Cormen, T. H., C. E. Leiserson, and R. L. Rivest. 1990. Introduction to Algorithms. The MIT Press, Cambridge, MA. Dershowitz, N. and J. Jouannaud. 1990. Rewrite systems. In J. Van Leeuwen, editor, Handbook of Theoretical Computer Science, volume B. Else- vier and The MIT Press, Amsterdam, The Nether- lands and Cambridge, MA, chapter 6, pages 243- 320. Dubiner, M., Z. Galil, and E. Magen. 1994. Faster tree pattern matching. Journal of the Association for Computing Machinery, 41(2):205-213. Hoffmann, C. M. and M. J. O'Donnell. 1982. Pat- tern matching in trees. Journal of the Association for Computing Machinery, 29(1):68-95. Janssens, D. and G. Rozenberg. 1982. Graph gram- mars with neighbourhood-controlled embedding. Theoretical Computer Science, 21:55-74. Kaplan, R. M. and M. Kay. 1994. Regular models of phonological rule sistems. Computational Lin- guistics, 20(3):331-378. Kosaraju, S. R. 1989. Efficient tree-pattern match- ing. In Proceedings of the 30 Conference on Foun- dations of Computer Science (FOCS), pages 178- 183. Roche, E. and Y. Schabes. 1995. Deterministic part of speech tagging with finite state transducers. Computational Linguistics. Thatcher, J. W. 1967. Characterizing derivation trees of context-free grammars through a general- ization of finite automata theory. Journal of Com- puter and System Science, 1:317-322. 262 | 1996 | 34 |
Resolving Anaphors in Embedded Sentences
Saliha Azzam, University of Sheffield, Department of Computer Science, Regent Court, 211 Portobello Street, Sheffield S1 4DP, U.K., S.Azzam@dcs.shef.ac.uk

Abstract
We propose an algorithm to resolve anaphors, tackling mainly the problem of intrasentential antecedents. We base our methodology on the fact that such antecedents are likely to occur in embedded sentences. Sidner's focusing mechanism is used as the basic algorithm in a more complete approach. The proposed algorithm has been tested and implemented as a part of a conceptual analyser, mainly to process pronouns. Details of an evaluation are given.

1 Introduction
Intrasentential antecedents, i.e., antecedents occurring in the same sentence as the anaphor, are a crucial issue for any anaphora resolution method. The main problem is to determine the constraints that intrasentential phrases must respect in anaphoric relations. These constraints are used to determine relations between a given anaphor and its antecedents. Until now, this kind of constraint has been tackled mainly in syntactic terms, see (Lappin and Leass, 1994), (Merlo, 1993) and (Hobbs, 1985). We propose to consider new kinds of criteria that combine semantic restrictions with sentence structure. One of these criteria is, for example, the way in which the verb meaning influences the sentence structure, and in turn the way in which the sentence structure influences the anaphoric relations between intrasentential phrases. The structure we studied is the embedded sentence structure. Indeed, an important assumption we have made is that embedded sentences favour the occurrence of intrasentential antecedents. We exploit the focusing mechanism proposed by Sidner (Sidner, 1979), (Sidner, 1981), (Sidner, 1983), extending and refining her algorithms.
The algorithm is designed for anaphors generally, even if we focus mainly on pronouns in this paper. Indeed, the distinction between different kinds of anaphors is made at the level of anaphor interpretation rules. These resolution rule aspects will not be developed here, though they have been developed in the literature, e.g., see (Carter, 1987) and (Sidner, 1981), (Sidner, 1983). We focus more on the mechanisms that handle these different kinds of rules. We first present how intrasentential antecedents occur in embedded sentences. We then recall the main ideas of the focusing approach and expand on the main hypotheses which guided the design of the anaphora resolution algorithm.

2 Intrasentential Antecedents
2.1 Embedded sentences and elementary events
An embedded sentence contains either more than one verb or a verb and derivations of other verbs (see sentence 1 with the verbs said and forming).
1) Three of the world's leading advertising groups, Agence Havas S.A. of France, Young & Rubicam of the U.S. and Dentsu Inc. of Japan, said they are forming a global advertising joint venture.
Broadly speaking, embedded sentences concern more than one fact. In sentence 1 there is the fact of saying something and that of forming a joint venture. We call such a fact an elementary event (EE hereafter). Thus an embedded sentence will contain several EEs. Factors that influence embedded sentences are mainly semantic features of verbs. For example, the verb to say, which takes a sentence complement, favours the introduction of a new fact, i.e., "to say something", and the related fact. There are other classes of verbs such as want to, hope that, and so on.
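To illustrate the notion of elementary event, the sketch below (our own Python, not the paper's conceptual representation language) records sentence 1 as two EE structures; the predicate labels are illustrative, and a role whose value is None plays the part of the unbound variable that an anaphor such as they leaves behind until resolution fills it.

from dataclasses import dataclass, field

@dataclass
class ElementaryEvent:
    """One elementary event (EE): a predicate and its role fillers.
    A role whose value is None is an unbound variable, i.e. an anaphor
    still waiting for its antecedent."""
    predicate: str
    roles: dict = field(default_factory=dict)

    def unfilled_roles(self):
        return [r for r, v in self.roles.items() if v is None]


# Sentence 1 viewed as two elementary events (labels are illustrative).
ee1 = ElementaryEvent("move-information",       # "... said ..."
                      {"agent": "three of the world's leading advertising groups",
                       "theme": "EE2"})         # the reported fact
ee2 = ElementaryEvent("produce-joint-venture",  # "they are forming ..."
                      {"agent": None,           # the pronoun "they", unbound
                       "theme": "a global advertising joint venture"})

print(ee2.unfilled_roles())                     # ['agent']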
In the following, subordinate phrases, like rel- ative or causal sentences, will be also considered as embedded ones. 2.2 Embedded sentences with intrasentential antecedents First of all, we will distinguish the Possessive, Recip- rocal and Reflexive pronouns (PRR hereafter) from the other pronouns (non-PRR hereafter). 263 On the basis of 120 articles, of 4 sentences on av- erage, containing 332 pronouns altogether, we made the following assumption (1): Assumption: non-PRR pronouns can have in- trasentential antecedents, only if these pro- nouns occur in an embedded sentence. The statistics below show that of 262 non-PRR pronouns, there are 244 having intrasentential an- tecedents, all of which occur in embedded sentences and none in a "simple" sentence. The remaining 18 non-PRR pronouns have intersentential antecedents. Pronouns 332 non-PRR 262 With intrasentential antecedents 244 in an embedded sentence With intrasentential in a simple 0 sentence With intersentential antecedents 18 Our assumption means that, while the PRR pro- nouns may find their antecedents in an non embed- ded sentence (e.g., sentences 2 and 3) the non-PRR pronouns can not. 2) Vulcan made i~s initial Investment in Telescan in May, 1992. 3) The agencies HCM and DYR are ~hem- selves joint ventures. Without jumping to conclusions, we cannot avoid making a parallel with the topological relations de- fined in the binding theory (Chomsky, 1980), be- tween two coreferring phrases in the syntactic tree level. Assumption 1 redefines these relations in an informal and less rigorous way, at the semantic level, i.e., considering semantic parameters such as the type of verbs that introduce embedded sentences. 2.3 Using Sidner's Focusing Approach To resolve anaphors one of the most suitable existing approaches when dealing with anaphor issues in a conceptual analysis process is the focusing approach proposed by Sidner. However, this mechanism is not suitable for intrasentential cases. We propose to exploit its main advantages in order to build our anaphora resolution mechanism extending it to deal also with intrasentential antecedents. We describe the main elements of the focusing ap- proach that are necessary to understand our method, without going into great detail, see (Sidner, 1979) (Sidner, 1981) (Sidner, 1983). Sidner proposed a methodology, modelling the way "focus" of attention and anaphor resolution influence one another. Us- ing pronouns reflects what the speaker has focused on in the previous sentence, so that the focus is that phrase which the pronouns refer to. The resolution is organised through the following processes: 264 • The expected focus algorithm that selects an initial focus called the "expected focus". This selection may be "confirmed" or "rejected" in subsequent sentences. The expected focus is generally chosen on the basis of the verb seman- tic categories. There is a preference in terms of thematic position: the "theme" (as used by Gruber and Anderson, 1976 for the notion of the object case of a verb) is the first, followed by the goal, the instrument and the location or- dered according to their occurrence in the sen- tence; the final item is the agent that is selected when no other role suits. • The anaphora interpreter uses the state of the focus and a set of algorithms associated with each anaphor type to determine which element of the data structures is the antecedent. Each algorithm is a filter containing several interpre- tation rules (IR). 
Each IR in the algorithm appropriate to an anaphor suggests one or several antecedents de- pending on the focus and on the anaphor type. • An evaluation of the proposed antecedents is performed using different kinds of criteria (syn- tax, semantics, inferential, etc.) • The focusing algorithm makes use of data struc- tures, i.e., the focus registers that represent the state of the focus: the current focus (CF) repre- sentation, alternate focus list (AFL) that con- tains the other phrases of the sentence and the focus stack (FS). A parallel structure to the CF is also set to deal with the agentive pronouns. The focusing algorithm updates the state of the focus after each sentence anaphor (except the first sentence). After the first sentence, it con- firms or rejects the predicted focus taking into account the results of anaphor interpretation. In the case of rejection, it determines which phrase is to move into focus. This is a brief example (Sidner 1983) : a Alfred and Zohar liked to play baseball. b They played it every day after school before din- ner. c After lheir game, Alfred and Zohar had ice cream cones. d They tasted really good. • In a) the expected focus is "baseball" (the theme) • In b) "it" refers to "baseball" (CF). "they" refers to Alfred and Zohar (AF) • The focusing algorithm confirms the CF. • In d) "they" refers to "ice cream cones" in AFL. • The focusing algorithm decides that since no anaphor refers to the CF, the CF is stacked and "ice cream cones" is the new CF (focus move- ment). We call a basic focusing cycle the cycle that in- cludes : • the focusing algorithm • followed by the interpretation of anaphors, • then by the evaluation of the proposed an- tecedents. 2.4 What needs to be improved in the focusing approach? 2.4.1 Intrasentential antecedents The focusing approach always prefers the previ- ous sentences' entities as antecedents to the current sentences. In fact only previous sentence entities are present in the focus registers. Thus phrases of the current sentence can not be proposed as an- tecedents. This problem has already been under- lined, see (Carter, 1987) in particular who pro- posed augmenting the focus registers with the en- tities of the current sentence. For example in sen- tence 4 while the focus algorithm would propose only "John" as an antecedent for "him", in Carter's method "Bill" will also be proposed. 4) John walked into the room. He told Bill someone wanted to see him. 2.4.2 Initial Anaphors The focusing mechanism fails in the expected fo- cus algorithm when encountering anaphors occur- ring in the first sentence of a text, which we call initial anaphors, such as They in sentence (1). The problem with initial anaphors is that the focus reg- isters cannot be initialised or may be wrongly filled if there are anaphors inside the first sentence of the text. It is clear that taking the sentence in its classi- cal meaning as the unit of processing in the focusing approach, is not suitable when sentences are embed- ded. We will focus on the mechanisms and algorithmic aspects of the resolution (how to fill the registers, how to structure algorithms, etc.) and not on the rule aspects, like how IRs decide to choose Bill and not John (sentence 4). 3 Our Solution As stated above, embedded sentences include sev- eral elementary events (EEs). EEs are represented as conceptual entities in our work. We consider that such successive EEs involve the same context that is introduced by several successive short sen- tences. 
Moreover, our assumption states that when non-PRR anaphors have intrasentential antecedents, they occur in embedded sentences. Starting with these considerations, the algorithm is governed by the hypotheses expanded below. 3.1 Main hypotheses First hypothesis : EE is the unit of processing in the basic focusing cycle. An EE is the unit of processing in our resolution al- gorithm instead of the sentence. The basic focusing cycle is applied on each EE in turn and not sentence by sentence. Notice that a simple sentence coincides with its EE. Second hypothesis : The "initial" EE of a well formed first sentence does not contain non-PRR pronouns just as an initial simple sentence can- not. For example, in the splitting of sentence 1 into two EEs (see below), EEl does not contain non-PRR pronouns because it is the initial EE of the whole discourse. EEl) '¢rhree of the world's leading adver- tising groups, Agence I-Iavas S.A. of France, Young & Rubicam of the U.S. and Dentsu Inc. of Japan, said" EE2) "they are forming a global advertis- ing joint venture." Third hypothesis : PRR pronouns require special treatment. PRR could refer to intrasentential antecedents in simple sentences (such as in those of sentences 3 and 4). An initial EE could then contain an anaphor of the PRR type. Our approach is to add a special phase that resolves first the PRRs occurring in the initial EE before applying the expected focusing al- gorithm on the same initial EE. In all other cases, PRRs are treated equally to other pronouns. This early resolution relies on the fact that the PRR pronouns may refer to the agent, as in sentence 3, as well as to the complement phrases. However the ambiguity will not be huge at this first level of the treatment. Syntactic and semantic features can easily be used to resolve these anaphors. This relies also on the fact that the subject of the initial EE cannot be a pronoun (second hypothesis). Having mentioned this particular case of PRR in initial EE, we now expand on the whole algorithm of resolution. 3.2 The Algorithm In the following, remember that what we called the basic focusing cycle is the following successive steps • applying the resolution rules, • applying the focusing algorithm, i.e., updating the focus registers • the evaluation of the proposed antecedents for each anaphor. 265 The algorithm is based on the decomposition of the sentence into EEs and the application of the ba- sic focusing cycle on each EE in turn and not sen- tence by sentence. The complete steps are given below (see also figure 1): Step 1 Split the sentence, i.e., its semantic repre- sentation, into EEs. Step 2 Apply the expected focus algorithm to the first EE. Step 3 Perform the basic focusing cycle for every anaphor of all the EEs of the current sentence. Step 4 Perform a collective evaluation (i.e., evalu- ation that involves all the anaphors of the sen- tence), when all the anaphors of the current sen- tence are processed. Step 5 Process the next sentence until all the sen- tences are processed: • split the sentence into EEs • apply Step 3 then Step 4. F~rst Sentence 1 Sentence Splitting Algorithm [ Next EE I Sentence Splitting Algorith~r Expected Focus Algorithm [ I [ Interpretation of each Anaphor ] l Evaluation of the proposed antecedents I 1 | Next EE [ No more EEs Nex~t Collective Evaluation of the Antecedents [ ! Sentence | No more sentences Basic Focusing Cycle Figure 1: The Algorithm Main Results : 1. Intrasentential antecedents are taken into ac- count when applying the focusing algorithm. 
For example, in sentence 1, the intrasentential antecedent Bill will be taken into account, be- cause EEl would be processed beforehand by the expected focusing algorithm. 2. The problem of initial anaphors is then re- solved. The expected focusing algorithm is ap- plied only on the initial EE which must not con- tain anaphors. 3.3 Examples and results To illustrate the algorithm, let's consider the follow- ing sentence : Lafarge Coppee said it would buy 10 per- cent in National Gypsum, the number two plasterboard company in the US, a pur- chase which allows it to be present on the world's biggest plasterboard market. At the conceptual level, there are 3 EEs. They are involved respectively by the said, buy, and allows verbs. They correspond respectively to the following surface sentences: EEl "Lafarge Coppee said" EE2 "it would buy 10 percent in National Gypsum, the number two plasterboard company in the US" EE3 "a purchase which allows it to be present on the world's biggest plasterboard market." Consider the algorithm : • the expected focusing algorithm is applied to the first EE, EEl, which contains non-PRR anaphors. • the other phases of the algorithm, i.e., the basic focusing cycle, are applied to the subsequent EEs : - EE2 contains only one pronoun it, which is resolved by the basic focusing cycle - it in EE3 will be resolved in the same way. The anaphora resolution has been implemented as a part of a conceptual analyser (Azzam, 1995a). It dealt particularly with pronouns. It has been tested on a set of 120 news reports. We made two kinds of algorithm evaluations: the evaluation of the im- plemented procedure and an evaluation by hand. For the implementation the success rate of resolu- tion was 70%. The main cases of failure are related to the non implemented aspects like the treatment of coordination ambiguities and the appositions, or other anaphoric phenomena, like ellipsis. For the second evaluation which concerns the real evaluation of the approach,i.e., without going into the practical issues concerning implementation, the success rate was 95%. The main cases of failure were due to the cases that were not considered by the algorithm, like for example the pronouns occurring before their antecedents , i.e., cataphors. Such cases occur for example in sentences 5 and 6 pointed out 266 by Hobbs (IIobbs, 1985) to discuss the cases that are not handled easily in the literature. 5) Mary sacked out in his apartment before Sam could kick her out. 6) Girls who he has dated say that Sam is charming. Our algorithm fails in resolving his in 5, because the algorithm searches only for the entities that pre- cede the anaphor in the text. The same applies for he in 6. However improving our algorithm to process classical cases of cataphors, such as that in sentence 6, should not require major modifications, only a change in the order in which the EEs are searched. For example, to process pronouns of the sentence 6 split into two EES (see below), the algorithm must consider EE2 before EEl. This means applying the step 2 of the algorithm to EE2, then step 3 to EEl. The sentence 5 should require specific treatment, though. EEl) "that Sam is charming" EE2) "Girls who he has dated say" IIobbs also pointed out the cases of "picture noun" examples, as in sentences 7 and 8: 7) John saw a picture of him. 8) John's father's portrait of him. In 7 our algorithm is successful, i.e., it will not iden- tify him with John because of our previous assump- tion (section 2.2). 
However our algorithm would fail in 8 because the non-PRR pronoun him could refer to John which occurs in the same EE. Notice that Hobbs' (I-Iobbs, 1985) remark that "the more deeply the pronoun is embedded and the more elaborate the construction it occurs in, the more acceptable the non reflexive" is consistent with our assumption. For example in the embedded sentence 9 where ei- ther the reflexive (himself) or non reflexive pronouns (him) may be used, it is more natural to make use of him. 9) John claimed that the picture of him hanging in the post office was a fraud. 4 The Conceptual Level We comment here on the main aspects of the con- ceptual analysis that are related to the anaphora resolution process. They concern mainly the way of splitting embedded sentences and the problems of determining the theme and of managing the other ambiguities and the several readings. The conceptual analyser's strategy consists of a continuous step-by-step translation of the original natural language sentences into conceptual struc- tures (CS hereafter). This translation uses the re- sults of the syntactic analysis (syntactic tree). It is a progressive substitution of the NL terms located in the syntactic tree with concepts and templates of the conceptual representation language. Triggering rules are evoked by words of the sentence and allow the activation of well-formed CS templates when the syntactico-semantic filter is unified with the syntac- tic tree. The values caught by the filter variables are the arguments of the CS roles, i.e., they fill the CS roles. If they are anaphors, they are considered to be unbound variables and result in unfilled roles in the CS. The anaphora resolution aims therefore at filling the unfilled roles with the corresponding antecedents. 4.1 Splitting into EEs The splitting of a sentence in EE is done on the corresponding CS. A minimal CS is a template com- prising a predicate that identifies the basic type of the represented event and a set of roles or predicate cases. For example, the sentence "to say that they agree to form a joint venture" is represented, in a simpli- fied way, with three templates, corresponding to the predicates: • move information (from "to say"), • produce an agreement (from "to agree"), • produce a joint venture (from "to form"). Given that one template at the semantic level repre- sents an elementary event, the splitting is implicitly already done when these templates are created in the triggering phase. Indeed, the syntactico-semantic filter of the triggering rules takes into account the semantic features of words (mainly verbs) for recog- nising in the surface sentence those that are able to trigger an elementary event. 4.2 Determining the theme Gruber and Anderson characterise the theme as fol- lows: if a verb describes a change to some entity, whether of position, activity, class or possession, then the theme is the changed entity, (Gruber, 1976) and (Anderson, 1977). As Carter (Carter, 1987) demonstrated, this definition of Gruber and Ander- son is sufficient to apply the focusing mechanism. This assumption is particularly apt when we dispose of a conceptual representation. Indeed, to deter- mine the thematic roles, we established a set of the- matic rules that affect for a given predicative occur- rence, its thematic functions according to the predi- cate type, the role type and the argument's semantic class. 
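To make the link between unfilled CS roles and the resolution mechanism concrete, the sketch below is our own illustrative Python: the interpretation rules and the evaluation criteria are reduced to caller-supplied placeholders, propose and evaluate, and the EE objects are structures with a roles dictionary like the one sketched earlier. Unbound roles are filled from the focus state, and the focus registers (CF, AFL, FS) are then updated in a deliberately simplified way.

class FocusState:
    """Focus registers: current focus (CF), alternate focus list (AFL),
    focus stack (FS), plus a single slot for the actor focus used with
    agentive pronouns (a simplification)."""
    def __init__(self):
        self.cf = None
        self.afl = []
        self.fs = []
        self.actor = None


def basic_focusing_cycle(ee, focus, propose, evaluate):
    """One basic focusing cycle on one EE: propose antecedents for each
    unfilled role, evaluate them, commit the retained one, then update
    the focus state.  propose and evaluate are placeholders for the
    interpretation rules and the syntactic/semantic/inferential evaluation."""
    for role, value in list(ee.roles.items()):
        if value is not None:
            continue
        candidates = propose(role, focus)        # e.g. [CF] + AFL + FS
        antecedent = evaluate(role, candidates, ee)
        if antecedent is not None:
            ee.roles[role] = antecedent          # the unfilled role is filled
    # crude stand-in for focus confirmation/movement: keep the first theme
    # seen as CF and make the EE's entities available to later EEs via AFL
    if focus.cf is None and ee.roles.get("theme") is not None:
        focus.cf = ee.roles["theme"]
    focus.afl.extend(v for v in ee.roles.values() if v is not None)


def resolve(ees, propose, evaluate):
    """Apply the basic focusing cycle EE by EE, not sentence by sentence."""
    focus = FocusState()
    for ee in ees:
        basic_focusing_cycle(ee, focus, propose, evaluate)
    return ees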
4.3 Managing other ambiguities An important aspect appears when one designs a concrete system, namely how to make other dis- ambiguation processes cohabit. In the concep- tual analyser, the general disambiguation module (GDM) deals with other ambiguities, like preposi- tional phrase attachment. It coordinates the treat- 267 ment of different kinds of ambiguities. This is nec- essary because the conceptual structures (CS) on which the rules are performed could be incomplete because of other types of ambiguities not being re- solved. For example, if the CF of the sentence is a PP object that is not attached yet in the CS the thematic rules fail to fill the CF. The GDM ensures that every disambiguation module intervenes only if previous ambiguities have already been resolved. The process of co-ordinating ambiguity processing is fully expanded in (Azzam, 1995b). 4.4 Multiple readings When dealing with ambiguities, another important aspect is managing multiple readings. At a certain point when the GDM calls the anaphora module to deal with a given anaphor, the status of the concep- tual analysis could be charaeterised by the following parameters : • The set of conceptual structures for the current reading Ri on which the resolution is performed, given that several readings could arise from pre- vious ambiguity processing. • The set of conceptual structures of the current sentence Si where the anaphor occurs; • The set of conceptual structures of the current elementary event EEi where the anaphor occurs after the Si splitting. • The state of the focus (content of the registers), SFi The main assumption is that the anaphora resolu- tion algorithm always applies to a single state, (Ri, Si , EEi, SFi) when resolving a given anaphor (Step 3): a If several antecedents are still possible after the individual evaluation of the anaphor, Ri is then duplicated, in Rij, as many times as there are possibilities. b When performing the collective evaluation of all Si anaphors, every inconsistent Rij is sup- pressed. c The result is a set of readings (Rij, Sj , EEj, SFi). 5 Conclusion We have proposed a methodology to resolve anaphors occurring in embedded sentences. The main idea of the methodology is the use of other kinds of restrictions between the anaphor and its an- tecedents than the syntactic ones. We demonstrated that anaphors with intrasentential antecedents are closely related to embedded sentences and we showed how to exploit this data to design the anaphora resolution methodology. Mainly, we ex- ploited Sidner's focusing mechanism, refining the classical unit of processing, that is the sentence, to that of the elementary event. The algorithm has been implemented (in Common Lisp, Sun Spare) to deal with pronouns as a part of a deep analyser. The main advantages of the proposed algorithm is that it is independent from the knowledge representation language used and the deep understanding approach in which it is integrated. Thus, it could be set up in any conceptual analyser, as long as a semantic rep- resentation of the text is available. Moreover Sid- ner's approach does not impose its own formalisms (syntactic or semantic) for its application. The im- provement of the proposed algorithm requires deal- ing with special cases of anaphors such as cataphors and also with specific cases which are not easily han- dled in the literature. 
For example, we saw that a solution to processing cataphors could be to recon- sider the order in which the conceptual structures (elementary events beforehand) are searched. 6 Acknowledgements This work has been supported by the Euro- pean Community Grant LE1-2238 (AVENTINUS project). References Anderson, S.R. 1977. Formal syntax. In Wasow and Akmajian, editors, Comment on the paper by Wasow in Culicover. Academic Press, pages 361- 376. Azzam, Saliha. 1995a. Computation of Ambiguities (Anaphors and PPs) in NL texts. CLAM : The prototype. Ph.D. thesis, Paris Sorbonne Univer- sity. Azzam, Saliha. 1995b. Anaphors, pps and disam- biguation process for conceptual analysis. In 14th International Joint Conference on Artificial In- telligence (IJCAI'95). San Mateo (CA): Morgan Kaufmann. Carter, David. 1987. Interpreting Anaphors in nat- ural language texts. Chichester : Ellis Horwood. Gruber, J.S. 1976. Lezical structures in syntax and semantics. North-Holland. Hobbs, Jerry. 1985. Resolving pronoun references. In B. Grosz K. Sparck-Jones B. Webber, editor, Readings in Natural Language, volume 44. Morgan Kaufmann Publishers Los Altos California, pages 311-338. Lappin, S. and H.J. Leass. 1994. An algorithm for pronominal anaphora resolution. Computational Linguistics, 20:535-561. Merlo, P. 1993. For an incremental computation of intrasentential coreference. In the 13th Interna- tional Joint Conference on Artificial Intelligence (IJCAI'93), pages 1216-1221. San Mateo (CA): Morgan Kaufmann. 268 Sidner, C. 1979. Toward a computation of in- trasentential coreference. Technical Report TR- 537, MIT. Artificial Intelligence Laboratory. Sidner, C. 1981. Focusing for interpretation of pro- nouns. American Journal of Computational Lin- guistics, 7:217-231. Sidner, C. 1983. Focusing in the comprehension of definite anaphora. In Brady. M and Berwick R.C, editors, Computational Models of Discourse. Cambridge (MA) : The MIT Press. 269 | 1996 | 35 |