Empirical Study of Predictive Powers of Simple Attachment Schemes for Post-modifier Prepositional Phrases

Greg Whittemore, Electronic Data Systems Corp., 5951 Jefferson Street N.E., Albuquerque, NM 87109-3432
Kathleen Ferrara, Texas A&M University
Hans Brunner, US West Advanced Technologies
6 June 1990

Abstract

This empirical study attempts to find answers to the question of how a natural language (henceforth NL) system could resolve attachment of prepositional phrases (henceforth PPs) by examining naturally occurring PP attachments in typed dialogue. Examination includes testing predictive powers of existing attachment theories against the data. The result of this effort will be an algorithm for interpreting PP attachment.

Introduction

Difficulty in resolving structural ambiguity involving PPs arises because of the great variety of syntactic structures which PPs can modify and the varying distances PPs may be from the constituents with which they are associated. Simple schemes to resolve attachments utilize information drawn from reported tendencies in the human parsing mechanism, such as the preference for PPs to attach to constituents that immediately precede them. It is always tempting to utilize such schemes in computer NL processors because they provide clear models for resolution that are both easy and cheap (in terms of steps involved) to implement. The problem with these schemes is that they can easily be made to fail by manipulating parameters that they 'know' nothing about, such as semantics, context, and intonation. Clearly, more elaborate schemes for attachment resolution are needed, but what these schemes should contain and how they should be implemented remain open.

This study attempts to find answers to the question of how a computer program should resolve attachment by examining naturally occurring PP attachments in a typed dialogue domain drawn from a study by Brunner, Whittemore, Ferrara, and Hsu (1989). Various previously developed theories of PP attachment are tested against the data to see how well they predict correct attachments of PPs in the typed dialogues. The result of this effort will be a hypothesis of attachment resolution that seems to fit the data.

Empirical overview

The methods for generating the 13 naturally occurring dialogues are described in Brunner, et al. (1989). In essence, this study employed a "wizard of Oz" paradigm in which a human confederate -- the Wizard -- simulates an advanced computer system engaged in written/interactive dialogue with the experimental participant. Participants of the study were each asked to plan a specific travel agenda of their choice with information obtained solely by typing natural language messages and requests through a VT220 terminal to a human-assisted travel information system located in a separate room. In response to this, the Wizard, who had access to both computerized and hard-copy travel data, was instructed to engage in constructive and free-form dialogue with the participant in order to best obtain the reservations and flight information required by them. Each dialogue took one and a half hours to complete, allowing enough time for about 70 sentences per dialogue for a total of 910 sentences.

In another study, Whittemore, Ferrara, and Brunner (1989) quantify the occurrence of PPs in the 13 dialogues in terms of the syntactic types to which they attach and the overall syntactic environments in which they appear.
Data is presented in terms of Tension Sites to illustrate possible syntactic attachment interpretations and actual interpretations that occurred. For instance, in the sentence John eats his bananas in his backyard, potential attachment ambiguity lies in the fact that the PP in his backyard can attach to the noun phrase object his bananas or to the verb eats. Such positions were referred to as Tension Sites. All such Tension Sites for sentences with PPs were recorded along with actual attachments. Some instances were simple as in the example above with only a minimum of Tension Sites, while others were quite involved and had up to seven Tension Sites in which a verb and np-object along with the objects of five other prepositions were available as attachment sites. Of the 910 sentences in the 13 dialogues, 745 had instances of potential ambiguity in attachment. Much of the analysis presented in this paper is drawn from the Whittemore, et al. study.

Theories of Preferencing for Post-modifier PP Attachment

Several of the PP attachment schemes available in the literature were used as a backdrop for examining attachment tendencies in the typed dialogues. These predictors (listed below) were basically employed as individual templates which were applied against the data. Percentages of correct predictability were recorded and some investigation into their failures was made. Only attachments to nouns and verbs were made in this study, giving a corpus of 724 examples. The attachment predictors tested were:

RIGHT ASSOCIATION (RA) - the tendency for constituents to associate with adjacent items to their right (Kimball 1973), also known as low attachment. Late Closure (Frazier 1979) is a similar notion.

MINIMAL ATTACHMENT (MA) - the tendency to attach in a manner in which the least number of syntactic rules are employed (Frazier 1979).

LEXICAL PREFERENCE VIA VERBS (LP) - the tendency for PPs to attach to verbs that have a preference for them (Ford, Bresnan, and Kaplan 1982).

LEXICAL PREFERENCE VIA NOUNS (LP) - is similar to verb LP, but PPs attach to nouns that may have a preference for them as discussed briefly in Rappaport (1983).

LEXICAL PREFERENCE VIA PREPOSITIONS (LP) - is similar to verb and noun LP, but prepositions themselves may have a tendency to seek out certain kinds of constructions. For instance, temporal PPs may have a preference for attaching to entities such as events that have temporal qualities to them. Prepositions acting as functors like this are mentioned in Wilks, Huang, and Fass (1985).

REFERENTIAL SUCCESS (RS) - dictates that one first checks to see if there are any 'like' entities in the discourse, namely ones that have similar PPs as modifiers. If there are matches, then attachment takes on the same look as the antecedent. There are also notions of presupposition in the theory that make predictions about definite, indefinite, generic, and generic plural noun phrases (Crain and Steedman 1984). In a streamlined version of the theory (Hirst 1987), definite noun phrases require the recipient of discourse to try to make a connection to existing knowledge. Because of this added effort in which one must search his discourse space, it has been predicted that attachment to a definite noun phrase would be less preferred. Other noun phrases -- indefinites, generics, and bare plurals -- along with verbs are preferred over definites as attachment sites since they supposedly require less search over discourse space.
Success of Preferencing Schemes Against the Data

The 'effect' that each of the preferencing schemes reviewed above has on the attachment of the post-modifiers is explored in the remaining sections. Not every possible PP attachment found in the corpus is examined. An attempt is made to explain only attachments to nouns and verbs (thus those made to adverbs, adjectives, prepositions themselves, or within idiomatic expressions are excluded).

RIGHT ASSOCIATION

From the data evident in the dialogues it can be seen that RA seems to have a fairly strong influence within the typed discourse domain of travel. As noted in the Tension Site tabulations (Whittemore, et al.), low attachment was observed 55% of the time. However, its almost equally high failure rate of 45% dictates that RA by itself is not a satisfactory scheme for deciding PP attachments.

MINIMAL ATTACHMENT

The success of MA in the attachment of PPs in the 13 dialogues is rather poor. Out of 488 instances in which there was an opportunity for MA to take a role, only 177 examples (or 36%) behaved according to a strict notion of MA. By a strict notion we mean that whenever possible, the least number of rules are applied.

REFERENTIAL SUCCESS AND PRESUPPOSITION

Using only definite NPs as a guide for indicating that a noun phrase is being used to refer to some antecedent, strict notions of RS failed miserably -- out of 101 definite noun phrases only 12 instances of exact match with some antecedent occurred. There were also 17 cases in which some subsequent phrase was used to 'restrict' or refer to some semantic subset of an antecedent. There was one additional case in which a subsequent noun phrase was a rephrasing of an antecedent. For the remaining 71 instances, no antecedent could be located within the text. Altogether there were only 30 out of 101 that could be deemed successful. It should also be noted that for a NL understanding system to correctly interpret just these few examples much machinery would be required to 'understand' when something was a 'rephrasing' or 'restriction' of an antecedent.

The accompanying notion of presupposition, in which PP attachment to definite NPs is avoided when no such NP+PP already exists in the discourse, would, numerically, need to be regarded as a semi-successful predictor of attachment site. Disregarding the 30 cases in which an antecedent for an NP was found in the discourses, one would have to say that avoiding attachment to NP was successful since for the remaining 694 instances (724 total minus the 30 cases above) the correct decision was made to avoid attachment to definite NPs 623 times (694 cases minus the 71 cases of non-anaphoric NP+PPs) for a 90% success rate. However, predicting correct attachment beyond avoiding definite NPs was not successfully performed. It is not enough to just try to avoid attaching to definite NPs; there must also be a way of specifying how PPs are to link up with other non-definites and verbs. In the study, Hirst's (1987) modified version was used in which one attaches to the last occurring non-definite or verb in a RA fashion. Employing a combined presupposition/RA approach, the success is still low -- only 52% (or 362 attachments) are correctly predicted.

VERB LEXICAL PREFERENCING

To determine the success of LP of verbs in the 13 travel dialogues, each verb used within the dialogues was examined for its potential for LP.
Some verbs were determined to have a very strong LP, such as some two-part verbs like involved in or verbs like live that have an obviously strong preference for locative PPs. The rest were determined to be LP verbs through a consensus of 3 individuals, and when possible, further substantiated to be LP verbs through the aid of two sources on verbs and their complements - A COMPLETE GRAMMAR OF ENGLISH by Quirk, Greenbaum, Leech, and Svartvik (1972) and VALENCY OF VERBS by Allerton (1982).[1]

After a complete list of the verbs was derived, the number of times that the verbs appeared with sought-after prepositions was determined and tabulated. Next, the success of the LP verbs was determined by quantifying the times that they failed versus the times they succeeded. Reasons for failure in LP verbs were then sought out through an analysis of the sentences in which LP verbs and possible PPs that could go with LP verbs were present, but the two were not associated with each other.

A synopsis of the findings on verb LP is below. The main point to be gleaned from this synopsis is that there seem to be a fairly large number of PP attachments that could be construed to be the result of verb LP -- 228 out of 724 total. This is significant because it indicates that the incorporation of an accurate LP scheme could be beneficial in a PP attachment resolution scheme.[2]

verb lexical preferencing: 228 instances of verb LP; 47 different verbs; examples: arranged through, arrive at, begin from, fly from/to, start at

[1] There have been several methods suggested in the literature for determining lexical preferencing, but it was felt at the time that their predictive powers were somewhat unreliable, though the authors could very well be wrong. Readers should refer to chapter one in Somers (1987) for a good discussion of various preference-determining schemes.

[2] Closer scrutiny of the different LP verbs also made it apparent that the number of domain-specific LP verbs is comparatively quite large. For instance, the verbs begin, book, change, depart, fly, get, and leave, to name some, all have senses that seemed particular to the travel domain.

The tabulations shown above are only for correct attachments in which it could be decided that a particular LP verb did attach to a PP. There were also 21 LP verbs that failed to link up with existing PPs that they normally seek. Verb-LP alone failed in 18 of the 21 instances, seemingly because of the presence of multiple LP verbs. In (1) is an example from the dialogues.

(1) Before deciding that I want to know the flight times for United Air Lines LEAVING from Austin and GOING TO JFK in New York on August 30.

The verb LEAVE was determined to have a preference for the preposition TO, as was the verb GO. However, in the example TO attaches only to GO. To account for the attachments some added machinery is needed. It was earlier demonstrated that there was a 54% tendency for attachment of PPs to be to the most immediate low constituent to their left, or Right Association - RA. RA has also been shown in the work of Wilks et al. (1985) and Frazier (1979) to be beneficial when choosing between two LP verbs. They predict that when multiple LP verbs appear a sought-after PP attaches to the last LP verb that precedes it. In the travel domain in this study, with a combination of RA and verb LP it was found that in every case in which 2 verbs were vying for the same PP attachment, attachment was made to the lower verb.
With this additional machinery all but 3 of the incorrect attachments in sentences with LP verbs can be explained.

In the 3 remaining instances in which attachment goes against the notion of LP, attachments were made to nouns. In (2) is one of the instances. In (2), show was deemed as normally calling for a PP headed by to, but attachment went to the NP object following the verb. Under a strict notion of verb LP there is no provision to allow the attachment of PPs to nouns following LP verbs. The possibility of nouns having LP characteristics will be explored in the next section, and the example below should be re-examined in light of the data there.

(2) I need to know would you like for me to SHOW you some FLIGHT schedules to Dublin?

NOUN LP FOR PPS

The methodology for exploring noun LP was similar to that of verb LP. Shown below are the overall results for noun LP. As indicated, the number of PPs attaching to LP nouns is again comparatively quite large, almost as large as the number of attachments to LP verbs -- 183 versus 228. Thus, as is the case for LP verbs, noun LP seems to be a significant means by which PP attachments can be predicted.[3]

noun lexical preferencing: 183 instances of noun LP; 24 different LP nouns; examples: (air)fare(s) from/to, bus to, carrier from/to, and travel(ing) by

[3] Again, as with the LP verbs, there are many nouns that seem to have LP for the travel domain. The nouns bus, carrier, change, connections, dollars, airfare, flights, one way, travel, and roundtrip all seem to have senses particular to the domain at hand.

Under the LP noun analysis, all instances in which there was a single LP noun were correctly accounted for by a noun LP scheme. Under a LP noun analysis PPs that were at a proximal, such as (3), or great distance, such as (4), were able to correctly link up with appropriate nouns.

(3) Would you like for me to show you some FLIGHTS TO Dublin?

(4) What is the round trip FARE for Aer Lingus and for British Airlines FROM JFK on August 30 TO Dublin returning Sept 21?

There were three sentences in which multiple LP words appeared in which there was first an LP noun, and later either another LP noun or an LP verb. With these, using the same RA analysis that was employed for LP words, correct predictions about attachment can be made - when any two LP words that seek the same PP are present, no matter if they are nouns or verbs, attachment is made to the latter LP word. For instance, sentence (5) has two LP nouns, trip and flight, both of which were deemed to have a preference for the singly occurring PP headed by from. By enforcing RA, in which the attachment of the from PP is made to the last occurring and lowest LP noun (in this case flight), the correct interpretation can be derived.

(5) Then what you would rather have is a round TRIP to London, with a separate FLIGHT from London to Dublin.

Similarly, when deriving interpretations in which LP verbs are followed by LP nouns, RA between the competing LP words makes the correct interpretation. Thus in the 3 sentences in which LP verbs are followed by LP nouns, and LP verbs and nouns prefer the same PPs, RA attachment is favored with attachment to the three last occurring LP nouns.

The combined noun and verb LP scheme is: If an LP verb or LP noun is present, apply verb or noun LP. If two LP verbs or nouns are present that seek the same PP, use the notion of RA and attach the PP to the last word that seeks it.
MODIFYING PPS (OR PP LP)

The verb and noun LP schemes demonstrated above were successful but only for the cases in which LP verbs and nouns appeared. Excluding the 411 PPs that seemed to be accounted for via LP, there still remain to be explained 313 PPs, 43% of the cases. Since for the remaining PPs, the predominant general preference schemes were either not appropriate (verb LP, noun LP, or RS) or shown not to be powerful enough predictors by themselves (RA and MA), the PPs were examined in terms of the functions they served in hopes that some generalities amongst them would become evident. This proved to be a promising exercise since most of the PPs were found to belong to two function types, temporal and locative indicators. Of the remaining PPs, 189 (60% of the remaining) were temporal, 90 (28%) were locative, and 34 (12%) were of a mixed variety. Some examples of these are provided in (6).

(6) TEMPORAL. British Airlines has a flight that leaves AT 12:30.
LOCATIVE. Could you suggest a few hotels in a moderate price range IN a nice part of London?
OTHER/MIXED. Please book me on these flights WITH an aisle seat.

For the PPs involved in LP, it could be argued that their attachment is determined by the near necessity that some argument position for a LP head be filled. With the remaining PPs, there seemed to be something else required in order to make their attachment. Instead of having something look for the PPs, it appeared that there needed to be a way by which the PPs could serve as functors in which they seek out arguments (a notion also defended in Bresnan, 1982). The items to which the temporal and locative PPs attach are ones that have some temporal or locative quality to them.

For temporals, attachment sites are either actions that can occur at some particular time or some state that must last for some period of time. In the type-written dialogues in the travel domain, the combination of leftward search for a temporal-accepting noun or verb and RA proved to be successful. With a combined PP LP/RA algorithm in which temporal-PPs look for the first NP or VP to their left that has a temporal quality, the attachment of temporal-PPs was successfully predicted in all but one of the 81 instances.

For locative-PP modifiers, using the same scheme as was used for temporal-PP modifiers in which after noun and verb LP fail a search is performed for the last locative-accepting item to the left, predictability of attachment of locative-PPs was again almost 100%.[4]

[4] Actually, out of the 90 instances of locative PPs (this excludes those PPs that are called for by LP words) 8 require further elaboration. Examples of further elaboration are permitting gapping out of complex NPs so that PPs can attach to their 'extracted' elements as in (a) and having mechanisms to derive compound nouns and adjective/noun combinations as in (b).
a. Which airport do you want to fly to *GAP* in Paris?
b. Provide DEPARTURE TIMES from Dublin on 9/20/86 to Boston with ARRIVAL TIMES in Boston.

The resulting preferencing scheme for temporal-locative-PP LP is:
- MUST be ordered after noun and verb LP
- If there is a locative PP, attach to the most adjacent constituent to the left that has a head with a locative quality.
- If there is a temporal PP, attach to the most adjacent constituent to the left that has a head with a temporal quality.

added notes: Must be able to link up with EXTRACTED elements.
Characteristics of EXTRACTED elements must be associated with their gaps before linking locative PPs is attempted. Must first link any temporal/locative qualities of modifying adjectives to the modified head.

OTHER PP MODIFIERS

The remaining PP modifiers, those that are probably not sought after by an LP verb or noun and do not belong to the class of temporal-PPs or locative-PPs, were treated together. The reason for this particular grouping was that there were a number of functions evident in some PPs that occurred very infrequently and since one of the major foci of the study was to try to find general means of deciding attachment of PPs, individualization of these PPs was, at first, discounted. In some of the prior attachment schemes, there were some elements that were given the power to seek out some other constituent (e.g. LP verbs sought out certain case types presented in particular PPs and temporal PPs sought out temporal-bearing nouns or verbs). Attempting to use LP with the varied other group was not possible since no one function type (e.g. such as temporality) and no single preference characteristic was evident. Other schemes were necessary for this group.

What proved to be successful was the Hirst (1987) modified version of presupposition in which attachment to definites is generally avoided. Adding the notion of RA, one can also decide between equally weighted non-definites and verbs when both are present. The combined presupposition-RA algorithm is expressed below. When coming upon a PP that was of the other type, an attachment is made to the most recent verb or non-definite noun in a RA fashion.

Avoid attachment to definite NPs and attach to the most recently occurring verb or non-definite NP to the left.

As shown below under this scheme, correct prediction was made 100% of the time for the non-definite+verb grouping. However, when examining the success of attachment with the definite NPs, the rate of successful prediction was much lower. In 13 instances, avoiding attachment to definite NPs was the correct thing to do, but 7 times it was not, resulting in a 65% success rate. Thus if one permits the RA+non-definite noun preferencing scheme, the only items needing further explanation are the definite NPs.

Correct predictions of attaching "other" PPs to last occurring available verb or non-definite noun to the left: 100%
Correct predictions to avoid attachment to definite NPs: 65%

With the limited group of 7 definite NPs (these were the remaining, unresolved definite NPs), it was easy to identify a single class to which the conflicting NPs belonged. All the nouns but one[5] that could be associated with PPs were ones that could be used in partitive expressions. Partitive nouns can be separated out from other nouns as those noun expressions that denote a kind or quantity and are typically followed by the preposition of. In (7) are two examples from the dialogues.

(7) a. the legs of your trip.
b. the size of the hotel

The algorithm for the other group is: Check to see if the preceding lowest constituent is a definite NP and part of a partitive expression. If it is, attach the PP to the preceding definite NP. Otherwise, attach to the most recently occurring verb or non-definite NP.

[5] The sole exception was with the noun feeling in the expression the feeling of the community. It is highly probable that this is an idiomatic noun phrase and should be entered in an idiomatic lexicon.
Overall Algorithm

As laid out below, after some preliminary tasks are performed, namely associating nouns with their adjectives and extracted items with their gaps, the first preference to apply is noun and verb LP. If noun and verb LP fails, the two-stepped temporal/locative modifier preference can step in and perform attachments of which it is capable. When all else fails, the other modifier routine finishes off anything left over.

Associate adjectives with locative (and possibly temporal) qualities to the nouns they modify.
Associate extracted items with their respective 'gaps.'
If an LP verb or LP noun is present, apply verb or noun LP. If two LP verbs or nouns are present that seek the same PP, use the notion of RA and attach the PP to the last word that seeks it.
Otherwise, if a temporal PP is present, attach it to the most adjacent constituent to the left whose head contains a temporal quality.
Otherwise, if a locative PP is present, attach it to the most adjacent constituent to the left whose head contains a locative quality.
Otherwise, if an OTHER modifier (not a temporal or a locative) is present and if the immediately preceding element is a definite NP that could be part of a partitive expression, then attach the PP to the NP. Otherwise attach to the last occurring verb or non-definite NP.

Conclusion

The study indicates that there seems to be a way of predicting PP attachment in the typed interactive mode of communication by fairly simple means. By using LP for nouns, verbs and prepositions (temporal and locative PPs seek out temporal- or locative-accepting elements) and a variation on the Crain and Steedman notion of presupposition, attachments are essentially always predictable. Correct interpretation of the 724 instances in which there existed structural ambiguity in the attachment of PPs to nouns or verbs occurred as follows:

Verb LP: 228 instances
Noun LP: 183 instances
Temporal prep. LP: 189 instances
Locative prep. LP: 90 instances
Other modifiers (presupposition + RA): 34 instances

Added note - two items were not accounted for: one seemed to be an idiomatic expression; one may possibly have been contextually related.

RA played a role within each preferencing scheme as did a weak notion of plausibility. RA was used as the arbitrator whenever there remained an intra-conflict in a preferencing algorithm (and sometimes when there was inter-conflict between schemes). The use of plausibility to talk about relationships between verbs or nouns and associated PPs was thought to be a necessary notion in that simple searches for only prepositions were deemed to be too weak of a notion. When verb or noun LP was at work, nouns and verbs sought out PPs (as opposed to single prepositions) that as a whole had some attribute(s) necessary to fulfill some semantic requirements. Sometimes PPs also had to be concluded to be of a particular type in order to search out a unique kind of noun or verb. Apparently, PP Lexical Preferencing allowed PPs that were temporal or locative in nature to look for nouns and verbs that bore temporal or locative characteristics, respectively. Referential Success in its pure sense was a poor predictor of attachments. However, the related notions of presupposition regarding definites, indefinites, etc. were good predictors of attachment for a small number of PPs.
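The preference ordering summarized in the Overall Algorithm section above can be pictured as a simple decision procedure. The sketch below is illustrative only and is not from the paper: the Site/PP encoding, the feature flags, and the `attach` function are hypothetical stand-ins for the lexical-preference and feature machinery the paper assumes.

```python
# Illustrative sketch (not from the paper) of the overall preference
# ordering: noun/verb LP with RA tie-breaking, then temporal PP LP,
# then locative PP LP, then the presupposition+RA rule for the rest.
from dataclasses import dataclass, field

@dataclass
class Site:                      # a candidate attachment site (noun or verb)
    word: str
    pos: str                     # "V" or "N"
    definite: bool = False       # definite NP?
    partitive: bool = False      # can head a partitive expression?
    temporal: bool = False       # head has a temporal quality?
    locative: bool = False       # head has a locative quality?
    prefers: set = field(default_factory=set)  # prepositions the word seeks

@dataclass
class PP:
    prep: str
    kind: str = "other"          # "temporal", "locative", or "other"

def attach(pp, sites):
    """`sites` lists candidate sites left to right; rightmost is most recent."""
    lp = [s for s in sites if pp.prep in s.prefers]
    if lp:                       # noun/verb LP, with RA breaking ties
        return lp[-1]
    if pp.kind == "temporal":    # temporal PP seeks a temporal-accepting head
        return next((s for s in reversed(sites) if s.temporal), None)
    if pp.kind == "locative":    # locative PP seeks a locative-accepting head
        return next((s for s in reversed(sites) if s.locative), None)
    last = sites[-1]             # "other" PPs: partitive definite NP, else the
    if last.definite and last.partitive:   # most recent verb or non-definite NP
        return last
    return next((s for s in reversed(sites)
                 if s.pos == "V" or not s.definite), None)

# Example (5): "... a round TRIP to London with a separate FLIGHT from London ..."
sites = [Site("trip", "N", prefers={"from", "to"}),
         Site("flight", "N", prefers={"from", "to"})]
print(attach(PP("from"), sites).word)   # -> "flight": RA picks the later LP noun
```

The ordering of the branches mirrors the algorithm above; only the feature values attached to each word would change from domain to domain.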
Finally, a more cognitive finding resulting from the study was the great predictability of attachment, suggesting that there is something about the typed interactive mode of communication that constrains the possibilities on attachment such that attachment always goes with the unmarked case. There are at least three pressures that may help to make these constraints come about. One is the lack of the spoken element which carries with it intonation patterns and variations in pausing that can influence the ways that one parses. One must rely on only the cues available by written means to aid in disambiguating attachments. Secondly, the added comparative slowness at which interlocutors type and the resulting tendency to leave out unnecessary punctuation marks often useful in disambiguating text makes yet a further constrained subset. Thirdly, a speaker may be aware of the time lag (hence taxation on memory) that exists between typing some modified element and its associated PP. The lag may have an effect on how such pairs are presented. Prominent ways of highlighting the links may depend more on notions such as LP or RA that might not be needed as much in other modes of communication. These factors together may make it necessary for participants in typed interactive communication to rely on a set of default structures that each can cue on easily.

Acknowledgements

We wish to thank Joyce Conner for her time and energy spent in collecting and analyzing the data, Melissa Macpherson for her insights into the notions presented in the paper, and Laurie Whittemore and Jim Barnett for their editing efforts. Also, much of the work on this paper was carried out when Greg Whittemore and Kathy Ferrara were employees of MCC, and thanks goes to MCC personnel, particularly Elaine Rich, who made it possible for the study to be performed.

References

[1] Allerton, D. 1982. VALENCY AND THE ENGLISH VERB. London: Academic Press.
[2] Brunner, H., Whittemore, G., Ferrara, K., and Hsu, J. 1988. An assessment of written/interactive dialogue for information retrieval applications. MCC Technical Report #ACT-HI-245-89.
[3] Crain, S. and Steedman, M. 1984. On not being led up the garden path: the use of context by the psychological syntax processor. In Dowty, D., Karttunen, L., and Zwicky, A. (eds.). NATURAL LANGUAGE PROCESSING. Cambridge: Cambridge University Press.
[4] Ford, M., Bresnan, J., and Kaplan, R. 1982. A competence based theory of syntactic closure. In Bresnan, J. (ed.). THE MENTAL REPRESENTATION OF GRAMMATICAL RELATIONS. Cambridge, MA: MIT Press.
[5] Frazier, L. 1979. On Comprehending Sentences: Syntactic parsing strategies. Ph.D. thesis, University of Massachusetts.
[6] Hirst, G. 1987. SEMANTIC INTERPRETATION AND THE RESOLUTION OF AMBIGUITY. Cambridge: Cambridge University Press.
[7] Kimball, J. 1973. Seven principles of surface structure parsing in natural language. COGNITION 2(1), 1973, 15-47.
[8] Quirk, R., Greenbaum, S., Leech, G., and Svartvik, J. 1972. COMPREHENSIVE GRAMMAR OF THE ENGLISH LANGUAGE. London: Longman.
[9] Rappaport, M. 1983. On the nature of derived nominals. In Levin, L., Rappaport, M., Zaenen, A. (eds.) PAPERS IN LEXICAL FUNCTIONAL GRAMMAR. Indiana University Linguistics Club.
[10] Somers, H. 1987. VALENCY AND CASE IN COMPUTATIONAL LINGUISTICS. In Michaelson, S. and Wilks, Y. (eds.) Vol. 3 of EDINBURGH INFORMATION TECHNOLOGY SERIES. Great Britain: Edinburgh University Press.
[11] Whittemore, G., Ferrara, K., and Brunner, H. 1989.
Post-modifier prepositional phrase ambiguity in written interactive dialogues. MCC Technical Report #ACT-HI-247-89.
[12] Wilks, Y., Huang, X., and Fass, D. 1985. Syntax, preference, and right attachment. PROC. IJCAI-85, Aug. 18-23, Los Angeles, CA, pp. 779-784.
STRUCTURAL DISAMBIGUATION WITH CONSTRAINT PROPAGATION

Hiroshi Maruyama
IBM Research, Tokyo Research Laboratory
5-19 Sanbancho, Chiyoda-ku, Tokyo 102 Japan
[email protected]

Abstract

We present a new grammatical formalism called Constraint Dependency Grammar (CDG) in which every grammatical rule is given as a constraint on word-to-word modifications. CDG parsing is formalized as a constraint satisfaction problem over a finite domain so that efficient constraint-propagation algorithms can be employed to reduce structural ambiguity without generating individual parse trees. The weak generative capacity and the computational complexity of CDG parsing are also discussed.

1 INTRODUCTION

We are interested in an efficient treatment of structural ambiguity in natural language analysis. It is known that "every-way" ambiguous constructs, such as prepositional attachment in English, have a Catalan number of ambiguous parses (Church and Patil 1982), which grows at a faster than exponential rate (Knuth 1975). A parser should be provided with a disambiguation mechanism that does not involve generating such a combinatorial number of parse trees explicitly.

We have developed a parsing method in which an intermediate parsing result is represented as a data structure called a constraint network. Every solution that satisfies all the constraints simultaneously corresponds to an individual parse tree. No explicit parse trees are generated until ultimately necessary. Parsing and successive disambiguation are performed by adding new constraints to the constraint network. Newly added constraints are efficiently propagated over the network by Constraint Propagation (Waltz 1975, Montanari 1976) to remove inconsistent values.

In this paper, we present the basic ideas of a formal grammatical theory called Constraint Dependency Grammar (CDG for short) that makes this parsing technique possible. CDG has a reasonable time bound in its parsing, while its weak generative capacity is strictly greater than that of Context Free Grammar (CFG). We give the definition of CDG in the next section. Then, in Section 3, we describe the parsing method based on constraint propagation, using a step-by-step example. Formal properties of CDG are discussed in Section 4.

2 CDG: DEFINITION

Let a sentence s = w1w2 ... wn be a finite string on a finite alphabet Σ. Let R = {r1, r2, ..., rk} be a finite set of role-ids. Suppose that each word i in a sentence s has k different roles r1(i), r2(i), ..., rk(i). Roles are like variables, and each role can have a pair <a, d> as its value, where the label a is a member of a finite set L = {a1, a2, ..., al} and the modifiee d is either 1 ≤ d ≤ n or a special symbol nil. An analysis of the sentence s is obtained by assigning appropriate values to the n x k roles (we can regard this situation as one in which each word has a frame with k slots, as shown in Figure 1). An assignment A of a sentence s is a function that assigns values to the roles. Given an assignment A, the label and the modifiee of a role x are determined. We define the following four functions to represent the various aspects of the role x, assuming that x is an rj-role of the word i:

Figure 1: Words and their roles.

- pos(x) =def the position i
- rid(x) =def the role id rj
- lab(x) =def the label of x
- mod(x) =def the modifiee of x

We also define word(i) as the terminal symbol occurring at the position i.[1]
An individual grammar G = <Σ, R, L, C> in the CDG theory determines a set of possible assignments of a given sentence, where

- Σ is a finite set of terminal symbols.
- R is a finite set of role-ids.
- L is a finite set of labels.
- C is a constraint that an assignment A should satisfy.

A constraint C is a logical formula of the form

∀x1 x2 ... xp : role; P1 & P2 & ... & Pm

where the variables x1, x2, ..., xp range over the set of roles in an assignment A and each subformula Pi consists only of the following vocabulary:

- Variables: x1, x2, ..., xp
- Constants: elements and subsets of Σ ∪ L ∪ R ∪ {nil, 1, 2, ...}
- Function symbols: word(), pos(), rid(), lab(), and mod()
- Predicate symbols: =, <, >, and ∈
- Logical connectors: &, |, ¬, and ⇒

Specifically, we call a subformula Pi a unary constraint when Pi contains only one variable, and a binary constraint when Pi contains exactly two variables.

The semantics of the functions have been defined above. The semantics of the predicates and the logical connectors are defined as usual, except that comparing an expression containing nil with another value by the inequality predicates always yields the truth value false. These conditions guarantee that, given an assignment A, it is possible to compute whether the values of x1, x2, ..., xp satisfy C in a constant time, regardless of the sentence length n.

Definition

- The degree of a grammar G is the size k of the role-id set R.
- The arity of a grammar G is the number of variables p in the constraint C. Unless otherwise stated, we deal with only arity-2 cases.
- A nonnull string s over the alphabet Σ is generated iff there exists an assignment A that satisfies the constraint C.
- L(G) is a language generated by the grammar G iff L(G) is the set of all sentences generated by a grammar G.

Example

Let us consider G1 = <Σ1, R1, L1, C1> where

- Σ1 = {D, N, V}
- R1 = {governor}
- L1 = {DET, SUBJ, ROOT}
- C1 = ∀xy : role; P1.

The formula P1 of the constraint C1 is the conjunction of the following four subformulas (an informal description is attached to each constraint):

(G1-1) word(pos(x))=D ⇒ ( lab(x)=DET, word(mod(x))=N, pos(x) < mod(x) )
"A determiner (D) modifies a noun (N) on the right with the label DET."

(G1-2) word(pos(x))=N ⇒ ( lab(x)=SUBJ, word(mod(x))=V, pos(x) < mod(x) )
"A noun modifies a verb (V) on the right with the label SUBJ."

(G1-3) word(pos(x))=V ⇒ ( lab(x)=ROOT, mod(x)=nil )
"A verb modifies nothing and its label should be ROOT."

(G1-4) ( mod(x)=mod(y), lab(x)=lab(y) ) ⇒ x=y
"No two words can modify the same word with the same label."

Analyzing a sentence with G1 means assigning a label-modifiee pair to the only role "governor" of each word so that the assignment satisfies (G1-1) to (G1-4) simultaneously. For example, sentence (1) is analyzed as shown in Figure 2 provided that the words "a," "dog," and "runs" are given parts-of-speech D, N, and V, respectively (the subscript attached to the words indicates the position of the word in the sentence).

(1) A1 dog2 runs3.

Figure 2: Assignment satisfying (G1-1) to (G1-4)
governor("a1") = <DET,2>
governor("dog2") = <SUBJ,3>
governor("runs3") = <ROOT,nil>

Thus, sentence (1) is generated by the grammar G1.

[Footnote 1: In this paper, when referring to a word, we purposely use the position (1,2,...,n) of the word rather than the word itself (w1, w2, ..., wn), because the same word can occur in many different positions in a sentence. For readability, however, we sometimes use the notation word_position.]
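As a rough illustration (not code from the paper), the sketch below checks the Figure 2 assignment for "A1 dog2 runs3" against constraints (G1-1) to (G1-4) of the example grammar G1. The dictionary encoding of words, roles, and constraints is a hypothetical simplification of the formalism above.

```python
# Illustrative sketch: verifying the example assignment against G1.
NIL = None
words = {1: "D", 2: "N", 3: "V"}                                # terminal at each position
assignment = {1: ("DET", 2), 2: ("SUBJ", 3), 3: ("ROOT", NIL)}  # governor role values

def word(i): return words[i]
def lab(x):  return assignment[x][0]
def mod(x):  return assignment[x][1]

def unary_ok(x):
    # (G1-1)-(G1-3): category-specific requirements on label and modifiee
    if word(x) == "D":
        return lab(x) == "DET" and mod(x) and word(mod(x)) == "N" and x < mod(x)
    if word(x) == "N":
        return lab(x) == "SUBJ" and mod(x) and word(mod(x)) == "V" and x < mod(x)
    if word(x) == "V":
        return lab(x) == "ROOT" and mod(x) is NIL
    return False

def binary_ok(x, y):
    # (G1-4): no two words modify the same word with the same label
    return not (x != y and mod(x) == mod(y) and lab(x) == lab(y))

ok = all(unary_ok(x) for x in words) and \
     all(binary_ok(x, y) for x in words for y in words)
print(ok)   # True: the assignment satisfies C1, so the sentence is generated
```

Running the same check over every possible assignment of <label, modifiee> pairs would decide whether a string is generated at all, which is exactly what the parsing method of the next section organizes efficiently.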
On the other hand, sentences (2) and (3) are not generated since there are no proper assignments for such sentences.

(2) A runs.
(3) Dog dog runs.

We can graphically represent the parsing result of sentence (1) as shown in Figure 3 if we interpret the governor role of a word as a pointer to the syntactic governor of the word. Thus, the syntactic structure produced by a CDG is usually a dependency structure (Hays 1964) rather than a phrase structure.

Figure 3: Dependency tree

3 PARSING WITH CONSTRAINT PROPAGATION

CDG parsing is done by assigning values to n x k roles, whose values are selected from a finite set L x {1, 2, ..., n, nil}. Therefore, CDG parsing can be viewed as a constraint satisfaction problem over a finite domain. Many interesting artificial intelligence problems, including graph coloring and scene labeling, are classified in this group of problems, and much effort has been spent on the development of efficient techniques to solve these problems. Constraint propagation (Waltz 1975, Montanari 1976), sometimes called filtering, is one such technique. One advantage of the filtering algorithm is that it allows new constraints to be added easily so that a better solution can be obtained when many candidates remain. Usually, CDG parsing is done in the following three steps:

1. Form an initial constraint network using a "core" grammar.
2. Remove local inconsistencies by filtering.
3. If any ambiguity remains, add new constraints and go to Step 2.

In this section, we will show, through a step-by-step example, that the filtering algorithms can be effectively used to narrow down the structural ambiguities of CDG parsing.

The Example

We use a PP-attachment example. Consider sentence (4). Because of the three consecutive prepositional phrases (PPs), this sentence has many structural ambiguities.

(4) Put the block on the floor on the table in the room.

One of the possible syntactic structures is shown in Figure 4.[2]

Figure 4: Possible dependency structure

[Footnote 2: In linguistics, arrows are usually drawn in the opposite direction in a dependency diagram: from a governor (modifiee) to its dependent (modifier). In this paper, however, we draw an arrow from a modifier to its modifiee in order to emphasize that this information is contained in a modifier's role.]

To simplify the following discussion, we treat the grammatical symbols V, NP, and PP as terminal symbols (words), since the analysis of the internal structures of such phrases is irrelevant to the point being made. The correspondence between such simplified dependency structures and the equivalent phrase structures should be clear. Formally, the input sentence that we will parse with CDG is (5).

(5) V1 NP2 PP3 PP4 PP5

First, we consider a "core" grammar that contains purely syntactic rules only. We define a CDG G2a = <Σ2, R2, L2, C2> as follows:

- Σ2 = {V, NP, PP}
- R2 = {governor}
- L2 = {ROOT, OBJ, LOC, POSTMOD}
- C2 = ∀xy : role; P2,

where the formula P2 is the conjunction of the following unary and binary constraints:

(G2a-1) word(pos(x))=PP ⇒ ( word(mod(x)) ∈ {PP,NP,V}, mod(x) < pos(x) )
"A PP modifies a PP, an NP, or a V on the left."

(G2a-2) word(pos(x))=PP, word(mod(x)) ∈ {PP,NP} ⇒ lab(x)=POSTMOD
"If a PP modifies a PP or an NP, its label should be POSTMOD."

(G2a-3) word(pos(x))=PP, word(mod(x))=V ⇒ lab(x)=LOC
"If a PP modifies a V, its label should be LOC."

(G2a-4) word(pos(x))=NP ⇒ ( word(mod(x))=V, lab(x)=OBJ, mod(x) < pos(x) )
"An NP modifies a V on the left with the label OBJ."

(G2a-5) word(pos(x))=V ⇒ ( mod(x)=nil, lab(x)=ROOT )
"A V modifies nothing with the label ROOT."

(G2a-6) mod(x) < pos(y) < pos(x) ⇒ mod(x) < mod(y) < pos(x)
"Modification links do not cross each other."

According to the grammar G2a, sentence (5) has 14 (= Catalan(4)) different syntactic structures. We do not generate these syntactic structures one by one, since the number of the structures may grow more rapidly than exponentially when the sentence becomes long. Instead, we build a packed data structure, called a constraint network, that contains all the syntactic structures implicitly. Explicit parse trees can be generated whenever necessary, but it may take a more than exponential computation time.

Formation of initial network

Figure 5 shows the initial constraint network for sentence (5). A node in a constraint network corresponds to a role. Since each word has only one role governor in the grammar G2, the constraint network has five nodes corresponding to the five words in the sentence. In the figure, the node labeled V1 represents the governor role of the word V1, and so on. A node is associated with a set of possible values that the role can take as its value, called a domain. The domains of the initial constraint network are computed by examining unary constraints ((G2a-1) to (G2a-5) in our example). For example, the label of the role of the word V1 must be ROOT and its modifiee must be nil according to the unary constraint (G2a-5), and therefore the domain of the corresponding node is a singleton set {<ROOT,nil>}. In the figure, values are abbreviated by concatenating the initial letter of the label and the modifiee, such as Rnil for <ROOT,nil>, O1 for <OBJ,1>, and so on.

Figure 5: Initial constraint network (the values Rnil, L1, P2, ... should be read as <ROOT,nil>, <LOC,1>, <POSTMOD,2>, ..., and so on.)

An arc in a constraint network represents a binary constraint imposed on two roles. Each arc is associated with a two-dimensional matrix called a constraint matrix, whose xy-elements are either 1 or 0. The rows and the columns correspond to the possible values of each of the two roles. The value 0 indicates that this particular combination of role values violates the binary constraints. A constraint matrix is calculated by generating every possible pair of values and by checking its validity according to the binary constraints. For example, the case in which governor(PP3) = <LOC,1> and governor(PP4) = <POSTMOD,2> violates the binary constraint (G2a-6), so the L1-P2 element of the constraint matrix between PP3 and PP4 is set to zero.

The reader should not confuse the undirected arcs in a constraint network with the directed modification links in a dependency diagram. An arc in a constraint network represents the existence of a binary constraint between two nodes, and has nothing to do with the modifier-modifiee relationships. The possible modification relationships are represented as the modifiee part of the domain values in a constraint network.

A constraint network contains all the information needed to produce the parsing results. No grammatical knowledge is necessary to recover parse trees from a constraint network. A simple backtrack search can generate the 14 parse trees of sentence (5) from the constraint network shown in Figure 5 at any time.
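As a rough illustration of the data structure just described (not code from the paper), the sketch below builds the domains of the constraint network for sentence (5) from the unary constraints, fills one constraint matrix per pair of roles from the binary no-crossing constraint, and then recovers the consistent assignments by brute-force enumeration, standing in for the simple backtrack search mentioned above. The grammar encoding and helper names are hypothetical.

```python
from itertools import product

# Illustrative sketch: a constraint network for "V1 NP2 PP3 PP4 PP5"
# under the core grammar G2a, plus exhaustive recovery of all parses.
words = {1: "V", 2: "NP", 3: "PP", 4: "PP", 5: "PP"}
LABELS = ["ROOT", "OBJ", "LOC", "POSTMOD"]
POSITIONS = list(words) + [None]            # None plays the role of nil

def unary_ok(i, val):
    lab, mod = val
    w, m = words[i], (words[mod] if mod else None)
    if w == "V":   return mod is None and lab == "ROOT"           # (G2a-5)
    if w == "NP":  return m == "V" and lab == "OBJ" and mod < i    # (G2a-4)
    if w == "PP":  return (m in ("PP", "NP", "V") and mod < i and  # (G2a-1)
                           lab == ("LOC" if m == "V" else "POSTMOD"))  # (G2a-2,3)
    return False

def binary_ok(i, vi, j, vj):                 # (G2a-6): links do not cross
    for (a, va), (b, vb) in [((i, vi), (j, vj)), ((j, vj), (i, vi))]:
        ma, mb = va[1], vb[1]
        if ma is not None and mb is not None and ma < b < a and not (ma <= mb <= a):
            return False
    return True

# Domains from unary constraints; one constraint matrix per pair of roles.
domains = {i: [v for v in product(LABELS, POSITIONS) if unary_ok(i, v)]
           for i in words}
matrix = {(i, j): {(vi, vj): binary_ok(i, vi, j, vj)
                   for vi in domains[i] for vj in domains[j]}
          for i in words for j in words if i < j}

# Enumerate the assignments that satisfy every constraint matrix entry.
def solutions():
    for combo in product(*(domains[i] for i in sorted(words))):
        assign = dict(zip(sorted(words), combo))
        if all(matrix[i, j][assign[i], assign[j]] for (i, j) in matrix):
            yield assign

print(sum(1 for _ in solutions()))           # 14 parses (= Catalan(4))
```

The filtering and constraint-addition steps described next operate on exactly these domains and matrices, pruning values instead of enumerating assignments.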
Therefore, we regard a constraint network as a packed representation of parsing results.

Filtering

A constraint network is said to be arc consistent if, for any constraint matrix, there are no rows and no columns that contain only zeros. A node value corresponding to such a row or a column cannot participate in any solution, so it can be abandoned without further checking. The filtering algorithm identifies such inconsistent values and removes them from the domains. Removing a value from one domain may make another value in another domain inconsistent, so the process is propagated over the network until the network becomes arc consistent. Filtering does not generate solutions, but may significantly reduce the search space. In our example, the constraint network shown in Figure 5 is already arc consistent, so nothing can be done by filtering at this point.

Adding New Constraints

To illustrate how we can add new constraints to narrow down the ambiguity, let us introduce additional constraints (G2b-1) and (G2b-2), assuming that appropriate syntactic and/or semantic features are attached to each word and that the function fe(i) is provided to access these features.

(G2b-1) word(pos(x))=PP, on_table ∈ fe(pos(x)) ⇒ ¬( floor ∈ fe(mod(x)) )
"A floor is not on a table."

(G2b-2) lab(x)=LOC, lab(y)=LOC, mod(x)=mod(y), word(mod(x))=V ⇒ x=y
"No verb can take two locatives."

Note that these constraints are not purely syntactic. Any kind of knowledge, syntactic, semantic, or even pragmatic, can be applied in CDG parsing as long as it is expressed as a unary or binary constraint on word-to-word modifications.

Each value or pair of values is tested against the newly added constraints. In the network in Figure 5, the value P3 (i.e. <POSTMOD,3>) of the node PP4 (i.e., "on the table (PP4)" modifies "on the floor (PP3)") violates the constraint (G2b-1), so we remove P3 from the domain of PP4. Accordingly, corresponding rows and columns in the four constraint matrices adjacent to the node PP4 are removed. The binary constraint (G2b-2) affects the elements of the constraint matrices. For the matrix between the nodes PP3 and PP4, the element in row L1 (<LOC,1>) and column L1 (<LOC,1>) is set to zero, since both are modifications to V1 with the label LOC. Similarly, the L1-L1 elements of the matrices PP3-PP5 and PP4-PP5 are set to zero. The modified network is shown in Figure 6, where the updated elements are indicated by asterisks.

Figure 6: Modified network

Note that the network in Figure 6 is not arc consistent. For example, the L1 row of the matrix PP3-PP4 consists of all zero elements. The filtering algorithm identifies such locally inconsistent values and eliminates them until there are no more inconsistent values left. The resultant network is shown in Figure 7. This network implicitly represents the remaining four parses of sentence (5).

Figure 7: Filtered network

Since the sentence is still ambiguous, let us consider another constraint.

(G2c-1) lab(x)=POSTMOD, lab(y)=POSTMOD, mod(x)=mod(y), on ∈ fe(pos(x)), on ∈ fe(pos(y)) ⇒ x=y
"No object can be on two distinct objects."

This sets the P2-P2 element of the matrix PP3-PP4 to zero. Filtering on this network again results in the network shown in Figure 8, which is unambiguous, since every node has a singleton domain. Recovering the dependency structure (the one in Figure 4) from this network is straightforward.

Figure 8: Unambiguous parsing result

Related Work

Several researchers have proposed variant data structures for representing a set of syntactic structures. Chart (Kaplan 1973) and shared, packed forest (Tomita 1987) are packed data structures for context-free parsing. In these data structures, a substring that is recognized as a certain phrase is represented as a single edge or node regardless of how many different readings are possible for this phrase. Since the production rules are context free, it is unnecessary to check the internal structure of an edge when combining it with another edge to form a higher edge. However, this property is true only when the grammar is purely context-free. If one introduces context sensitivity by attaching augmentations and controlling the applicability of the production rules, different readings of the same string with the same nonterminal symbol have to be represented by separate edges, and this may cause a combinatorial explosion.

Seo and Simmons (1988) propose a data structure called a syntactic graph as a packed representation of context-free parsing. A syntactic graph is similar to a constraint network in the sense that it is dependency-oriented (nodes are words) and that an exclusion matrix is used to represent the co-occurrence conditions between modification links. A syntactic graph is, however, built after context-free parsing and is therefore used to represent only context-free parse trees. The formal descriptive power of syntactic graphs is not known. As will be discussed in Section 4, the formal descriptive power of CDG is strictly greater than that of CFG and hence, a constraint network can represent non-context-free parse trees as well.

Sugimura et al. (1988) propose the use of a constraint logic program for analyzing modifier-modifiee relationships of Japanese. An arbitrary logical formula can be a constraint, and a constraint solver called CIL (Mukai 1985) is responsible for solving the constraints. The generative capacity and the computational complexity of this formalism are not clear.

The above-mentioned works seem to have concentrated on the efficient representation of the output of a parsing process, and lacked the formalization of a structural disambiguation process, that is, they did not specify what kind of knowledge can be used in what way for structural disambiguation. In CDG parsing, any knowledge is applicable to a constraint network as long as it can be expressed as a constraint between two modifications, and an efficient filtering algorithm effectively uses it to reduce structural ambiguities.

4 FORMAL PROPERTIES

Weak Generative Capacity of CDG

Consider the language Lww = {ww | w ∈ (a+b)*}, the language of strings that are obtained by concatenating the same arbitrary string over an alphabet {a,b}. Lww is known to be non-context-free (Hopcroft and Ullman 1979), and is frequently mentioned when discussing the non-context-freeness of the "respectively" construct (e.g. "A, B, and C do D, E, and F, respectively") of various natural languages (e.g., Savitch et al. 1987).
Although there is no context-free grammar that generates Lww, the grammar Gww = <Σ, R, L, C> shown in Figure 9 generates it (Maruyama 1990). An assignment given to a sentence "aabaab" is shown in Figure 10.

Figure 9: Definition of Gww
Σ = {a, b}
L = {l}
R = {partner}
C = conjunction of the following subformulas:
- (word(pos(x))=a ⇒ word(mod(x))=a) & (word(pos(x))=b ⇒ word(mod(x))=b)
- mod(x) = pos(y) ⇒ mod(y) = pos(x)
- mod(x) ≠ pos(x) & mod(x) ≠ nil
- pos(x) < pos(y) < mod(y) ⇒ pos(x) < mod(x) < mod(y)
- mod(y) < pos(y) < pos(x) ⇒ mod(y) < mod(x) < pos(x)

Figure 10: Assignment for a sentence of Lww

On the other hand, any context-free language can be generated by a degree=2 CDG. This can be proved by constructing a constraint dependency grammar GCDG from an arbitrary context-free grammar GCFG in Greibach Normal Form, and by showing that the two grammars generate exactly the same language. Since GCFG is in Greibach Normal Form, it is easy to make a one-to-one correspondence between a word in a sentence and a rule application in a phrase-structure tree. The details of the proof are given in Maruyama (1990). This, combined with the fact that Gww generates Lww, means that the weak generative capacity of CDG with degree=2 is strictly greater than that of CFG.

Computational complexity of CDG parsing

Let us consider a constraint dependency grammar G = <Σ, R, L, C> with arity=2 and degree=k. Let n be the length of the input sentence. Consider the space complexity of the constraint network first. In CDG parsing, every word has k roles, so there are n x k nodes in total. A role can have n x l possible values, where l is the size of L, so the maximum domain size is n x l. Binary constraints may be imposed on arbitrary pairs of roles, and therefore the number of constraint matrices is at most proportional to (nk)^2. Since the size of a constraint matrix is (nl)^2, the total space complexity of the constraint network is O(l^2 k^2 n^4). Since k and l are grammatical constants, it is O(n^4) for the sentence length n.

As the initial formation of a constraint network takes a computation time proportional to the size of the constraint network, the time complexity of the initial formation of a constraint network is O(n^4). The complexity of adding new constraints to a constraint network never exceeds the complexity of the initial formation of a constraint network, so it is also bounded by O(n^4).

The most efficient filtering algorithm developed so far runs in O(ea^2) time, where e is the number of arcs and a is the size of the domains in a constraint network (Mohr and Henderson 1986). Since the number of arcs is at most O((nk)^2), filtering can be performed in O((nk)^2(nl)^2), which is O(n^4) without grammatical constants.

Thus, in CDG parsing with arity 2, both the initial formation of a constraint network and filtering are bounded in O(n^4) time.

5 CONCLUSION

We have proposed a formal grammar that allows efficient structural disambiguation. Grammar rules are constraints on word-to-word modifications, and parsing is done by adding the constraints to a data structure called a constraint network. The initial formation of a constraint network and the filtering have a polynomial time bound whereas the weak generative capacity of CDG is strictly greater than that of CFG. CDG is actually being used for an interactive Japanese parser of a Japanese-to-English machine translation system for a newspaper domain (Maruyama et al. 1990).
A parser for such a wide domain should make use of any kind of information available to the system, including user-supplied information. The parser treats this information as another set of unary constraints and applies it to the constraint network.

References

1. Church, K. and Patil, R. 1982, "Coping with syntactic ambiguity, or how to put the block in the box on the table," American Journal of Computational Linguistics, Vol. 8, No. 3-4.
2. Hays, D.E. 1964, "Dependency theory: a formalism and some observations," Language, Vol. 40.
3. Hopcroft, J.E. and Ullman, J.D. 1979, Introduction to Automata Theory, Languages, and Computation, Addison-Wesley.
4. Kaplan, R.M. 1973, "A general syntactic processor," in: Rustin, R. (ed.) Natural Language Processing, Algorithmics Press.
5. Maruyama, H. 1990, "Constraint Dependency Grammar," TRL Research Report RT0044, IBM Research, Tokyo Research Laboratory.
6. Maruyama, H., Watanabe, H., and Ogino, S. 1990, "An interactive Japanese parser for machine translation," COLING '90, to appear.
7. Mohr, R. and Henderson, T. 1986, "Arc and path consistency revisited," Artificial Intelligence, Vol. 28.
8. Montanari, U. 1976, "Networks of constraints: Fundamental properties and applications to picture processing," Information Science, Vol. 7.
9. Mukai, K. 1985, "Unification over complex indeterminates in Prolog," ICOT Technical Report TR-113.
10. Savitch, W.J. et al. (eds.) 1987, The Formal Complexity of Natural Language, Reidel.
11. Seo, J. and Simmons, R. 1988, "Syntactic graphs: a representation for the union of all ambiguous parse trees," Computational Linguistics, Vol. 15, No. 7.
12. Sugimura, R., Miyoshi, H., and Mukai, K. 1988, "Constraint analysis on Japanese modification," in: Dahl, V. and Saint-Dizier, P. (eds.) Natural Language Understanding and Logic Programming, II, Elsevier.
13. Tomita, M. 1987, "An efficient augmented-context-free parsing algorithm," Computational Linguistics, Vol. 13.
14. Waltz, D.L. 1975, "Understanding line drawings of scenes with shadows," in: Winston, P.H. (ed.) The Psychology of Computer Vision, McGraw-Hill.
MEMORY CAPACITY AND SENTENCE PROCESSING Edward Gibson Department of Philosophy, Carnegie Mellon University Pittsburgh, PA 15213-3890 [email protected] ABSTRACT The limited capacity of working memory is intrinsic to human sentence processing, and therefore must be addressed by any theory of human sentence processing. This paper gives a theory of garden-path effects and pro- cessing overload that is based on simple as- sumptions about human short term memory capacity. 1 INTRODUCTION The limited capacity of working memory is intrinsic to human sentence processing, and therefore must be addressed by any theory of human sentence process- ing. I assume that the amount of short term memory that is necessary at any stage in the parsing process is determined by the syntactic, semantic and pragmatic properties of the structure(s) that have been built up to that point in the parse. A sentence becomes unaccept- able for processing reasons if the combination of these properties produces too great a load for the working memory capacity (cf. Frazier 1985): (1) n E Aixi > K i=1 where: K is the maximum allowable processing load (in processing load units or PLUs), xl is the number of PLUs associated with prop- erty i, n is the number of properties, Ai is the number of times property i appears in the structure in question. Furthermore, the assumptions described above pro- vide a simple mechanism for the explanation of com- mon psycholinguistic phenomena such as garden-path effects and preferred readings for ambiguous sentences. Following Fodor (1983), I assume that the language processor is an automatic device that uses a greedy al- gorithm: only the best of the set of all compatible rep- resentations for an input string are locally maintained from word to word. One way to make this idea explicit is to assume that restrictions on memory allow at most one representation for an input string at any time (see, for example, Frazier and Fodor 1978; Frazier 1979; Marcus 1980; Berwick and Weinberg 1984; Pritchett 1988). This hypothesis, commonly called the serial 39 hypothesis, is easily compatible with the above view of processing load calculation: given a choice between two different representations for the same input string, simply choose the representation that is associated with the lower processing load. The serial hypothesis is just one way of placing local memory restrictions on the parsing model, however. In this paper I will present an alternative formulation of local memory restrictions within a parallel framework. There is a longstanding debate in the psycholinguis- tic literature as to whether or not more than one rep- resentation for an input can be maintained in parallel (see, for example, Kurtzman (1985) or Gorrell (1987) for a history of the debate). It turns out that the par- aUel view appears to handle some kinds of data more directly than the serial view, keeping in mind that the data are often controversial. For example, it is difficult to explain in a serial model why relative processing load increases as ambiguous input is encountered (see, for example, Fodor et al. 1968; Rayner et al. 1983; GorreU 1987). Data that is normally taken to be support for the serial hypothesis includes garden-path effects and the existence of preferred readings of ambiguous input. However, as noted above, limiting the number of allowable representations is only one way of con- straining parallelism so that these effects can also be accounted for in a parallel framework. 
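As a minimal illustration of the load calculation in inequality (1), the sketch below sums the weighted property counts of a structure and compares the result with the capacity K. The property names, PLU costs, and the value of K are invented for the example; the paper leaves them to be determined empirically.

```python
# A minimal sketch of inequality (1): load = sum of A_i * x_i, compared with K.
# Property names, costs, and K below are placeholders, not values from the paper.

def processing_load(property_counts, plu_costs):
    """Sum of A_i * x_i over all properties instantiated in a structure."""
    return sum(count * plu_costs[prop] for prop, count in property_counts.items())

def overloads(property_counts, plu_costs, K):
    """A structure is unprocessable if its load exceeds the capacity K."""
    return processing_load(property_counts, plu_costs) > K

# Hypothetical structure with two instances of one property and one of another,
# under made-up costs and a made-up capacity.
costs = {"thematic_assignment": 1.0, "thematic_reception": 1.5}
structure = {"thematic_assignment": 2, "thematic_reception": 1}
print(processing_load(structure, costs), overloads(structure, costs, K=4.0))
```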
As a result of the plausibility of a parallel model, I propose to limit the difference in processing load that may be present between two structures for the same in- put, rather than limit the number of structures allowed in the processing of an input (cf. Gibson 1987; Gibson and Clark 1987; Clark and Gibson 1988). Thus I as- sume that the human parser prefers one structure over another when the processing load (in PLUs) associated with maintaining the first is markedly lower than the processing load associated with maintaining the sec- ond. That is, I assume there exists some arithmetic preference quantity P corresponding to a processing load, such that if the processing loads associated with two representations for the same string differ by load P, then only the representation associated with the smaller of the two loads is pursued. 1 Given the existence of a lit is possible that the preference factor is a geometric one rather than an arithmetic one. Given a geometric preference factor, one structure is preferred over another when the ratio of their processing loads reaches a threshold value. I explore only the arithmetic possibility in this paper; it is possible that the geometric alternative gives results that are as good, although I leave this issue for future research. preference factor P, it is easy to account for garden-path effects and preferred readings of ambiguous sentences. Both effects occur because of a local ambiguity which is resolved in favor of one reading. In the case of a garden-path effect, the favored reading is not compati- ble with the whole sentence. Given two representations for the same input string that differ in processing load by at least the factor P, only the less computationally expensive structure will be pursued. If that structure is not compatible with the rest of the sentence and the discarded structure is part of a successful parse of the sentence, a garden-path effect results. If the parse is successful, but the discarded structure is compatible with another reading for the sentence, then only a pre- ferred reading for the sentence has been calculated. Thus if we know where one reading of a (temporarily) ambiguous sentence becomes the strongly preferred reading, we can write an inequality associated with this preference: (2) n B ZA,x,- Z ,x, i=1 i=1 where: P is the preference factor (in PLUs), xi is the number of PLUs associated with prop- erty i, n is the number of properties, Ai is the number of times property i appears in the unpreferred structure, Bz is the number of times property i appears in the preferred structure. Given a parsing algorithm together with n proper- ties and their associated processing loads x~ ...xn, we may write inequalities having the form of (1) and (2) corresponding to the processing load at various parse states. An algebraic technique called iinearprogram- ruing can then be used to solve this system of linear inequalities, giving an n-dimensional space for the val- ues ofxi as a solution, any point of which satisfies all the inequalities. In this paper I will concentrate on syntactic properties: 2 in particular, I present two properties based on the 0-Criterion of Government and Binding Theory (Chomsky 1981). 3 It will be shown that these properties, once associated with processing loads, pre- dict a large array of garden-path effects. 
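Before turning to those properties, the preference mechanism just described can be sketched in code: among the co-existing structures for the same input, any structure whose load exceeds the cheapest by at least P PLUs is dropped. The structure names and loads below are hypothetical, and the loads are assumed to have been computed as in (1).

```python
# Sketch of the pruning rule behind inequality (2): discard a structure as soon
# as some co-existing structure for the same input is cheaper by at least P PLUs.

def prune(structures, P):
    """structures: list of (name, load) pairs for the same input prefix."""
    cheapest = min(load for _, load in structures)
    return [(name, load) for name, load in structures if load - cheapest < P]

# Hypothetical loads for two parses of the same prefix: with P = 2.0 both
# survive in parallel; with P = 0.5 only the cheaper one is pursued.
candidates = [("NP-complement parse", 0.0), ("IP-complement parse", 1.0)]
print(prune(candidates, P=2.0))
print(prune(candidates, P=0.5))
```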
Furthermore, it is demonstrated that these properties also make de- 2Note that I assume that there also exist semantic and pragmatic properties which are associated with significant processing loads, but which axe not discussed here. 3In another syntactic theory, similar properties may be ob- tained from the principles that correspond to the 0-Criterion in that theory. For example, the completeness and coherence conditions of Lexical Functional Grammar (Bresnan 1982) would derive properties similar to those derived from the 0-Criterion. The same empirical effects should result from these two sets of properties. sirable predictions with respect to unacceptability due to memory capacity overload. The organization of this paper is given as follows: first, the structure of the underlying parser is described; second, the two syntactic properties are proposed; third, a number of locally ambiguous sentences, in- cluding some garden-paths, are examined with respect to these properties and a solution space for the process- ing loads of the two properties is calculated; fourth, it is shown that this space seems to make the right predic- tions with respect to processing overload; conclusions are given in the final section. 2 THE UNDERLYING PARSER The parser to which the memory limitation constraints apply must construct representations in such a way so that incomplete input will be associated with some structure. Furthermore, the parsing algorithm must, in principle, allow more than one structure for an input string, so that the general constraints described in the previous section may apply to restrict the possibilities. The parsing model that I will assume is an extension of the model described in Clark and Gibson (1988). When a word is input, representations for each of its lexical entries axe built and placed in the buffer, a one cell data structure that holds a set of tree structures. The parsing model contains a second data structure, the stack-set, which contains a set of stacks of buffer cells. The parser builds trees in parallel based on possible attachments made between the buffer and the top of each stack in the stack-set. The buffer and stack-set are formally defined in (3) and (4). (3) A buffer cell is a set of structures { SI,...,S, }, where each Si represents the same segment of the input string. The buffer contains one buffer cell. (4) The stack-set is a set of stacks of buffer cells, where each stack represents the same segment of the input string: 40 { ( { S1,1,1,S1,1,2, ...,Sl,l,nl,l }, { S1,2,1, S1,2,2,..., S1,2,nt,2 } .... { S1,.,,,1,S1,.,1,2 .... , $1,.,, ,.,,., } ) i"{ s.,,,1,s.,1,2, ...,s.,,,..,, ). { s.,2,1, s.,2,2, ...,s.,2,.... } .... ( .... } ) } where: p is the number of stacks; ml is the number of buffer cells in stack i; and nij is the number of tree structures in the jth buffer cell of stack i. The motivation for these data structures is given by the desire for a completely unconstrained parsing algorithm upon which constraints may be placed: this algorithm should allow all possible parser operations to occur at each parse state. There are exactly two parser operations: attaching a node to another node and pushing a buffer cell onto a stack. In order to allow both of these operations to be performed in parallel, it is necessary to have the given data structures: the buffer and the stack-set. 
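A rough rendering of definitions (3) and (4) as data structures is given below. The class and field names are mine, and tree structures are stood in for by strings, since the paper specifies only the abstract organization of the buffer and the stack-set.

```python
# A sketch of definitions (3) and (4). Names are illustrative only; structures
# are represented as strings rather than trees to keep the sketch self-contained.

from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class BufferCell:
    """A set of structures that all span the same segment of the input."""
    structures: Set[str] = field(default_factory=set)

@dataclass
class Stack:
    """A stack of buffer cells; together the cells cover one segmentation
    of the input string."""
    cells: List[BufferCell] = field(default_factory=list)

@dataclass
class StackSet:
    """A set of stacks, each representing the same segment of the input string."""
    stacks: List[Stack] = field(default_factory=list)

# The buffer itself holds a single cell of alternative structures for the newest
# input, e.g. the two lexical entries of an ambiguous word:
buffer = BufferCell({"authors/N", "authors/V"})
```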
For example, consider a parser state in which the buffer is non-empty and the stack-set contains only a single cell stack: (5) Stack-set: { { { $1, ...,Sn } } } Buffer: { Bt, ...,Bin } Suppose that attachments are possible between the buffer and the single stack cell. The structures that result from these attachments will take up a single stack cell. Let us call these resultant structures A1, Az, ...,Ak. If all possible operations are to take place at this parser state, then the contents of the current buffer must also be pushed on top of the current stack. Thus two stacks, both representing the same segment of the input string will result: (6) Stack 1: { { {at,...,ak } } } Stack 2: { { { B1, ...,Bin } { St, ...,S, } } } Since these two stacks break up the same segment of the input string in different ways, the stack-set data structure is necessary. 3 TWO SYNTACTIC PROPERTIES DERIVABLE FROM THE 0-CRITERION Following early work in linguistic theory, I distin- guish two kinds of categories: functional categories and thematic or content categories (see, for example, Fukui and Speas (1986) and Abney (1987) and the ref- erences cited in each). Thematic categories include nouns, verbs, adjectives and prepositions; functional categories include determiners, complementizers, and inflection markers. There are a number of properties that distinguish functional elements from thematic ele- ments, the most crucial being that functional elements mark grammatical or relational features while thematic elements pick out a class of objects or events. I will as- sume as a working hypothesis that only those syntactic properties that have to do with the thematic elements of an utterance are relevant to preferences and overload in processing. One principle of syntax that is directly involved with the thematic content of an utterance in a Government-Binding theory is the 0-Criterion: (7) Each argument bears one and only one 0-role (the- matic role) and each 0-role is assigned to one and only one argument (Chomsky 1981:36). I hypothesize that the human parser attempts to lo- caUy satisfy the 0-Criterion whenever possible. Thus given a thematic role, the parser prefers to assign that role, and given a thematic element, the parser prefers to assign a role to that element. These assumptions are made explicit as the following properties: (8) The Property of Thematic Reception (PTR): Associate a load of XrR PLUs of short term memory to each thematic element that is in a position that can receive a thematic role in some co-existing structure, but whose 0-assigner is not unambiguously identifiable in the structure in question. (9) The Property of Thematic Assignment (PTA): Associate a load of XTA PLUs of short term memory to each thematic role that is not assigned to a node containing a thematic element. Note that the Properties of Thematic Assignment and Reception are stated in terms of thematic elements. Thus the Property of Thematic Reception doesn't apply to functional categories, whether or not they are in positions that receive thematic roles. Similarly, if a thematic role is assigned to a functional category, the Property of Thematic Assignment does not notice until there is a thematic element inside this constituent. 41 4 AMBIGUITY AND THE PROPERTIES OF THEMATIC ASSIGNMENT AND RECEPTION Consider sentence (10) with respect to the Properties of Thematic Assignment and Reception: (10) John expected Mary to like Fred. 
The verb expect is ambiguous: either it takes an NP complement as in the sentence John expected Mary or it takes an IP complement as in (10). 4 Consider the state of the parse of (10) after the word Mary has been processed: (11) a. [re Lvt, John ] [v? expected ~e Mary ]]] b. [tp [~p John ] [vp expected [tp Lvp Mary ] ]]] In (1 la), the NP Mary is attached as the NP com- plement of expected. In this representation there is no load associated with either of the Properties of The- matic Assignment or Reception since no thematic ele- ments need thematic roles and no thematic roles are left unassigned. In (llb), the NP Mary is the specifier of a hypothesized IP node which is attached as the com- plement of the other reading of expected. 5 This rep- resentation is associated with at least xrR PLUs since the NP Mary is in a position that can be associated with a thematic role, the subject position, but whose 0-assigner is not yet identifiable. No load is associated with the Property of Thematic Assignment, however, since both thematic roles of the verb expected are as- signed to nodes that contain thematic elements. Since 4Following current notation in GB Theory, IP (Inflection Phrase) = S and CP (Complementizer Phrase) = S' (Chomsky 1986). 51 assume some form of hypothesis-driven node projec- tion so that noun phrases are projected to those categories that they may specify. Motivation for this kind of projection algo- rithm is given by the processing of Dutch (Frazier 1987) and the processing of certain English noun phrase constructions (Gibson 1989). there is no difficulty in processing sentence (10), the load difference between these two structures cannot be greater than P PLUs, the preference factor in inequality (2). Thus the inequality in (12) is obtained: (12) xrR < P Since the load difference between the two struc- tures is not sufficient to cause a strong preference, both structures are maintained. Note that this is an im- portant difference between the theories presented here and the theory presented in Frazier and Fodor (1978), Frazier (1979) and Pritchett (1988). In each of these theories, only one representation can be maintained, so that either (lla) or (llb) would be preferred. In order to account for the lack of difficulty in parsing (10), Frazier and Pritchett both assume that reanalysis in certain situations is not expensive. No such stipu- lation is necessary in the framework given here: it is simply assumed that all reanalysis is expensive. 6 Consider now sentence (13) with respect to the Prop- erties of Thematic Assignment and Reception: (13) John expected her mother to like Fred. Consider the state of the parse of (13) after the word her has been processed. In one representation the NP her will be attached as the NP complement of expected: (14) [tp [up John ] [vp expected Lvv her ]]] In this representation there is no load associated with either of the Properties of Thematic Assignment or Re- ception since no thematic objects need thematic roles and no thematic roles are left unassigned. In another representation the NP her is the specifier of a hypoth- esized NP which is pushed onto a substack containing the other reading of the verb expected: (15){ { [tp [ueJohn] [vpexpected [tp e]]] } { [~p ~p her ] ] } } This representation is associated with at least xra PLUs since the verb expected has a thematic role to as- sign. 
However, no load is associated with the genitive NP specifier her since its a-assigner, although not yet present, is unambiguously identified as the head of the NP to follow (Chomsky (1986a)). 7 Thus the total load associated with (15) is xra PLUs. Since there is no dif- ficulty in processing sentence (10), the load difference 6See Section 4.1 for a brief comparison between the model proposed here and serial models such as those proposed by Frazier and Fodor (1978) and Pritchett (1988). 7Note that specifiers do not always receive their thematic roles from the categories which they specify. For example, a non-genitive noun phrase may specify any major category. In particular, it may specify an IP or a CP. But the specifier of these categories may receive its thematic role through chain formation from a distant 0-assigner, as in (16): (16) John appears to like beans. Note that there is no NP that corresponds to (16) (Chomsky (1970)): (17) * John's appearance to like beans. 42 between these two structures cannot be greater than P PLUs. Thus the second inequality, (18), is obtained: (18) xra < P Now consider (19): s (19) # I put the candy on the table in my mouth. This sentence becomes ambiguous when the prepo- sition on is read. This preposition may attach as an argument of the verbput or as a modifier of the NP the candy: (20) a. I [vv Iv, Iv put ] Lvv the candy ] [ee on ] ]] b. I [vv Iv, Iv put ] Lvv the candy [ep on ] ] ]] At this point the argument attachment is strongly preferred. However, this attachment turns out to be incompatible with the rest of the sentence. When the word mouth is encountered, no pragmatically coherent structure can be built, since tables are not normally found in mouths. Thus a garden-path effect results. Consider the parse state depicted in (20) with respect to the Properties of Thematic Assignment and Reception. The load associated with the structure resulting from argument attachment is XrA PLUs since, although the a- grid belonging to the verbput is filled, the thematic role assigned by the preposition on remains unassigned. On the other hand, the load associated with the modifier attachment is 2 *XrA +xrR PLUs since 1) both the verb put and the preposition on have thematic roles that need to be assigned and 2) the PP headed by on receives a thematic role in the argument attachment structure, while it receives no such role in the structure under consideration. Thus the difference between the loads associated with the two structures is XrA + XrR PLUs. Since the argument attachment structure is strongly preferred over the other structure, I hypothesize that this load is greater than P PLUs: (21) Xra + XTR > P Now consider the the well-known garden-path sen- tence in (22): (22) # The horse raced past the barn fell. The structure for the input the horse raced is am- biguous between at least the two structures in (23): (23) a. be bvp the horse ] [vp raced ]] b. bp Lvp the Lv, Lv, horse/] [cp Oi raced ] ]] ] Structure (23a) has no load associated with it due to either the PTA or the PTR. Crucially note that the verb raced has an intransitive reading so that no load is required via the Property of Thematic Assignment. 
On the other hand, structure (23b) requires a load of 2 • xrR PLUs since 1) the noun phrase the horse is in a position that can receive a thematic role, but currently does not and 2) the operator Oi is in a position that may be associated with a thematic role, but is not yet sI will prefix sentences that are difficult to parse because of memory limitations with the symbol "#". Hence sen- tences that are unacceptable due to processing overload will be prefixed with "#", as will be garden-path sentences. associated with one. 9 Thus the difference between the processing loads of structures (23a) and (23b) is 2 • xrR PLUs. Since this sentence is a strong garden- path sentence, it is hypothesized that a load difference of 2 • xrR PLUs is greater than the allowable limit, P PLUs: (24) 2 • xrR > P A surprising effect occurs when a verb which op- tionally subcategorizes for a direct object, like race, is replaced by a verb which obligatorily subcategorizes for a direct object, likefind: (25) The bird found in the room was dead. Although the structures and local ambiguities in (25) and (22) are similar, (22) causes a garden-path effect while, surprisingly, (25) does not. To determine why (25) is not a garden-path sentence we need to examine the local ambiguity when the word found is read: (26) a. be Me the bird ] Ire Iv, Iv found ] [He ] ]]] b. [m Lvt, the ~, ~, bird/] [c/, Oi found ] ]] ] The crucial difference between the verb found and the verb raced is that found requires a direct object, while raced does not. Since the 0-grid of the verb found is not filled in structure (26a), this representation is associated with xrA PLUs of memory load. Like structure (23b), structure (26b) requires 2 • xrR PLUs. Thus the difference between the processing loads of structures (26a) and (26b) is 2 *xrR - XTA PLUs. Since no garden-path effect results in (25), I hypothesize that this load is less than or equal to P PLUs: (27) 2 * xrR - XTA <_ P Furthermore, these results correctly predict that sen- tence (28) is not a garden-path sentence either: (28) The bird found in the room enough debris to build a nest. Hence we have the following system of inequalities: (29) a. xrR < P b. XTA < P C. XTA "4-XTR > P d. 2*XTR > P e. 2 * XTR -- XrA < P This system of inequalities is consistent. Thus it identifies a particular solution space. This solution space is depicted by the shaded region in Figure 1. Note that, pretheoretically, there is no reason for this system of inequalities to be consistent. It could have been that the parser state of one of the example sentences forced an inequality that contradicted some previously obtained inequality. This situation would have had one of three implications: theproperties being considered might be incorrect; the properties being considered might be incomplete; or the whole approach 9In fact, this operator will be associated with a thematic role as soon as a gap-positing algorithm links it with the object of the passive participle raced. However, when the attachment is initially made, no such link yet exists: the operator will initially be unassociated with a thematic role. Xrl \ z XrA ~-P / " ~ -xz~-~ P ,e.'- ~R _< P 2xm > P P ~"- Xa-A \ - xrA +x~ >P Figure 1: The Solution Space for the Inequalities in (29) 43 might be incorrect. Since this situation has not yet been observed, the results mutually support one another. 
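The consistency of the system in (29) can be checked mechanically, in the spirit of the linear-programming step described earlier. The sketch below fixes P = 1 (only the loads' relation to P matters here), approximates the strict inequalities with a small margin, and asks SciPy's linprog for any feasible (x_TR, x_TA) point; the choice of tool and of the margin are mine, not the paper's.

```python
# Feasibility check for the system (29a-e). Strict inequalities are approximated
# with a small margin eps, and P is fixed at 1. Assumes SciPy is available.

from scipy.optimize import linprog

P, eps = 1.0, 1e-2
# Variables x = (x_TR, x_TA); constraints written as A_ub @ x <= b_ub.
A_ub = [
    [ 1,  0],   # (29a)  x_TR < P
    [ 0,  1],   # (29b)  x_TA < P
    [-1, -1],   # (29c)  x_TA + x_TR > P
    [-2,  0],   # (29d)  2*x_TR > P
    [ 2, -1],   # (29e)  2*x_TR - x_TA <= P
]
b_ub = [P - eps, P - eps, -(P + eps), -(P + eps), P]

res = linprog(c=[0, 0], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.success, res.x)   # success=True: the inequalities are jointly satisfiable
```

Any point returned lies in the shaded region of Figure 1; an infeasible system would instead signal that one of the properties or load assignments is wrong, as discussed above.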
4.1 A COMPARISON WITH SERIAL MODELS Because serial models of parsing can maintain at most one representation for any input string, they have dif- ficulty explaining the lack of garden-path effects in sentences like (10) and (25): (10) John expected Mary to like Fred. (25) The bird found in the room was dead. As a result of this difficulty Pritchett (1988) proposes the Theta Reanalysis Constraint:l° (30) Theta Reanalysis Constraint (TRC): Syntactic re- analysis which interprets a 0-marked constituent as outside its current 0-Domain and as within an exist- ing 0-Domain of which it is not a member is costly. (31) 0-Domain: c~ is in the 7 0-Domain of/3 iff c~ receives the 7 0-role from/3 or a is dominated by a constituent that receives the 3' 0-role from/3. As a result of the Theta Reanalysis Constraint, the necessary reanalysis in each of (10) and (25) is not expensive, so that no garden-path effect is predicted. Furthermore, the reanalysis in sentences like (22) and (19) violates the TRC, so that the garden-path effects are predicted. However, there are a number of empirical problems with Pritchett's theory. First of all, it turns out that the l°Frazier and Rayner (1982) make a similar stipulation to account for problems with the theory of Frazier and Fodor (1978). However, their account fails to explain the lack of garden-path effect in (25). See Pritcheu (1988) for a description of further problems with their analysis. Theta Reanalysis Constraint as defined in (30) incor- rectly predicts that the sentences in (32) do not induce garden-path effects: (32) a. # The horse raced past the barn was failing. b. # The dog walked to the park seemed small. c. # The boat floated down the river was a canoe. For example, consider (32a). When the auxiliary verb was is encountered, reanalysis is forced. How- ever, the auxiliary verb was does not have a thematic role to assign to its subject, the dog, so the TRC is not violated. Thus Pritchett's theory incorrectly predicts that these sentences do not cause garden-path effects. Other kinds of local ambiguity that do not give the human parser difficulty also pose a challenge to serial parsers. Marcus (1980) gives the sentences in (33) as evidence that any deterministic parser must be able to look ahead in the input string: 11 (33) a. Have the boys taken the exam today? b. Have the boys take the exam today. Any serial parser must be able to account for the lack of difficulty with either of the sentences in (33). It turns out that the Theta Reanalysis Constraint does not help in cases like these: no matter which analysis is pursued first, reanalysis will violate the TRC. 4.2 EMPIRICAL SUPPORT: FURTHER GARDEN-PATH EFFECTS Given the Properties of Thematic Assignment and Re- ception and their associated loads, we may now explain many more garden-path effects. Consider (34): (34) # The Russian women loved died. Up until the last word, this sentence is ambiguous between two readings: one where loved is the matrix verb; and the other where loved heads a relative clause modifier of the noun Russian. The strong preference for the matrix verb interpretation of the word loved can be easily explained if we examine the possible structures upon reading the word women: (35) a. [u, [we the Russian women] ] b. [u, [We the IN, [W, Russian/] [cl, [We Oi ] [tP [We women ] ]] ]] ] Structure (35a) requires xrR PLUs since the NP the Russian women needs but currently lacks a thematic role. 
Structure (35b), on the other hand, requires at least 3 • xTR PLUs since 1) two noun phrases, the Rus- sian and women, need but currently lack thematic roles; and 2) the operator in the specifier position of the mod- ifying Comp phrase can be associated with a thematic role, but currently is not linked to one. Since the dif- ference between these loads is 2 • XTR, a garden-path effect results. Consider now (36): (36) # John told the man that Mary kissed that Bill saw Phil. 11Note that model that I am proposing here is a parallel model, and therefore is nondeterministic. 44 When parsing sentence (36), people will initially analyze the CP that Mary kissed unambiguously as an argument of the verb told. It turns out that this hypothesis is incompatible with the rest of the sentence, so that a garden-path effect results. In order to see how the garden-path effect comes about, consider the parse state which occurs after the word Mary is read: (37) a. [tp ~P John ] Ice Iv, Iv told ] [wp the man ] [cp that ] be ~P Mary ] ]] ]]] b. bp [We John ] [vp [v, [v told ] [wp the [W, [W, man/] [cp bvp O/] that bp bvp Mary ] ]] ]]7 Structure (37a) requires no load by the PTA since the 0-grid of the only 0-assigner is filled with struc- tures that each contain thematic elements. However, the noun phrase Mary requires XrR PLUs by the Prop- erty of Thematic Reception since this NP is in a the- matic position but does not yet receive a thematic role. Thus the total load associated with structure (37a) is xrR PLUs. Structure (37b), on the other hand, requires a load OfXTA +2*XTR since 1) the thematic role PROPOSI- TION is not yet assigned by the verb told; 2) the operator in the specifier position of the CP headed by that is not linked to a thematic role; and 3) the NP Mary is in thematic position but does not receive a thematic role yet. Thus the load difference is xrA +XrR PLUs, enough for the more expensive one to be dropped. Thus only structure (37a) is maintained and a garden-path effect eventually results, since this structure is not compati- ble with the entire sentence. Hence the Properties of Thematic Assignment and Reception make the correct predictions with respect to (36). Consider the garden-path sentence in (38): (38) # John gave the boy the dog bit a dollar. This sentence causes a garden-path effect since the noun phrase the dog is initially analyzed as the direct object of the verb gave rather than as the subject of a relative clause modifier of the NP the boy. This garden- path can be explained in the same way as previous examples. Consider the state of the parse after the NP the dog has been processed: (39) a. be [We John ] [vP Iv, [v gave ][Ne the boy ] [W~, the dog 1]]] b. [u, ~t, John ] [re [v, [v gave ] [wp the [N, [W, boyi ] Ice [we Oi] be [we the dog ] ]] [we ] 777]7 While structure (39a) requires no load at this stage, structure (39b) requires 2 • xrR + XrA PLUs since 1) one thematic role has not yet been assigned by the verb gave; 2) the operator in the specifier position of the CP modifying boy is not linked to a thematic role; and 3) the NP the dog is in a thematic position but does not yet receive a thematic role. Thus structure (39a) is strongly preferred and a garden-path effect results. The garden-path effect in (40) can also be easily explained in this framework: (40) # The editor authors the newspaper hired liked laughed. Consider the state of the parse of (40) after the word authors has been read: (41) a. [o, bop the editor ] [w, Iv, Iv authors ] bee ] ]]] b. 
[n, ~e the be, be, editor/] [cp Lvp Oi ] [11, Me authors ] ]] ]]] The word authors is ambiguous between nominal and verbal interpretations. The structure including the verbal reading is associated with XrA PLUs since the 0-grid for the verb authors includes an unassigned role. Structure (41b), on the other hand, includes three noun phrases, each of which is in a position that may be linked to a thematic role but currently is not linked to any 0-role. Thus the load associated with structure (41b) is 3 • XrR PLUs. Since the difference between the loads associated with structures (41b) and (41a) is so high (3 • XrR -- XTA PLUs), only the inexpensive structure, structure (41a), is maintained. 5 PROCESSING OVERLOAD The Properties of Thematic Assignment and Recep- tion also give a plausible account of the unacceptability of sentences with an abundance of center-embedding. Recall that I assume that a sentence is unacceptable because of short term memory overload if the com- bination of memory associated with properties of the structures built at some stage of the parse of the sen- tence is greater than the allowable processing load K. Consider (42): (42) # The man that the woman that the dog bit likes eats fish. Having input the noun phrase the dog the structure for the partial sentence is as follows: (43) [o, [top the [to, [/¢, mani ] [o, ~p Oi ] that [tP [s,P the [~, ~, womanj ] [cP [NP Oj ] that [lP [NP the dog ] ]]] In this representation there are three lexical noun phrases that need thematic roles but lack them. Fur- thermore, there are two non-lexical NPs, operators, that are in positions that may prospectively be linked to thematic roles. Thus, under my assumptions, the load associated with this representation is at least 5 • xrR PLUs. I assume that these properties are responsible for the unacceptability of this sentence, resulting in the inequality in (44): (44) 5 * xTR > K Note that sentences with only one relative clause modifying the subject are acceptable, as is exemplified in (45) (45) The man that the woman likes eats fish. Since (45) is acceptable, its load is below the max- imum at all stages of its processing. After processing the noun phrase the woman in (45), there are three noun phrases that currently lack 0-roles but may be linked to 0-roles as future input appears. Thus we arrive at the inequality in (46): (46) 3 • XTR <_ K 45 Thus I assume that the maximum processing load that people can handle lies somewhere above 3 • xrR PLUs but somewhere below 5 • xrR PLUs. Although these data are only suggestive, they clearly make the right kinds of predictions. Future research should es- tablish the boundary between acceptability and unac- ceptability more precisely. 6 CONCLUSIONS Since the structural properties that are used in the for- marion of the inequalities are independently motivated, and the system of inequalities is solvable, the theory of human sentence processing presented here makes strong, testable predictions with respect to the process- ability of a given sentence. Furthermore, the success of the method provides empirical support for the particu- lar properties used in the formation of the inequalities. Thus a theory of PLUs, the preference factor P and the overload factor K provides a unified account of 1) acceptability and relative acceptability; 2) garden-path effects; and 3) preferred readings for ambiguous input. 
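To summarize the overload side of the account in code: the test of Section 5 reduces to comparing a count of pending thematic receptions with K, as in (44) and (46). The counts 5 and 3 below come from the discussion of (42) and (45); the values of x_TR and K are placeholders.

```python
# Sketch of the overload check from Section 5: compare count * x_TR with K.
# The counts come from the paper's examples; x_TR and K are free parameters here.

def overloaded(pending_receptions, x_TR, K):
    return pending_receptions * x_TR > K

# With, say, x_TR = 1 and a capacity between 3*x_TR and 5*x_TR (K = 4 here),
# the doubly centre-embedded prefix of (42) overloads while the singly
# embedded prefix of (45) does not.
x_TR, K = 1.0, 4.0
print(overloaded(5, x_TR, K))   # True  -> unacceptable, as in (42)
print(overloaded(3, x_TR, K))   # False -> acceptable, as in (45)
```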
7 ACKNOWLEDGEMENTS I would like to thank Robin Clark, Dan Everett, Rick Kazman, Howard Kurtzman and Eric Nyberg for com- ments on earlier drafts of this work. All remaining errors are my own. 8 REFERENCES Abney, Stephen P. 1987 The English Noun Phrase in its Sentential Aspect. Ph.D. Thesis, MIT, Cam- bridge, MA. Berwick, Robert C. and Weinberg, Amy S. 1984 The Grammatical Basis for Linguistic Performance. MIT Press, Cambridge, MA. Bresnan, Joan 1982 The Mental Representation of Grammatical Relations. MIT Press, Cambridge, MA. Chomsky, Noam 1970 Remarks on Nominalization. In R. Jacobs and P. Rosenbaum (eds.), Readings in English Transformational Grammar, Ginn, Waltham, MA: 184-221. Chomsky, Noam 1981 Lectures on Government and Binding. Foris, Dordrecht, The Netherlands. Chomsky, Noam 1986 Barriers. Linguistic Inquiry Monograph 13, MIT Press, Cambridge, MA. Clark, Robin and Gibson, Edward 1988 A Parallel Model for Adult Sentence Processing. In: Pro- ceedings of the Tenth Cognitive Science Confer- ence, McGill University, Montreal, Quebec:270- 276. Fodor, Jerry A. 1983 Modularity of Mind. MIT Press, Cambridge, MA. Fodor, Jerry A.; Garrett, Merrill F. and Beret, Tom G. 1968 Some Syntactic Determinants of Senten- tial Complexity. Perception and Psychophysics 2:289-96. Frazier, Lyn 1979 On Comprehending Sentences: Syntactic Parsing Strategies. Ph.D. Thesis, Uni- versity of Massachusetts, Amherst, MA. Frazier, Lyn 1985 Syntactic Complexity. In Dowty, David, Karttunen, Lauri, and Arnold Zwicky (eds.), Natural Language Processing: Psycho- logical, Computational and Theoretical Perspec- tives, Cambridge University Press, Cambridge, United Kingdom: 129-189. Frazier, Lyn 1987 Syntactic Processing Evidence from Dutch. Natural Language and Linguistic Theory 5:519-559. Frazier, Lyn and Fodor, Janet Dean 1978 The Sausage Machine: A New Two-stage Parsing Model. Cog- nition 6:291-325. Fukui, Naoki and Speas, Margaret 1986 Specifiers and Projections. MIT Working Papers in Linguistics 8, Cambridge, MA: 128-172. Gibson, Edward 1987 Garden-Path Effects in a Parser with Parallel Architecture. In: Proceedings of the Fourth Eastern States Conference on Linguistics, The Ohio State University, Columbus, OH:88-99. Gibson, Edward 1989 Parsing with Principles: Pre- dicting a Phrasal Node Before Its Head Appears. In: Proceedings of the First International Work- shop on Parsing Technologies, Carnegie Mellon University, Pittsburgh, PA:63-74. Gibson, Edward and Clark, Robin 1987 Positing Gaps in a Parallel Parser. In: Proceedings of the Eigh- teenth North East Linguistic Society Conference, University of Toronto, Toronto, Ontario: 141-155. Gorrell, Paul G. 1987 Studies of Human Syntactic Processing: Ranked-Parallel versus Serial Mod- els. Ph.D. Thesis, University of Connecticut, Storrs, CT. Kurtzman, Howard 1985 Studies in Syntactic Ambi- guity Resolution. Ph.D. Thesis, MIT, Cambridge, MA. Marcus, Mitchell P. 1980 A Theory of Syntactic Recognition for Natural Language. MIT Press, Cambridge, MA. Pritchett, Bradley 1988 Garden Path Phenomena and the Grammatical Basis of Language Processing. Language 64:539-576. Rayner, Keith; Carlson, Marcia and Frazier, Lyn 1983 The Interaction of Syntax and Semantics during Sentence Processing: Eye Movements in the Analysis of Semantically Biased Sentences. Journal of Verbal Learning and Verbal Behavior 22:358-374. 46
TRANSFORMING SYNTACTIC GRAPHS INTO SEMANTIC GRAPHS* Hae-Chang Rim Jungyun Seo Robert F. Simmons Department of Computer Sciences and Artificial Intelligence Laboratory Taylor Hall 2.124, University of Texas at Austin, Austin, Texas 78712 ABSTRACT In this paper, we present a computational method for transforming a syntactic graph, which represents all syntactic interpretations of a sentence, into a semantic graph which filters out certain interpretations, but also incorporates any remaining ambiguities. We argue that the result- ing ambiguous graph, supported by an exclusion matrix, is a useful data structure for question an- swering and other semantic processing. Our re- search is based on the principle that ambiguity is an inherent aspect of natural language communi- cation. INTRODUCTION In computing meaning representations from natural language, ambiguities arise at each level. Some word sense ambiguities are resolved by syn- tax while others depend on the context of dis- course. Sometimes, syntactic ambiguities are re- solved during semantic processing, but often re- main even through coherence analysis at the dis- course level. Finally, after syntactic, semantic, and discourse processing, the resulting meaning structure may still have multiple interpretations. For example, a news item from Associated Press, November 22, 1989, quoted a rescued hostage, "The foreigners were taken to the Estado Mayor, army headquarters. I left that hotel about quarter to one, and by the *This work is sponsored by the Army Research Office under contract DAAG29-84-K-0060. 47 time I got here in my room at quarter to 4 and turned on CNN, I saw myself on TV getting into the little tank," Blood said. The article was datelined, Albuquerque N.M. A first reading suggested that Mr. Blood had been flown to Albuquerque, but further thought sug- gested that "here in my room" probably referred to some sleeping section in the army headquarters. But despite the guess, ambiguity remains. In a previous paper [Seo and Simmons 1989] we argued that a syntactic graph -- the union of all parse trees -- was a superior representation for further semantic processing. It is a concise list of syntactically labeled triples, supported by an ex- clusion matrix to show what pairs of triples are incompatible. It is an easily accessible represen- tation that provides succeeding semantic and dis- course processes with complete information from the syntactic analysis. Here, we present methods for transforming the syntactic graph to a func- tional graph (one using syntactic functions, SUB- JECT, OBJECT, IOBJECT etc.) and for trans- forming the functional graph to a semantic graph of case relations. BACKGROUND Most existing semantic processors for natural language systems (NLS) have depended on a strat- egy of selecting a single parse tree from a syntac- tic analysis component (actual or imagined). If semantic testing failed on that parse, the system would sel~,ct another -- backing up if using a top- down parser, or selecting another interpretation vpp(8) ppn 0 1 (SNP saw John) 1 2 (VNP saw man) 2 3 (DET man a) 3 4 (NPP man on) 4 5 (VPP saw on) 5 6 (DET hill the) 6 , 7 (PPN on hill) 7 S (VPP saw with) S 9 (NPP man with) 9 ,(11) 10 (NPP hill with) 10 11 (PPN with telescope) 11 12 (DET telescope a) 12 0 1 ~13 4 51617 1 1 S 9 10 11 12 1 1 1 1 1 1 Figure 1: Syntactic Graph and Exclusion Matrix for "John saw a man on the hill with a telescope." from an all-paths chart. Awareness has grown in recent years that this strategy is not the best. 
At- tempts by Marcus [1980] to use a deterministic (look-ahead) tactic to ensure a single parse with- out back-up, fail to account for common, garden- path sentences. In general, top-down parsers with backup have unpleasant implications for complex- ity, while efficient all-paths parsers limited to com- plexity O(N 3) [Aho and Ullman 1972, Early 1970, Tomita 1985] can find all parse trees in little more time than a single one. If we adopt the economical parsing strategy of obtaining an all-paths parse, the question remains, how best to use the parsing information for subsequent processing. Approaches by Barton and Berwick [1985] and Rich et al. [1987] among others have suggested what Rich has called ambiguity procrastina- tion in which a system provides multiple potential syntactic interpretations and postpones a choice until a higher level process provides sufficient in- formation to make a decision. Syntactic repre- sentations in these systems are incomplete and may not always represent possible parses. Tomita [1985] suggested using a shared-packed-forest as an economical method to represent all and only the parses resulting from an all-paths analysis. Unfor- tunately, the resulting tree is difficult for a person to read, and must be accessed by complex pro- grams. It was in this context that we [Seo and Simmons 1989] decided that a graph composed of the union of parse trees from an all-paths parser would form a superior representation for subse- quent semantic processing. 48 SYNTACTIC GRAPHS In the previous paper we argued that the syntac- tic graph supported by an exclusion matrix would provide all and "only" the information given by a parse forest. 1 Let us first review an example of a syntactic graph for the following sentence: Exl) John saw a man on the hill with a tele- scope. There are at least five syntactic interpreta- tions for Exl from a phrase structure grammar. The syntactic graph is represented as a set of dominator-modifier triples 2 as shown in the mid- dle of Figure 1 for Exl. Each triple consists of a label, a head-word, and a modifier-word. Each triple represents an arc in a syntactic graph in the left of Figure 1. An arc is drawn from the head-word to the modifier-word. The label of each triple, SNP, VNP, etc. is uniquely determined according to the grammar rule used to generate the triple. For example, a triple with the label SNP is generated by the grammar rule, SNT --+ NP + VP, VPP is from the rule VP --+ VP ÷ PP, and PPN from PP ---+ Prep÷ NP, etc. We can notice that the ambiguities in the graph are signalled by identical third terms (i.e., the same modifier-words with the same sentence posi- tion) in triples because a word cannot modify two different words in one syntactic interpretation. In 1 We proved the "all" but have discovered that in certain cases to be shown later, the transformation to a semantic graph may result in arcs that do not occur in any complete analysis. 2Actually each word in the triples also includes notation for position, and syntactic class and features of the word. Figure 2: Syntactic Graph and Exclusion Matrix for "The monkey lives in tropical jungles near rivers and streams." a graph, each node with multiple in-arcs shows an ambiguous point. There is a special arc, called the root are, which points to the head word of the sentence. The arc (0) of the syntactic graph in Figure 1 represents a root arc. 
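To make the triple-and-matrix representation concrete before going further, the sketch below encodes the Figure 1 triples and counts the readings they admit. The word positions are added here (the paper notes that real triples carry position information), and the only non-trivial exclusion entered is the pair of triples 5 and 9 discussed in the next paragraphs; every other exclusion follows from identical third terms.

```python
# A sketch of the Figure 1 data: the labelled triples, the one non-trivial
# exclusion, and a brute-force count of the readings they admit. Word positions
# are added by hand for this example.

from itertools import product

triples = {
    1:  ("SNP", "saw",  ("John", 0)),      2:  ("VNP", "saw",  ("man", 3)),
    3:  ("DET", "man",  ("a", 2)),         4:  ("NPP", "man",  ("on", 4)),
    5:  ("VPP", "saw",  ("on", 4)),        6:  ("DET", "hill", ("the", 5)),
    7:  ("PPN", "on",   ("hill", 6)),      8:  ("VPP", "saw",  ("with", 7)),
    9:  ("NPP", "man",  ("with", 7)),      10: ("NPP", "hill", ("with", 7)),
    11: ("PPN", "with", ("telescope", 9)), 12: ("DET", "telescope", ("a", 8)),
}
exclusive = {frozenset({5, 9})}   # the non-trivial entry of the exclusion matrix

# Triples sharing a modifier word at the same position are the trivially
# exclusive alternatives: each reading picks exactly one per group.
groups = {}
for i, (_label, _head, modifier) in triples.items():
    groups.setdefault(modifier, []).append(i)

readings = [
    choice for choice in product(*groups.values())
    if not any(frozenset(pair) in exclusive for pair in product(choice, repeat=2))
]
print(len(readings))   # 5, the number of interpretations reported for Ex1
```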
A root arc contains information (not shown) about the modalities of the sentence such as voice: passive, active, mood: declarative or wh-question, etc. Notice that a sen- tence may have multiple root arcs because of syn- tactic ambiguities involving the head verb. One interpretation can be obtained from a syn- tactic graph by picking up a set of triples with no repeated third terms. In this example, since there are two identical occurrences of on and three of with, there are 2.3 = 6 possible sentence interpre- tations in the graph represented above. However, there must be only five interpretations for Exl. The reason that we have more interpretations is that there are triples, called exclusive triples, which cannot co-occur in any syntactic interpre- tation. In this example, the triple (vpp saw on) and (npp man with) cannot co-occur since there is no such interpretation in this sentence. 3 That's why a syntactic graph must maintain an exelu- slon matrix. An exclusion matrix, (Ematrix), is an N • N matrix where N is the number of triples. If Ematrix(i,j) = 1 then the i-th and j-th triple 3Once the phrase "on the hill" is attached to saw, "with a telescope" must be attached to either hill or saw, not m0~n. cannot co-occur in any reading. The exclusion ma- trix for Exl is shown in the right of Figure 1. In Exl, the 'triples 5 and 9 cannot co-occur in any interpretation according to the matrix. Trivially exclusive triples which share the same third term are also marked in the matrix. It is very impor- tant to maintain the Ematrix because otherwise a syntactic graph generates more interpretations than actually result from the parsing grammar. Syntactic graphs and the exclusion matrix are computed from the chart (or forest) formed by an all-paths chart parser. Grammar rules for the parse are in augmented phrase structure form, but are written to minimize their deviation from a pure context-free form, and thus, limit both the conceptual and computational complexity of the analysis system. Details of the graph form, the grammar, and the parser are given in (Seo and Simmons 1989). COMPUTING SEMANTIC GRAPHS FROM SYNTACTIC GRAPHS 49 An important test of the utility of syntactic graphs is to demonstrate that they can be used di- rectly to compute corresponding semantic graphs that represent the union of acceptable case analy- ses. Nothing would be gained, however, if we had to extract one reading at a time from the syntactic graph, transform it, and so accumulate the union of case analyses. But if we can apply a set of rules ,ubj(~s) s~S" ~~p(lO) 0 1 2 3 9 t01 112 14 15 16 17 50 51 52 53 54 55 0 1 2 3 9 1012141516175051]525354:55 1 1 1 1 1 1 1 i 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 11 1 1 1 1 1 1 1 1 1 1 1 1 1 i 1 1 .1 1 1 1 1 1 1 1 i1 1 1 1 1 1 1 1 1 i 1 1 I 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 !1 1 1 1 Figure 3: Functional Graph and Exclusion Matrix for "The monkey lives in tropical jungles near rivers and streams." directly to the syntactic graph, mapping it into the semantic graph, then using the graph can result in a significant economy of computation. We compute a semantic graph in a two-step pro- cess. First, we transform the labeled dependency triples resulting from the parse into functional no- tation, using labels such as subject, object, etc. and transforming to the canonical active voice. This results in a functional graph as shown in Figure 3. Second, the functional graph is trans- formed into the semantic graph of Figure 5. 
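Before turning to the details of that second step, here is a sketch of how the first step's many-to-one rewriting might look in code. The passive rule mirrors the Prolog inference form given below; the `cooccur` check stands for the exclusion-matrix consultation described in the text, and the function and argument names are mine rather than the system's.

```python
# A sketch of a many-to-one rewriting rule of the kind used in the first step.
# Triples are (label, head, modifier) tuples held in a set; `cooccur` is assumed
# to be supplied by the graph and implements the exclusion-matrix check.

def passive_subject_rule(triples, cooccur):
    """(voice X passive) & (vpp X by) & (ppn by Y) => (subj X Y),
    provided the three source triples can all co-occur."""
    out = []
    for (l1, x, m1) in triples:
        if (l1, m1) != ("voice", "passive"):
            continue
        if ("vpp", x, "by") not in triples:
            continue
        for (l2, h2, y) in triples:
            if (l2, h2) != ("ppn", "by"):
                continue
            sources = [(l1, x, m1), ("vpp", x, "by"), (l2, h2, y)]
            if cooccur(sources):
                out.append(("subj", x, y))
    return out

# The same machinery collapses prepositional pairs, e.g. (vpp lives in) and
# (ppn in jungles) into the single functional triple (in lives jungles).
```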
Dur- ing the second transformation, filtering rules are applied to reduce the possible syntactic interpre- tations to those that are semantically plausible. COMPUTING FUNCTIONAL GRAPHS To determine SUB, OBJ and IOBJ correctly, the process checks the types of verbs in a sentence and its voice, active or passive. In this process, a syntactic triple is transformed into a functional triple: for example, (snp X Y) is transformed into (subj X Y) in an active sentence. However, some transformation rules map several syntactic triples into one functional triple. For example, in a passive sentence, if three triples, (voice X passive), (vpp X by), and (ppn by Y), are in a syntactic graph and they are not ex- clusive with each other, the process produces one functional triple (subj X Y). Since prepositions are used as functional relation names, two syn- tactic triples for a prepositional phrase are also reduced into one functional triple. For example, 50 (vpp lives in) and (ppn in jungles) are trans- formed into (in lives jungles). These transfor- mations are represented in Prolog rules based on general inference forms such as the following: (stype X declarative) & (voice X passive) & (vpp X by) & (ppn by Y) => (subject X Y) (vpp X P) ~ (ppn P Y) &: not(volce X pas- sive) => (P X Y). When the left side of a rule is satisfied by a set of triples from the graph, the exclusion matrix is consulted to ensure that those triples can all co- occur with each other. This step of transformation is fairly straight- toward and does not resolve any syntactic ambigu- ities. Therefore, the process must carefully trans- form the exclusion matrix of the syntactic graph into the exclusion matrix of the functional graph so that the transformed functional graph has the same interpretations as the syntactic graph has 4. Intuitively, if a functional triple, say F, is pro- duced from a syntactic triple, say T, then F must be exclusive with any functional triples pro- duced from the syntactic triples which are exclu- sive with T. When more than one syntactic triple, say T[s are involved in producing one functional triple, say F1, the process marks the exclusion 4At a late stage in our research we noticed that we could have written our grammar to result directly in syntactic- functional notation; but one consequence would be increas- ing the complexity of our grammar rules, requiring frequent tests and transformations, thus increasing conceptual and computational complexities. N : the implausible triple which will be removed. The process starts by calling remove-all-Dependent-arcs([N]). remove-all-dependent-arcs(Arcs-to-be-removed) for all Arc in Arcs-to-be-removed do begin i] Arc is not removed yet then find all arcs pointing to the same node as Arc: call them Alt-arcs find arcs which are exclusive with every arc in Alt-arcs, call them Dependent-arcs remove Arc remove entry of Arc from the exclusion matrix remove-all-Dependent-arcs(Dependent-arcs) end Figure 4: Algorithm for Finding Dependent Relations matrix so that F1 can be exclusive with all func- tional triples which are produced from the syntac- tic triples which are exclusive with any of T/~s. The syntactic graph in Figure 2 has five possible syntactic interpretations and all and only the five syntactic-functional interpretations must be con- tained in the transformed functional graph with the new exclusion matrix in Figure 3. Notice that, in the functional graph, there is no single, func- tional triple corresponding to the syntactic triples, (~)-(8), (11) and (13). 
Those syntactic triples are not used in one-to-one transformation of syntac- tic triples, but are involved in many-to-one trans- formations to produce the new functional triples, (50)-(55), in the functional graph. COMPUTING SEMANTIC GRAPHS Once a functional graph is produced, it is trans- formed into a semantic graph. This transforma- tion consists of the following two subtasks: given a functional triple (i.e., an are in Figure 3), the process must be able to (1) check if there is a se- mantically meaningful relation for the triple (i.e., co-occurrence constraints test), (2) if the triple is semantically implausible, find and remove all func- tional triples which are dependent on that triple. The co-occurrence constraints test is a matter of deciding whether a given functional triple is se- mantically plausible or not. 5 The process uses a type hierarchy for real world concepts and rules that state possible relations among them. These relations are in a case notation such as agt for agent, ae for affected-entity, etc. For example, the 5 Eventually we will incorporate more sophisticated tests as suggested by Hirst(1987) and others, but our current emphasis is on the procedures for transforming graphs. 51 subject(I) arc between lives and monkey numbered (1) in Figure 3 is semantically plausible since an- imal can be an agent of live if the animal is a subj of the live. However, the subject arc between and and monkey numbered (15) in Figure 3 is se- mantically implausible, because the relation con- jvp connects and and streams, and monkey can not be a subject of the verb streams. In our knowledge base, the legitimate agent of the verb streams is a flow-thing such as a river. When a given arc is determined to be seman- tically plausible, a proper case relation name is assigned to make an arc in the semantic graph. For example, a case relation agt is found in our knowledge base between monkey and lives under the constraint subject. If a triple is determined to be semantically im- plausible, then the process removes the triple. Let us explain the following definition before dis- cussing an interesting consequence. Definition 1 A triple, say T1, is dependent on another triple, say T2, if every interpretation which uses 7"1 always uses T2. Then, when a triple is removed, if there are any triples which are dependent on the removed triple, those triples must also be removed. Notice that the dependent on relation between triples is transitive. Before presenting the algorithm to find depen- dent triples of a triple, we need to discuss the fol- lowing property of a functional graph. Property 1 Each semantic interpretation de- rived from a functional graph must contain every node in each position once and only once. (2 attr(S) ~rles near(51) 0 1 2 3 9 10 12 50 51 52 53 54 55 1 1 1 1 1 1 1 1 1 1 11 I1 1 1 1 1 1 1 1 1 1 1 1 1 i 1 1]1 1 1 1 1 !1 1 1 Figure 5: Semantic Graph and Exclusion Matrix for "The monkey lives in tropical jungles near rivers and streams." Here the position means the position of a word in a sentence. This property ensures that all words in a sentence must be used in a semantic interpre- tation once and only once. The next property follows from Property 1. Property 2 Ira triple is determined to be seman- tically implausible, there must be at least one triple which shares the same modifier-word. Otherwise, the sentence is syntactically or semantically ill- formed. Lemma 1 Assume that there are n triples, say 7"1 .... , Tn, sharing a node, say N, as a modifier- word (i.e. 
third term) in a functional graph. If there is a triple, say T, which is exclusive with T1,..., T/-1, Ti+ l ..... Tn and is not exclusive with T~, T is dependent on Ti. This lemma is true because T cannot co-occur with any other triples which have the node N as a modifier-word except T/in any interpretation. By Property 1, any interpretation which uses T must use one triple which has N as a modifier-word. Since there is only one triple, 7~ that can co-occur with T, any interpretations which use T use T/.[3 Using the above lemma, we can find triples which are dependent on a semantically implausible triple directly from the functional graph and the corresponding exclusion matrix. An algorithm for finding a set of dependent relations is presented in Figure 4. For example, in the functional graph in Fig- ure 3, since monkey cannot be an agt of streams, the triple (15.) is determined to be semantically 52 implausible. Since there is only one triple, (1), which shares the same modifier-word, monkey, the process finds triples which are exclusive with (1). Those are triples numbered (14), (15), (16), and (17). Since these triples are dependent on (16), these triples must also be removed when (16) is re- moved. Similarly, when the process removes (14), it must find and remove all dependent triples of (14). In this way, the process cascades the remove operation by recursively determining the depen- dent triples of an implausible triple. Notice that when one triple is removed, it removes possibly multiple ambiguous syntactic interpretations--two interpretations are removed by removing the triple (16) in this example, but for the sentence, It is transmitted by eating shell- fish such as oysters living in infected waters, or by drinking infected water, or by dirt from soiled fingers, 189 out of 378 ambiguous syntactic inter- pretations are removed when the semantic relation (rood water drinking) is rejected, e This saves many operations which must be done in other ap- proaches which check syntactic trees one by one to make a semantic structure. The resulting seman- tic graph and its exclusion matrix derived from the functional graph in Figure 3 have three seman- tic interpretations and are illustrated in Figure 5. This is a reduction from five syntactic interpre- tations as a result of filtering out the possibility, (agt streams monkey). There is one arc in Figure 5, labeled near(51), that proved to be of considerable interest to us. 6In "infec'~ed drinking water", (rood water drinking) is plausible but not in "drinking infected water". If we attempt to generate a complete sentence us- ing that arc, we discover that we can only pro- duce, "The monkey lives in tropical jungles near rivers." There is no way that that a generation with that arc can include "and streams" and no sentence with "and streams" can use that arc. The arc, near(51), shows a failure in our ability to rewrite the exclusion matrix correctly when we removed the interpretation "the monkey lives ... and streams." There was a possibility of the sen- tence, "the monkey lives in jungles, (lives) near rivers, and (he) streams." The redundant arc was not dependent on subj(16) (in Figure 3) and thus remains in the semantic graph. The immediate consequence is simply a redundant arc that will not do harm; the implication is that the exclusion matrix cannot filter certain arcs that are indirectly dependent on certain forbidden interpretations. 
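Before returning to that arc, here is a runnable rendering of the Figure 4 procedure that produces these cascading removals. The graph interface used here (arcs, in_arcs, modifier, exclusive, remove, is_removed) is an assumption of this sketch, since the paper specifies only the algorithm itself; the guard on empty alternative sets reflects Property 2.

```python
# A runnable rendering of the Figure 4 algorithm for cascading removal of
# dependent arcs. The graph interface is assumed; graph.arcs() is taken to
# return only the arcs not yet removed.

def remove_all_dependent_arcs(graph, arcs_to_remove):
    for arc in arcs_to_remove:
        if graph.is_removed(arc):
            continue
        # The alternatives to `arc`: other arcs pointing at the same modifier node.
        alt_arcs = [a for a in graph.in_arcs(graph.modifier(arc)) if a != arc]
        # By Lemma 1, arcs that are exclusive with every alternative depend on `arc`.
        # (If there are no alternatives, the sentence is ill-formed -- Property 2.)
        dependent = [b for b in graph.arcs()
                     if b != arc and alt_arcs
                     and all(graph.exclusive(b, a) for a in alt_arcs)]
        graph.remove(arc)    # also clears the arc's row and column in the Ematrix
        remove_all_dependent_arcs(graph, dependent)

# Applied to the semantically implausible subject arc of Figure 3, this cascade
# removes its dependent arcs exactly as described in the example above.
```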
DISCUSSION AND CONCLUSION The utility of the resultant semantic graph can be appreciated by close study of Figure 5. The graph directly answers the following questions, (assuming they have been parsed into case nota- tion): • Where does the monkey live? 1. in tropical jungles near rivers and streams, 2. near rivers and streams, 3. in tropical jungles near rivers, 4. in tropical jungles. • Does the monkey live in jungles? Yes, by agt(1) and in(53) which are not exclusive with each other. • Does the monkey live in rivers? No, because in(52) is exclusive with conj(lO), and in(SS) is pointing to jungles not rivers. • Does the monkey live near jungles? No, be- cause near(50) and conj(12) are exclusive, so no path from live through near(50) can go through eonj(12) to reach jungle, and the other path from live through near(51) goes to rivers which has no exiting path to jungle. Thus, by matching paths from the question through the graph, and ensuring that no arc in the answering path is forbidden to co-occur with any other, questions can be answered directly from the graph. In conclusion, we have presented a computa- tional method for directly computing semantic graphs from syntactic graphs. The most crucial and economical aspect of the computation is the 53 capability of applying tests and transformations directly to the graph rather than applying the rules to one interpretation, then another, and an- other, etc. When a semantic filtering rule rejects one implausible relation, then pruning all depen- dent relations of that relation directly from the syntactic graph has the effect of excluding sub- stantially many syntactic interpretations from fur- ther consideration. An algorithm for finding such dependent relations is presented. In thispaper, we did not consider the multi- ple word senses which may cause more seman- tic ambiguities than we have illustrated. Incor- porating and minimizing word sense ambiguities is part of our continuing research. We are also currently investigating how to integrate semantic graphs of previous sentences with the current one, to maintain a continuous context whose ambigu- ity is successively reduced by additional incoming sentences. References [1] Alfred V. Aho, and Jeffrey D. Ullman, The Theory of Parsing, Translation and Compil- ing, Vol. 1, Prentice-Hall, Englewood Cliffs, NJ, 1972. [2] G. Edward Barton and Robert C. Berwick, "Parsing with Assertion Sets and Informa- tion Monotonicity," Proceedings of IJCAI-85: 769-771, 1985. [3] Jay Early, "An Efficient Context-free Pars- ing algorithm," Communications of the A CM, Vol. 13, No. 2: 94-102, 1970. [4] Graeme Hirst, Semantic Interpretation and the Resolution of Ambiguity, Cambridge Uni- versity Press, Cambridge, 1987. [5] Mitchell P. Marcus, A Theory of Syntac- tic Recognition for Natural Language, MIT Press, Cambridge, 1980. [6] Elain Rich, Jim Barnett, Kent Wittenburg and David Wroblewski, "Ambiguity Procras- tination," Proceedings of AAAL87: 571-576, 1987. [7] Jungyun Seo and Robert F. Simmons, "Syn- tactic Graphs: A Representation for the Union of All Ambiguous Parse Trees," Com- putational Linguistics, Vol. 15, No. 1: 19-32, 1989. [8] Masaru Tomita, Efficient Parsing for Natu- ral Language, Kluwer Academic Publishers, Boston, 1985.
A Compositional Semantics for Focusing Subjuncts Daniel Lyons* MCC 3500 West Balcones Center Drive Austin, TX 78759, USA lyons~mcc.com Graeme Hirst Department of Computer Science University of Toronto Toronto, Canada MSS 1A4 gh~ai.toronto.edu Abstract A compositional semantics for focusing subjuncts-- words such as only, even, and also--is developed from Rooth's theory of association with focus. By adapting the theory so that it can be expressed in terms of a frame-based semantic formalism, a seman- tics that is more computationally practical is arrived at. This semantics captures pragmatic subtleties by incorporating a two-part representation, and recog- nizes the contribution of intonation to meaning. 1 Introduction Focusing subjuncts such as only, even, and also are a subclass of the sentence-element class of ad- verbials (Quirk et al., 1985). They draw attention to a part of a sentence the focus of the focusing subjunct--which often represents 'new' information. Focusing subjuncts are usually realized by adverbs, but occasionally by prepositional phrases. Focusing subjuncts emphasize, approximate, or restrict their foci. They modify the force or truth value of a sen- tence, especially with respect to its applicability to the focused item (Quirk et al., 1985, §8.116). 1.1 The problem with focusing subjuncts There are several reasons why developing any se- mantics for focusing subjuncts is a difficult task. First, focusing subjuncts are 'syntactically promiscuous'. They can adjoin to any maximal pro- jection. They can occur at almost any position in a sentence. Second, focusing subjuncts are also 'semantically promiscuous'. They may focus (draw attention to) almost any constituent. They can precede or fol- low the item that they focus, and need not be adja- cent to this item. The focus need only be contained somewhere within the syntactic sister of the focus- ing subjunct. Because of this behavior, it is difficult to determine the intended syntactic argument (ad- junct) and focus of a focusing subjunct. Sentences *The work described in this paper was done at the University of Toronto. such as those in (1) can be ambiguous, even when uttered aloud with intonational effects. 1 (1) 1. John could also (SEE) his wife from the doorway (as well as being able to talk to her). 2. John could also see (his WIFE) from the doorway (as well as her brother). 3. John could also see his wife (from the DOORway) (as well as from further inside the room). 4. John could also (see his wife from the DOORway) (as well as being able to do other things). Third, the location of intonational stress has an important effect on the meaning of a sentence con- taining a focusing subjunct. Sentences may be partly disambiguated by intonational stress: inter- pretations in which stress falls outside the intended focus of the focusing subjunct are impossible. For example, the sentence (2) *John could also see (his wife) from the DOORway. is impossible on the indicated reading, since stress on door cannot confer focus on his wife. On the other hand, stress does not help to disambiguate between readings such as (1.3) and (1.4). Fourth, focusing subjuncts don't fit into the slot- filler semantics that seem adequate for handling many other sentence elements (see Section 1.3)~ At best, their semantic effect is to transform the se- mantic representation of the constituent they modify in some predictable compositional way (Hirst, 1987, p. 72). Finally, focusing subjuncts carry pragmatic "bag- gage". 
The meaning of a focusing subjunct includes distinct asserted and non-asserted parts (Horn, 1969), (Karttunen and Peters, 1979). For example, 1 In the example sentences in this paper, small capitals de- note intonational stress. Angle brackets 0 enclose the focus of a focusing subjunct and square brackets [ ] set off the con- stituent to which the focusing subjunct adjoins. Unacceptable sentences are preceded by an asterisk. 54 (3) asserts (4.1) but only presupposes (4.2) (Horn, 1969): (3) Only Muriel voted for Hubert. (4) 1. No one other than Muriel voted for Hu- bert. 2. Muriel voted for Hubert. Analogously, (5) asserts (6.1) and presupposes (6.2) (Karttunen and Peters, 1979): (5) Even Bill likes Mary. (6) 1. Bill likes Mary. 2. Other people besides Bill like Mary; and of the people under consideration, Bill is the least likely to like Mary. The precise status of such pragmatic inferences is controversial. We take no stand here on this issue, or on the definition of "presupposition". We will simply say that, for example, (4.1) is due to the asserted meaning of only, and that (4.2) is produced by the non-asserted meaning of only. 1.2 Requirements of a semantics for focusing subjuncts We desire a semantics for focusing subjuncts that is compositional (see Section 1.3), computation- ally practical, and amenable to a conventional, structured, near-first-order knowledge representa- tion such as frames. It must cope with the se- mantic and syntactic problems of focusing subjuncts by being cross-categorial, being sensitive to in- tonation, and by distinguishing asserted and non- asserted meaning. By cross-categorial semantics we mean one that can cope with syntactic variability in the arguments of focusing subjuncts. We will demonstrate the following: • Intonation has an effect on meaning. A focus feature is useful to mediate between intona- tional information and meaning. • It is desirable to capture meaning in a multi- part semantic representation. • An extended frame-based semantic representa- tion can be used in place of higher-order logics to capture the meaning of focusing subjuncts. 1.3 Syntactic and semantic frameworks In this paper, we will use a compositionM, frame- based approach to semantics. Focusing subjuncts have been thought difficult to fit into a composi- tional semantics because they change the meaning of their matrix sentences in ways that are not straight- forward. A compositional semantics is characterized by the following properties: • Each word and well-formed syntactic phrase is represented by a distinct semantic object. • The semantic representation of a syntactic phrase is a systematic function of the represen- tation of its constituent words and/or phrases. In a compositional semantics, the syntax drives the semantics. To each syntactic phrase construction rule there corresponds a semantic rule that speci- ties how the semantic objects of the constituents are (systematically) combined or composed to obtain a semantic object for the phrase. Proponents of com- positionM semantics argue that natural language it- self is for the most part compositional. In addition, using a composition semantics in semantic interpre- tation has numerous computational advantages. The particular incarnation of a compositional se- mantics that serves as the semantic framework for this work is the frame-based semantic representa- tion of Hirst's Absity system (Hirst, 1987, 1988). Absity's underlying representation of the world is a knowledge base consisting of frames. 
A frame is a collection of stereotypical knowledge about some topic or concept (Hirst, 1987, p. 12). A frame is usuMly stored as a named structure having associ- ated with it a set of slots or roles that may be as- signed values or fillers. Absity's semantic objects belong to the types in a frame representation lan- guage called Frail (Charniak, 1981). Absity uses the following types of semantic object: • a frame name • a slot name • a frame determiner • a slot-filler pair • a frame description (i.e. a frame with zero or more slot-filler pairs) • eiLher an instance or frame statement (atom or frame determiner with frame description) A frame determiner is a function that retrieves frames or adds them to the knowledge base. A frame description describes a frame in the knowledge base. The filler of a slot is either an atom, or it is an in- stance, specified by a frame statement, of a frame in the knowledge base. In order to capture the mean- ing of sentences containing focusing subjuncts, we will augment Absity's frame-representation language with two new semantic objects, to be described in Section 3.3. The notation Hirst uses for frames is illustrated in Figure 1, which is a frame statement translation of the sentence (7) Ross washed the dog with a new shampoo. The semantics we will outline does not depend on any particular syntactic framework or theory. How- ever, we choose to use Generalized Phrase Structure Grammar (GPSG) (Gazdar et al., 1985), because this formalism uses a compositional semantics that 55 (a ?u (wash ?u (agent=(the ?x (person ?X (propername--Ross)))) (patlent=(the ?y (dog ?y))) (instrument=(a ?z (shampoo ?z (age=new)))) )) Figure 1: An Absity frame statement resembles Montague grammar (Montague, 1973). A central notion of GPSG that we will make use of is that of the features of a syntactic phrase. A feature is a piece of linguistic information, such as tense, num- ber, and bar level; it may be atom-valued or category- valued. 1.4 Previous research The groundwork for the analysis of focusing sub- juncts was laid by Horn (1969). ttom describes only (when modifying an NP) as a predicate tak- ing two arguments, "the term ix] within its scope" and "some proposition [Pz] containing that term" (Horn, 1969, p. 99). The meaning of the predicate is then to presuppose that the proposition P is true of z, and to assert that x is the unique term of which P is true: -,(~y)(y # z & Py). Even takes the same ar- guments. It is said to presuppose (qy)(y # x & Py) and to assert Px. Horn requires a different formula- tion of the meaning of only when it modifies a VP. Since his formulation is flawed, we do not show it here. Jackendoff's (1972, p. 242) analysis of even and only employs a semantic marker F that is assumed to be present in surface structure and associated with a node containing stress. He calls the semantic ma- terial associated with constituents marked by F the focus of a sentence. Fie proposes a rule that states that even and "related words" are associated with focus by having the focus in their range. Differ- ences between the ranges of various focusing adverbs account for their different distributions (Jackendoff, 1972, pp. 249-250). For example: Range of even: If even is directly dominated by a node X, then X and all nodes dominated by X are in its range. Range of only: If only is directly dominated by a node X, then X and all nodes that are both dominated by X and to the right of only are in its range. 
That is, only cannot precede its focus (nor can just, which has the same range), but even can: (8) 1. *(JOHN) only gave Mary a birthday present (no one else did). 2. (JOHN) even gave Mary a birthday present (and so did everyone else, but John was the person least expected to). We will employ several aspects of Rooth's (1985) domain selection theory. A key feature of the theory is that only takes the VP adjacent to it in S-structure as its argument (an extension of the the- ory allows only to take arguments other than VPs). Rooth describes technical reasons for this arrange- ment (1985, p. 45). Among these is the fact that focusing subjuncts can draw attention to two (or more) items that, syntactically, do not together con- stitute a well-formed phrase: (9) John only introduced (BILL) to (SUE). The prevailing linguistic theories allow a node (such as a focusing subjunct) only one argument in the syntactic or logical (function-argument) structures of a sentence. According to Rooth, the asserted meaning of (10) John only [vP introduced BILL to Sue]. is "if John has a property of the form 'introduce y to Sue' then it is the property 'introduce Bill to Sue'" (Rooth, 1985, p. 44, p. 59). Rooth's theory would produce the same translation, shown in (11.2), for both sentence (10) and sentence (11.1). (11) 1. John only introduced Bill to SUE. 2. VP[[P(john) & P 6 C] --* P = ^introduee'(bill, sue)] P ranges over propositions, so (11.2) is a quantifica- tion over propositions. C is bound 2 to the p-set of the VP of whichever sentence's meaning (11.2) is in- tended to capture. This p-set is "a set of properties, which we think of as the set of relevant properties" (Rooth, 1985, p. 43). Different truth conditions for the two sentences (10) and (11.1) obtain because their VPs have dif- ferent p-sets: the computation of p-sets is sensitive to intonational stress (actually to focus, which is sig- nalled by stress; see below). The desired value for C in the translation of (10) is the set of propositions of the form "introduce y to Sue", namely propositions satisfying (12.1). For the translation of (11.1), C is the set of propositions of the form "introduce Bill to y", that is, those satisfying (12.2). (12) 1. AP3y[P = ^introdued(y, sue)] 2. AP3y[P = ^introduee'(bill, y)] These result in the final translations (13.1) and (13.2) respectively for sentences (10) and (11.1): (13) 1. Vy[introducd(john, y, sue) --+ y=bilO 2. Vy[introduce' (john, bill, y) --+ y=sue] 2 The mechanism of this binding relies on the translation being a formula of which (11.2) is a reasonable simplification; see (Rooth, 1985, p. 59). 56 The formula (13.1) corresponds to the gloss of the meaning of (10) given above. (13.2) is to be inter- preted as meaning: "if John has a property of the form 'introduce Bill to y' then it is the property 'in- troduce Bill to Sue'". The p-set of a complete sentence is a set of "rel- evant propositions". Rooth defines it recursively, from the p-sets of its constituents (Rooth, 1985, p. 14) (the "model" is a Montague-style formal model): (14) Let a be a constituent with translation a ~. The p-set of a is: 1. if a bears the focus feature, the set of ob- jects in the model matching a ~ in type; 2. if a is a non-focused non-complex phrase, the unit set {a'}; 3. if a is a non-focused complex phrase, the set of objects that can be obtained by picking one element from each of the p-sets corresponding to the component phrases of a, and applying the semantic rule for a to this sequence of elements. 
In other words, the p-set of a sentence consists essen- tially of all propositions that are "like" the propo- sition that it asserts, except that the focused con- stituent in the proposition is replaced by a variable. 3 We will adopt Rooth's definition of the meaning of only: A sentence containing only that (without only) has logical form a: (15) 1. asserts that any "contextually relevant" proposition P whose extension is true is the proposition a; 2. has a as part of its non.asserted meaning. (Rooth, 1985, p. 120). Our analogous definition of even is this: A sentence containing even that (without even) has logical form a: (16) 1. asserts a; 2. conveys the non-asserted inference that there are other "contextually relevant" propositions, besides a, that are true. 2 Devices used to solve the problems Our semantics (which is described in more detail by Lyons (1989)) employs devices described in the fol- lowing sections. 2.1 The focus feature Following Jackendoff, we propose that focus is a bi- nary feature, similar to (say) gender and number, aThe notion that the meaning of only and even can be defined in terms of a base form (such as "John introduced y to Sue") was also noted by Kaxttunen and Peters (1979) and McCord (1982). that is either present or absent on every constituent at surface structure. 4 Focus is initially instantiated onto the leaves of the tree that represent intona- tionally stressed words. The only realization of the focus feature that we accommodate is intonational accent; however, our theory can easily be extended to allow for other overt realizations of focus, includ- ing other intonational effects (e.g. (Hirschberg and Pierrehumbert, 1986)). Focus is optionally and non- deterministically percolated up the syntax tree, to any node from its rightmost daughter (rightmost be- cause stress manifests itself only at the end of the focused constituent (Anderson, 1972)). The non- determinism of the percolation of focus is responsible for ambiguity in the interpretation of sentences with focusing subjuncts. How far the focus feature per- colates up determines how wide a focus is attributed to the focusing subjunct: (17) 1. John also read the book (from the LIBRARY) (as well as the one from the store). 2. John also read (the book from the LIBRARY) (as well as the newspaper). 3. John also Iread the book from the LIBRARY) (as well as completing his as- signment). The ambiguous interpretations of a sentence with a focusing subjunct belong to an ordered set in which each reading has a wider focus for the focusing sub- junct than the previous one. 2.2 Relevant propositions Our semantics employs a computational analogue of Rooth's p-sets for a frame representation. Our p- set for a constituent is computed compositionally, along with the semantic representation, in tandem with the application of the syntactic rule used to build the constituent. The p-set turns out to be an object in the frame representation that is like the semantic assertion derived for the constituent, but lacking restrictive information associated with any focused components. 2.3 Two-part semantics In addition to p-sets, two semantic expressions are computed for each constituent during the interpre- tation of a sentence. One expression represents as- serted meaning, and the other, non-asserted mean- ing. 4 This feature is what Jackendoffcalls the F marker, but is dif- ferent from what he calls "focus". 
Note that we use the term focus of a focusing subjunct to stand for a distinct con- cept: the item to which a focusing subjunct draws attention to, or focuses. This is the semantic material that corresponds to a stressed word or to a constituent containing one. 57 2.4 Linguistic features Focus is marked as a binary feature on all syntactic constituents. The semantic rules use this informa- tion when constructing semantic expressions for con- stituents. Because the focus feature need not perco- late all the way up to the level of the constituent that is adjacent to the focusing subjunct in the syn- tax tree, we have found it useful to employ a second feature, focus.in, that indicates whether or not any sub-phrase is focused. The restriction that a focus- ing subjunct adjoins only to a phrase containing fo- cus is implemented by requiring the adjunct phrase to be (focus-in +). Range (see Section 1.4) is implemented as two bi- nary features, range-right and range-left, that indi- cate whether or not a given focusing subjunct can adjoin to phrases to its right and left, respectively. (Some words, like even, have both features.) 2.5 Sentential operators Rooth applies his even and only operators to the logi- cal form of the constituent that is the syntactic sister of the focusing subjunct. So, for example, in the VP (18.1), only transforms the expression wash'(dog), which is the translation of the VP argument of only, into the A-expression (18.2). (18) 1. only [vp washed the (DOG)] 2. AxVP[[VP & P e C'] P = ^wash'(x, dog)] For each focusing subjunct, Rooth must define a sep- arate transformation for each different semantic type of phrase that it may take as an argument. He de- fines a basic sentential operator for each focusing subjunct, and then derives the other operators from these (Rooth, 1985, pp. 120-121). Our approach is to instead define a single operator for each focusing subjunct, essentially Rooth's basic sentential operator. This operator takes the seman- tic representation of a sentence as an argument and produces another semantic representation of senten- tial type. When sentential objects are not available, as in the interpretation of [vp only VP], we delay the application of the operator until such a point as fully developed propositions, the semantic objects of sen- tenees, are available. To do this, the grammar rules "percolate" focusing subjunct operators up the syn- tax tree to the S node. Our grammar employs the feature fs to carry this latent operator. When the interpretation of a sentence is otherwise completed, a final step is to apply any latent operators, produc- ing expressions for the sentence's asserted and non- asserted meanings from expressions for its assertion and its p-set. Several pieces of evidence motivate this approach: • As Rooth observed, in order to define a family of cross-categorial operators for (say) only, a basic operator must be defined that operates on an expression of sentential type. The semantics of focusing subjuncts actually seems to take place at the sentence level. Focusing subjuncts normally occur at most once per sentence. Even granting the acceptability of sentences containing several focusing subjuncts, such sentences are clearly semantically compli- cated. The principal advantage of our approach is that it constructs essentially the same final translation of a sentence as Rooth's, but avoids using the A- operator during the derivation of a semantic repre- sentation that does not itself contain a A-operator. 
This is desirable, as A-expressions would make the frame representation language less tractable. 3 Details of the semantics 3.1 Semantic features Three semantic objects are computed for and at- tached to each syntactic constituent, in parallel with the syntactic processing. The objects are of the types defined in an Absity-like frame representation. They are attached to a node as values of the fol- lowing features (an approach motivated by Shieber (1986)): Assert: The asserted meaning of the constituent, its contribution to the sentence's asserted mean- ing. The value is computed the same way that a Montague-style grammar would con- struct a constituent's logical form from those of its daughters. Figure 2 shows examples of the rules to compute this value. Presupp: The constituent's contribution to the sen- tence's non-asserted meaning. For all rules but sentence rules, the presupp feature on the parent node is undefined. In order not to commit our- selves to the status of the non-asserted mean- ings of focusing subjuncts, we reserve this fea- ture for the non-asserted meanings introduced by focusing subjunct operators (see below). P-set: A prototype of the semantic objects in the node's p-set. All objects that match this object are in the node's p-set. The algorithm for com- puting p-sets distinguishes between two cases: Case 1: If the parent node X (being con- structed) is (focus +), its p-set is a variable of the same type as the assert object. Case 2: Otherwise, the p-set of X is con- structed from the p-set values of the con- stituent phrases in a manner exactly paral- leling the construction of the assert feature. 58 Syntax rule Semantic rule S --* XP[(assert (agent = a))], S = S[(assert (frame ~ (agent = 4) sf-pairs))] VP[(assert (frame fl sf-pairs))] VP ---* V[2 (assert (frame ?t~))], VP = V[(assert (frame ?a (slotfl = ~)))1 NP[obj (assert (slot~ = ¢))] PP --* P[38 (assert slota)], PP = PP[(assert (slots = fl))l NP[(assert fi)] Figure 2: Examples of semantic rules for the assert feature 3.2 Application of the focusing subjunct operators There is a syntactic rule whose sole purpose is to support of the application of a sentential operator: 09) s H[(fs 4)1 S[fs 4] is specified as a non-initial category in the grammar, if a ¢ "-". Therefore, the rule (19) must apply in the derivation of any well-formed sentence containing a focusing subjunct. The corresponding semantic rule (20) applies a focusing subjunct oper- ator to the semantic representation of the sentence. (20) 1. Input: S[(assert a), (p-set ~/), (fs 7)] 2. Output: • If 7 = "-" then S[(assert a), (p-set fl)] • else S[(assert oplv(t~ , fi)), (presupp op2,(tr, fl)), (p-set fl)] where oplv and op2v are the sentential operators for the focusing subjunct 7 (see below). 3.3 The sentential operators The sentential operators for only and even are given below. (The one for too is the same as that for even, and those for the other focusing subjuncts are simi- lar.) (21) 1. oplontu(A, P) = if P then A 2. op2only (A, P) = A 3. opl~,e,(A, P) = A 4. op2~ven( (the ?x frame-descrA), (the ?y frame-descrP) ) = (anew ?y ¢?z (frame-descrP)) The form if P then A is a directive to the underly- ing knowledge base to insert the rule that any frame matching P is just the frame A, that is, A is the unique frame matching P. This directive is a frame implication. It is similar in character to a frame determiner (Hirst, 1987), in that it is a function that manipulates the underlying knowledge base. 
The form (anew ?y ~?X frame-descrP) is also a new type of entity in the semantics. We treat it as a frame determiner. It is a directive to the knowledge base to retrieve or create a frame instance, ?y, that matches frame-descrP but is not the frame instance identified by the variable ?x. As with the frame determiner (the ?x), such a frame instance ?y should be inserted if not already present in the knowledge base. For example, the sentence (22.1) yields the ex- pression (22.2) as its assertion and (22.3) as its non-asserted meaning (other readings are possible as well). (22) 1. Ross only washed the DOG. 2. if (wash ?x (agent=Ross)) then (wash ?x (agent=Ross) (patient=dog))) 3. (the ?x (wash ?x (agent=Ross) (patient=dog))) The frame instance (22.3) captures the semantic con- tent of the sentence "Ross washed the dog". The frame implication (22.3) is to be interpreted as the rule that any wash frame in the knowledge base hav- ing Ross as its agent must in addition have dog as its patient. A second example: sentence (23.1) yields assertion (23.2) and non-asserted meaning (23.3). (23) 1. Ross washed even the DOG. 2. (the ?x (wash ?x (agent=Ross) (patient=dog))) 3. (anew ?y ~?x (wash ?y (agent=Ross))) The expression (23.3) affirms the existence of a wash instance ?y having agent Ross but that is a distinct washing from ?z in (23.2), which has dog as its pa- tient. 4 The implementation IDEO (Interpreter Designed for Even and Only) is a limited semantic interpreter that incorporates the 59 semantics for even and only described in Section 3. The implementation is in Edinburgh C-Prolog, run- ning under UNIX on a Sun-4 computer. Because the authors did not have access to a working version of Frail (see Section 1.3), IDEO runs on top of a toy knowledge base, also implemented in C-Prolog, whose retrieval language is (unfortunately) a syntac- tic variant of Absity's. A sample session with IDEO is follows below. In this trace, output generated by the program or typed by the user is shown in typewriter font. As the ex- ample shows, IDEO accepts a sentence typed at the keyboard and outputs four different things. Firstly, it echoes the sentence in its internal Prolog format. Secondly, the GPSG category obtained for the sen- tence, which incorporates a parse tree for the sen- tence, is displayed. For the sake of readability, IDEO prints a simplified version of the category. Then, it prints separately the semantic representation of the sentence, which consists of several of the feature- value pairs of the sentence category. Lastly, it dis- plays a message about the success or failure of each knowledge base function call resulting from process- ing the semantic representation of the sentence. In a question-and-answer system, these messages would be replaced by generated English answers. Please enter a sentence, follo.ed by a carriage return, after the "I:" prompt. J: Ross only washed the :dog. The colon preceding the word dog tells IDEO that the word isintonationally stressed. >>> Sa~ the sentence: [ross. only. ,ashed. the. stress(dog). period] >>> The category for this sentence is: [Omitted due to space ~mitations.] The significant piece of information in the GPSG category is that the noun phrase [NP the stress(dog)] is (focus +), but the verb phrase that contains it is not. >>> The semantic representation is: assert( if frame(X, .ash) k slot(X, agent, ross) then frame(X, .ash) k slot(X, agent, ross) slot(X, patient, Y) k framedet(the. 
Y, frame(Y, dog))) presupp(framedet(a, X, frame(X..ash) k slot(X, agent, ross) k slot(X, patient. Y) k framedet(the, Y, frame(Y, dog)))) p-set(framedet(a. X, frame(X, .ash) k slot(X, agent, ross))) fs(only) >>> OK? no This semantic representation corresponds to the reading in which the speaker means to assert that Ross washed nothing else but the dog. The user replies no to the prompt OK? to indicate that this is not the reading he intends. >>> The category for this sentence is: [Omitted due to space limitations.] This time, a parse is returned in which the feature (focus +) percolates up the parse tree to the entire verb phrase [NP .ashed the stress(dog)], >>> The semantic representation is: assert( if slot(X, agent, ross) then frame(X, .ash) k slot(X, agent, ross) k slot(X, patient, Y) k framedet(the, Y, frame(Y, dog))) presupp(framedet(a, X. frame(X, .ash) k slot(X, agent, ross) slot(X, patient, Y) k framedet(the, Y, frame(Y, dog)))) p-set(framedet(a, X. slot(X, agent, ross))) fs(only) >>> OK? yes The user approves this semantic representation, which corresponds to the reading in which the speaker asserts that Ross did nothing but wash the dog. >>> Retrieved frame "dogl" frame (dog1. dog) >>> Found frame "washl" frame (wash1, .ash) slot(.ashl, agent, ross) slot(.ashl, patient, dogl) >>> Inserted rule "rulel" if slot(X, agent, ross) then X = .ashl The knowledge base now is constrained by the rule rulel. This says that ira frame X satisfies the frame description slot(X, agent, ross)(i.e, its agent is ross), then it must be the frame ,ashl (a ,ash frame whose patient slot is filled by dogl). 60 5 Conclusions This paper summarizes work, details of which can be found in (Lyons, 1989). The work represents a step towards better understanding of focusing subjuncts and of compositional semantics in general. The se- mantics we have proposed allows focusing subjuncts to be covered by a frame-based approach to semantic interpretation, by virtue of its being compositional, computationally practical, able to differentiate be- tween asserted and non-asserted meaning, sensitive to intonation, and eross-categorial. We have found that: • Focus and stress information can be used to ad- vantage in a semantic interpreter. • The hypothesis that focus may be optionally percolated to a parent node from a daughter explains the scope ambiguities observed in the interpretation of focusing subjuncts. • Rooth's method of obtaining the translation of a focusing subjunct by using p-sets to select "domains of quantification" can be adapted to translating a sentence into a frame represents- tion. • Treating focusing subjuncts as operators on sen- tential semantic forms makes this translation possible. • Semantically, focusing subjuncts are not just passive objects for composition. We have shown extensions to standard frame representations that are required for the translation of focus- ing subjuncts. Acknowledgements Both authors acknowledge the support of the Natural Sciences and Engineering Research Council of Canada. We are also grateful to Diane Horton, Brendan Gillon, Barb Brunson, and Mark Hyan for discussions, com- ments on earlier drafts, and general encouragement. References Anderson, Stephen R. (1972). How to get even. Lan. guage, 48:893-906. Charniak, Eugene (1981). A common representation for problem-solving and language-comprehension infor- mation. Artificial Intelligence, 16(3):225-255. Also published as technical report CS-59, Department of Computer Science, Brown University, July 1980. 
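The operators in (21) and the application step in rule (20) are simple enough to sketch outside Prolog. The following Python fragment is a minimal illustration under assumed encodings; nested tuples stand in for IDEO's frame terms, and the function names are ours, not IDEO's.

def op_only(assertion, p_set):
    # op1: assert the frame implication "if P then A"; op2: A is the non-asserted part.
    return ("if", p_set, assertion), assertion

def op_even(assertion, p_set):
    # op1: assert A outright; op2: some other, distinct frame also matches P.
    return assertion, ("anew-distinct-from", assertion, p_set)

OPERATORS = {"only": op_only, "even": op_even}

def apply_focusing_subjunct(assert_form, p_set, fs=None):
    """Rule (20): pass the sentence through unchanged when no focusing
    subjunct is present, otherwise apply the latent sentential operator."""
    if fs is None:
        return {"assert": assert_form, "p-set": p_set}
    asserted, presupposed = OPERATORS[fs](assert_form, p_set)
    return {"assert": asserted, "presupp": presupposed, "p-set": p_set}

# "Ross only washed the DOG", narrow focus on the object:
A = ("wash", ("agent", "ross"), ("patient", "dog"))
P = ("wash", ("agent", "ross"))        # p-set prototype: the focused slot is left open
print(apply_focusing_subjunct(A, P, "only"))

For only, the returned assert corresponds to the frame implication that the trace above installs as rule1, and the presupp to the retrieved wash frame; for even, A itself is asserted and the non-asserted part is a directive to find a distinct second instance matching the p-set prototype.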
Gazdar, Gerald, Klein, Ewan, Pullum, Geoffrey K., and Sag, Ivan (1985). Generalized Phrase Structure Grammar. Harvard University Press. Hirschberg, Julia and Pierrehumbert, Janet (1986). The intonational structuring of discourse. In 24 th An- nual Meeting of the Association for Computational Linguistics, Proceedings of the Conference. pages 136-143. Hirst, Graeme (1987). Semantic Interpretation and the Resolution of Ambiguity. Cambridge University Pre88. Hirst, Graeme (1988). Semantic interpretation and am- biguity. Artificial Intelligence, 34(2):131-177. Horn, Laurence R. (1969). A presuppositional analy- sis of only and even. In Binnick, Robert I., Davi- son, Alice, Green, Georgia, and Morgan, Jerry, edi- tors, Papers from the Fifth Regional Meeting of the Chicago Linguistics Society. Chicago Linguistic So- ciety, pages 98-107. Jackendoff, Ray S. (1972). Semantic Interpretation in Generative Grammar. The MIT Press. Karttunen, Lanri and Peters, Stanley (1979). Conven- tional implicature. In Oh, Choon-Kyu and Din- neen, David A., editors, Presupposition, volume 11 of Syntaz and Semantics. Academic Press, pages 1- 56. Lyons, Dan (1989). A computational semantics for fo- cusing subjuncts. Master's thesis, Department of Computer Science, University of Toronto. Also published as technical report CSRI-234. McCord, Michael C. (1982). Using slots and modifiers in logic grammars for natural language. Artificial Intelligence, 18:327-367. Montague, Richard (1973). The proper treatment of quantification in ordinary English. In Hin- tiklm, Kaarlo Jaakko Juhani, Moravcsik, Julius Matthew Emil, and Suppes, Patrick Colonel, edi- tors, Approaches to Natural Language: Proceedings of the 1970 Stanford workshop on grammar and se- mantics. D. Reidel, pages 221-242. Also in Thoma~ son, Richmond Hunt (ed.), Formal philosophy: Se- lected papers of Richard Montague. Yale University Press (1974): 247-270. Quirk, Randolph, Greenbaum, Sidney, Leech, Geoffrey, and Svartvik, Jan (1965). A Comprehensive Gram- mar of the English Language. Longman. Rooth, Mats Edward (1985). Association with Focus. PhD thesis, Department of Linguistics, University of Massachusets. Shieber, Stuart M. (1986). An Introduction to Unification-Based Approaches to Grammar. Cen- ter for the Study of Language and Information. 61
DESIGNER DEFINITES IN LOGICAL FORM Mary P. Harper* School of Electrical Engineering Purdue University West Lafayette, IN 47907 Abstract In this paper, we represent singular definite noun phrases as functions in logical form. This represen- tation is designed to model the behaviors of both anaphoric and non-anaphoric, distributive definites. It is also designed to obey the computational con- straints suggested in Harper [Har88]. Our initial representation of a definite places an upper bound on its behavior given its structure and location in a sentence. Later, when ambiguity is resolved, the precise behavior of the definite is pinpointed. 1 Introduction A goal of natural language research is to provide a computer model capable of understanding En- glish sentences. One approach to constructing this model requires the generation of an unambiguous internal representation for each sentence before at- tempting to represent subsequent sentences. Natu- ral language systems that attempt to guess the in- tended meaning of a sentence without considering subsequent sentences usually make no provision for recovery from incorrect guesses since that would re- quire storing information about the ambiguity of the sentence. Hence, this approach may require the pro- cessing of several sentences before enough informa- tion is available to determine the intended meaning of the sentence being represented. However, in or- der to make the inferences necessary to resolve some ambiguities, some internal representation is needed for both the current sentence as well as subsequent sentences. A more powerful approach is to leave the ambiguity unresolved in an intermediate repre- sentation until the necessary information has been processed. We adopt this second approach, which advocates mapping parsed sentences into an inter- mediate level of representation called logical form *This paper contains results from the author's the- sis in the Computer Science Department at Brown Uni- versity. The paper has benefited from discussions with Eugene Charniak, Kate Sanders, Leora Morgenstern, Tom Dean, Paul Harper and Frederic Evans. The work was supported in part by the NSF grants IST 8416034 and IST 8515005, ONR grant N00014-79-C-0529, and AFOSR grant F49620-88-c-0132. 62 [SP84; All87; Har88]. Logical form partially spec- ifies the meaning of a sentence based on syntactic and sentence-level information, without considering the effect ofpragmatics and context. Later, as more information becomes available, the representation of the sentence is incrementally updated until all am- biguities have been resolved. In the literature, two sources of ambiguity have been handled using logical form, quantifier scop- ing (see [SP84; Al187]) and pronoun resolution (see [Har88; Har90]). In this paper, we will discuss the use of logical form for handling the ambiguities in the meanings of singular definite noun phrases. But first, it will be useful to briefly review the logical form for pronouns. 2 Pronouns in Logical Form Pronouns are a source of underspecification in a sen- tence which can be handled in logical form. The antecedent of a pronoun cannot be immediately de- termined when the sentence containing it is parsed. Contextual and syntactic constraints combine to al- low a listener/reader to decide on the antecedent for a certain pronoun. In Harper [Har88; Har90], we devised a logical form representation for pronouns. This representation divides the process of deter- mining the meaning of a pronoun into two phases. 
First, the representation for the pronoun is deter- mined using only syntactic and sentence-level infor- mation. Then, once the antecedent is determined, a feat which often requires pragmatic and contex- tual information available in subsequent sentences, we provide a way to update our logical form to in- dicate this information. Our logical form representation for pronouns was designed with two goals in mind. First, we required our representation to be compatible with the goal of devising a computational model of language com- prehension. In fact, we defined three constraints for using logical form in a computational framework (from [Har88] and [Harg0]). 1. Compactness Constraint: Logical form should compactly represent ambiguity. 2. Modularity Constraint: Logical form should be initially computable from syntax and local (sentence-level) semantics. In par- ticular, logical form should not be dependent on pragmatics, which requires inference and hence, internal representation. 3. Formal Consistency Constraint: Further processing of logical form should only disam- biguate or further specify logical form. Logical form has a meaning. Any further processing must respect that meaning. First, the compactness constraint captures the spirit of logical form as presented by Allen [Al187]. Sec- ond, if the modularity constraint is violated, the value of computing logical form is lost. Finally, the formal consistency constraint keeps us honest. Ini- tially, logical form provides a composite representa- tion for a sentence. However, as more information becomes available, then the meaning of the sentence will be incrementally updated until all ambiguity is resolved. We cannot modify logical form in any way that contradicts its original meaning. The second goal of our approach was to accu- rately model the linguistic behavior of pronouns while obeying our logical form constraints. Since pronouns have a range of behaviors between vari- ables on the one hand and constants on the other, the initial logical form for a pronoun must be com- patible with both extremes (to model the range of pronoun behaviors and to be consistent with the compactness and formal consistency constraints). Hence, we provided a composite representation for a pronoun, one compatible with any possible an- tecedent it can have given its position in a sentence. Pronouns in a sentence are represented as part of the process of providing logical form for that sen- tence. We enumerate the important features of a sentence's representation. 1. A sentence is represented as a predicate- argument structure, with subjects lambda abstracted to handle verb phrase ellipsis. Lambda operators are necessary for handling examples of verb phrase ellipsis. The second sentence in Example 1 is a sentence with verb phrase ellipsis (also called an elided sentence). Example 1 Trigger Sentence: Fredi loves hisi wife. Elided Sentence: Georgej does too. Meanings : a. George loves Fred's wife. b. George loves George's wife. Assuming that the meaning of the elided verb phrase is inherited from the representation of the trigger sentence's verb phrase, then the the pronoun his in the trigger verb phrase must be able to refer indirectly to the subject Fred in 63 order for the sloppy reading of the elided sen- tence (i.e., George loves George's wife) to be available. All sentences are potentially trig- ger sentences; hence, we lambda abstract the syntactic subjects of all sentences (following Webber [Web78] and Sag [Sag76]). 2. 
The logical roles of all noun phrases in a sentence are identified by position in logical form (logical subject first, logical object second, logical indirect object third, etc.).

3. We represent universal noun phrases as universally quantified (and restricted) variables and indefinite noun phrases as existentially quantified (and restricted) variables (following Webber [Web78]).

4. Quantifier scope ambiguity is handled in the same way as in Allen [All87]. Initially, we place quantifiers in the predicate-argument structure (except for subjects). Later, when information becomes available for making scoping decisions, quantifier scoping is indicated (discussed in Harper [Har90]).

A composite representation for a pronoun is provided once the parse tree for the sentence containing it is available. When the parse tree is available, we can determine all of the quantified noun phrases that are possible antecedents for a pronoun in the sentence (see Reinhart [Rei83]). Hence, we represent a pronoun initially as a function of all of the variables associated with noun phrases that are possible antecedents for, or distribute over possible antecedents for, the pronoun. To handle verb phrase ellipsis, the argument list must also include the lambda variables corresponding to syntactic subjects. A pronoun is represented as a uniquely named function of all lambda variables (associated with subjects) which have scope over it in logical form, any non-subject quantified variables corresponding to noun phrases that c-command the pronoun (following Reinhart [Rei83]), and any quantified noun phrase not embedded in a relative clause but contained in a noun phrase that c-commands the pronoun. Our logical form representation for pronouns summarizes all of the operators that can directly affect their final meanings. Hence, the representation is useful for limiting the possible antecedents of a pronoun. For example, a pronoun function can take a universal noun phrase as its antecedent if and only if the universal variable (or the variable corresponding to the lambda operator that abstracts the universal variable) is included in the function's argument list.

Consider a simple example to demonstrate the initial representation of the following sentence.

Example 2
Every teacher gave every student his paper.
∀x: (teacher x) x, λ(y)(give y (paper-of (his1 y z)) [∀z: (student z) z])

The syntactic subject of the sentence is universally quantified, and the restriction on the quantifier is indicated after the colon.¹ The syntactic subject is abstracted from the predicate-argument structure representing the sentence. Hence, the verb phrase, represented as a lambda function, is separable from the subject. The subject's position is maintained in the lambda function by the lambda variable. Notice that the definite noun phrase his paper is represented here as a function of the pronoun. Shortly, we will provide a more general representation for definite noun phrases. Notice also that the pronoun his is represented as a function of the subject's lambda variable plus the universal variable corresponding to every student. This list of arguments corresponds to the operators for noun phrases that can be antecedents for the pronoun given the syntactic constraints or can distribute over possible definite antecedents.
Notice that the subject's lambda variable subsumes the subject's universal variable. The reader should note that quantifier scoping is not indicated in our initial logical form (following Allen [All87]).

The representation for the pronoun in 2 is a composite representation; that is, it indicates all of the operators that can affect its final meaning. In fact, before the final meaning of the sentence can be given, the antecedent for the pronoun must be determined and made explicit in our logical form. Though the process of determining antecedents for pronouns is beyond the scope of this paper, when a pronoun's antecedent is known (requiring additional pragmatic information), the logical form containing it must be updated in a way compatible with its initial representation (because of the formal consistency constraint). Suppose that we decide that the antecedent for his in example 2 is every student; then the logical form is modified as shown in 3.

¹The colon following the quantifier is syntactic sugar which expands the restriction differently depending on the type of quantifier. If a sentence is represented as ∃x: (R x) (P x), then the meaning is ∃x (and (R x) (P x)). If a sentence is represented as ∀x: (R x) (P x), then it is expanded as ∀x (if (R x) (P x)).

Example 3
Every teacheri gave every studentj hisj paper.
∀x: (teacher x) x, λ(y)(and (give y (paper-of (his1 y z)) [∀z: (student z) z]) (= (his1 y z) z))

This update is compatible with the pronoun's initial representation. We are indicating that the function (his1 y z) is really the identity function on z. In Harper [Har88], we fully specify how logical form is updated when a pronoun's antecedent has been determined.

3 Definites: Behaviors to Cover

In the rest of this paper, we develop our logical form representation for singular definite noun phrases. As for pronouns, we wish to obey our computational constraints while providing a good model of definite behavior. Consider the behaviors of definites we wish to cover.

Like pronouns, definite noun phrases can be anaphoric. Anaphoric definites can either depend on linguistic antecedents (in either the same or previous sentences) or can denote salient individuals in the environment of the speaker/hearer (also called deictic use). Because of our logical form constraints, in particular the compactness and formal consistency constraints, the initial representation for a definite noun phrase must be compatible with the representations of its possible antecedents. Definite noun phrases can have intrasentential antecedents, as in example 4.

Example 4
Every boyi saw (hisi dog)j before the beastj saw himi.
If the antecedent for his is found in another sentence, then his mother could be rep- resented as a constant. In contrast, if every boy is the antecedent for his, then the universal quanti- fier corresponding to every boy distributes over his mother. When a quantifier distributes over a defi- nite, the definite changes what it denotes based on the values assigned to the quantified variable. Embedded quantified noun phrases can also dis- tribute over a definite noun phrase, preventing it from acting like a constant. For example, the uni- versal possessive noun phrase distributes over the definite in the following sentence. The definite in this case cannot be described as a constant. Example 6 George loves every man's wife. However, not all embedded quantified noun phrases can distribute over a definite. When quantified noun phrases are embedded in relative clauses attached to a definite noun phrase, they cannot distribute over that noun phrase. This constraint (related to the complex noun phrase constraint, first noted by [Ros67]) prohibits quantifiers from moving out of a relative clause attached to a noun phrase. For ex- ample: Example 7 George saw the mother who cares for every boy. In this case, the mother who cares for every boy de- notes one specific mother. In such cases, the univer- sal cannot distribute over the definite it is attached to or have scope over other quantified noun phrases outside of the relative clause. Thus, the meaning of a definite noun phrase is affected by its structure, whether it contains pro- nouns, and whether or not it is used anaphorically. If used anaphorically, it should behave in a way con- sistent with its antecedent, just like a pronoun. If it contains pronouns, then its meaning should depend on the antecedents chosen for those pronouns. If it contains embedded quantified noun phrases (not subject to the relative clause island constraint), then those embedded noun phrases may distribute over the definite. In the remainder of this paper, we introduce our logical form representation for definites. We discuss the initial representation of definites, which must be able to encompass all of the above definite behav- iors. We also describe the ways this logical form is updated once ambiguity is resolved. 4 Our Representation of Definite Noun Phrases In this section, we develop a representation for def- inites in logical form. The logical form represen- tation for a definite noun phrase presents a chal- lenge to our approach. To be consistent with the modularity constraint, we must provide an initial representation for a definite noun phrase that can be generated before we know the antecedents for any embedded pronouns or before we know the def- inite's antecedent (if it is anaphoric). To obey the compactness and formal consistency constraints, we must initially represent a definite so it is consistent with all the ways it can possibly act. As more in- formation becomes available about the meaning of the definite noun phrase, we must be able to update logical form in a way compatible with its initial rep- resentation. Our logical form for a definite must be a composite representation compatible with its pos- sible behaviors. We cannot provide different initial representations for a definite depending on use, oth- erwise we violate the compactness constraint. Ad- ditionally, unless our initial representation is com- patible with all possible behaviors, we could violate the formal consistency constraint when we update logical form. 
We represent a definite as a named function of all of the variables associated with operators that can affect its meaning. This representation satisfies our constraints by combining the advantages of definite descriptions (discussed in Harper [Har90]) with the functional notation we introduced to represent pronouns. Each definite function is defined by a unique name (i.e., def with a unique integer appended to it), a list of arguments, and a restriction. The restriction of a definite function is derived from the words following the determiner. The argument list of the function consists of the variables associated with lambda operators that have scope over its position, any variables associated with non-subject quantified noun phrases that could bind a pronoun in that position, and any quantified variables associated with embedded quantified noun phrases that are not embedded in a relative clause attached to a noun phrase.² Because a definite function has a unique name, we can differentiate two occurrences of the same definite noun phrase, in contrast to definite descriptions [Rus71] (for more information on the shortcomings of definite descriptions and definite quantifiers, see [Har90; Hin85]).

²We should also add that a sententially attached PP with a quantified object can quantify over a definite as well (as in In every car, the driver turned the steering wheel; this sentence is tricky because we seem to be attaching the PP to both of the NPs while leaving the quantifier to distribute over both definites).
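Before working through Example 8, it may help to see the shape of this representation as a data structure. The following Python sketch is an illustration under assumed names and encodings; the class, its fields, and the tuple notation for terms are ours, not the paper's.

import itertools

_ids = itertools.count(1)

class DefiniteFunction:
    """A uniquely named function standing in for a singular definite NP."""
    def __init__(self, args, build_restriction):
        self.name = f"def{next(_ids)}"            # unique name: def1, def2, ...
        self.args = tuple(args)                   # variables that can affect the definite
        self.term = (self.name,) + self.args      # e.g. ('def1', 'y', 'z')
        self.restriction = build_restriction(self.term)
        self.equals = None                        # filled in if the definite is anaphoric

    def resolve_to(self, value):
        """Record an anaphoric resolution as an added equality, not a substitution."""
        self.equals = value

# "his picture" in "Every man showed every boy his picture" (cf. Example 8):
pronoun = ("his2", "y", "z")                      # the embedded pronoun's own function
his_picture = DefiniteFunction(
    args=("y", "z"),                              # subject lambda variable and object variable
    build_restriction=lambda t: ("and", ("picture", t), ("possess", pronoun, t)),
)
print(his_picture.name, his_picture.args)
print(his_picture.restriction)

The unique name plays the role of def1, def2, and so on; the argument tuple records the variables that may still affect the definite; and resolve_to records an anaphoric resolution as an added equality rather than a substitution, in keeping with the formal consistency constraint.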
If a definite is used anaphorically, it can be equated with some value depending on its an- tecedent (just like pronoun functions in [Har88]). For example, if the antecedent of a definite noun phrase occurs in another sentence, we would equate the definite function with a discourse entity. An- tecedents for definite noun phrases can also occur 3As in the representation of pronouns, we omit the variable x from the argument list because the lambda operator for y abstracts x, so y is the more general argument. within the same sentence. An intrasentential refer- ence to an antecedent requires the definite function to have an argument list compatible with the rep- resentation of the antecedent 4. Consider the initial representation of a sentence containing a potentially anaphoric definite shown in 9. Example 9 Every man told his mother's psychiatrist about the old lady's diary. Vx: (man x) x, A(y) (tell Y ((defl y) i (and (psychiatrist (defl y)) (possess ( (def2 y) (and (mother (def2 y)) (possess (his3 y) (def2 y)))) (defl y)))) (about ((def4 y) I (and (diary (def4 y)) (possess ((defs y) l (old-lady (def5 y))) (def4 y)))))) Suppose the antecedent for his is every man and the antecedent for the old lady is his mother. Then we can augment the logical form, as shown in 10. 66 4It is unusual for a definite to have an antecedent corresponding to one of its arguments unless the vari- able corresponds to a quantified noun phrase which is not embedded in a relative clause but is embedded in another noun phrase. When the antecedent is repre- sented as a function, its argument list must be a subset of (or it must be possible to limit it to be a subset of) the arguments of the anaphoric definite for the equality to be asserted. Example 10 Every manj told (his) mother's)i psychiatrist about the old lady's~ diary. Vx: (man x) x, A(y)(tell Y ((dell y) I (and (psychiatrist (dell y)) (possess ((def2 y) I (and (mother (def2 y)) (possess (hisa y) (def2 y)) (or (= (hisa y) y) (= (his3 y) x)))) (dell y)))) (about ((def4 y) [ (and (diary (def4 y)) (possess ((def5 y) I (old-lady (def5 y))) (def4 y)) (= (def5 y) (def2 y)))))) This example would be very difficult for an ap- proach that uses either definite descriptions or def- inite quantifiers. Either approach would represent the old lady in a way equivalent to replacing the representation by a constant, because of uniqueness. Hence, any update of those representations to indi- cate the anaphora would violate formal consistency. Our approach, however, can easily handle the ex- ample. The other way to pinpoint a definite function ap- plies once antecedents for embedded pronouns are known and once we know whether quantifiers cor- responding to embedded quantified noun phrases (not embedded in relative clauses attached to noun phrases) should distribute over the definite. Con- sider the initial representation of the sentence in 8. The definite function defl is a function of all of the variables that can potentially cause it to change. However, once we know the antecedent for its em- bedded pronoun, the argument list of the function should be limited. To limit the argument list, we make use of the insights gained from definite de- scriptions. Because of the uniqueness assumption, any definite description that does not contain vari- ables bound by outside quantifiers acts like a con- stant. 
On the other hand, if a pronoun embedded in a definite description adopts the behavior of a universally quantified variable, then the definite de- scription will change what it denotes depending on the instantiation of that variable. Hence, we con- clude that a definite function should only change as a function of those variables bound by operators outside of its restriction (ignoring its own argument list). 67 Once antecedent and embedded quantifier infor- mation is available, we can limit the argument list to precisely those arguments that are bound by opera- tors outside of the restriction. If a pronoun function in the restriction of the definite function is equated with a variable bound outside its restriction or with another function which must be a function of a cer- tain variable (based on its own restriction), then the argument must be retained. Additionally, other arguments that are free in the restriction must be retained (these correspond to embedded quantified noun phrases whose quantifiers are moved out of the restriction). Once we know the necessary ar- guments, we replace the original function by a new function over those arguments. By using this argu- ment reduction constraint, we limit the initial com- posite representation of a definite noun phrase to its final meaning (given pronoun and quantifier infor- mation). Consider how we would limit the function (defl y z) from example 8 following pronoun res- olution. If we decide that the antecedent of his is every boy, then we would update the logical form, as shown in 11. Example 11 Every man showed every boyi hisi picture. Vx: (man x) x, A(y)(show y ((defl y z) [ (and (picture (dell y z)) (possess (his2 y z) (defl y z)) (= (his2 y z) z))) [Vz: (boy z) z]) By using our argument reduction constraint, we can replace the function (defl y z) by a function of z (since (his2 y z) is replaced with the variable z), as shown in 12. Example 12 Every man showed every boyl hisi picture. Vx: (man x) x, A(y)(and (show Y ((defl y z) ] (and (picture (defl y z)) (possess (his2 y z) (dell y z)) (= (his2 y z) z))) [Vz: (boy z) z]) (= (dell y z) (def3 z))) Equality here is equivalent to replacing the first function with the second value. Because of this fact and because of the meaning of the vertical bar in the restriction of the function, this representation can be simplified as shown in 13. Example 13 Every man showed every boyi hisi picture. Vx: (man x) x, A(y)(and (show y (def3 z) [Vz: (boy z) z]) (picture (def3 z)) (possess z (def3 z))) To handle the readings where his is anaphorically dependent on other noun phrases, our approach would be similar. Our representation of pronouns has several strengths. First, the representation provides useful information to a semantic routine concerning possi- ble intrasentential antecedents for the definite. Ai'- gument lists limit what can be the antecedent along with other factors like number and gender agree- ment and antecedent limitations particular to deft- nites. To demonstrate a strength of this approach, consider the initial representation of the following sentence: Example 14 Fred told the teacher who discusses every student with his mother to record her response. ((dell) ] (name (dell) Fred)), A(x) (tell x ((def2 x) I (and (inst (def2 x) teacher) (def2 x), A (y) (discuss Y [V(z) : (inst z student) z] (with ((def3 X y Z) I (and (inst (def3 x y z) • mother) (possess [(def2 x), A(w)(record w ((def5 x w) [ (and (inst (defs x w) response) (possess (her6 x w) (defs x w)))))]) antecedent for her. 
(possess (his4 x y z) (def3 x y z))))))))

Here the meaning of her response depends on the antecedent for her. What then are legal antecedents for her in this sentence? Certainly, the teacher is a fine candidate, but what about his mother? We cannot tell immediately whether his mother can be the antecedent for her. If the antecedent for his is every student, then his mother cannot be the antecedent for her. This accessibility problem results because the universal in the relative clause (i.e., every student) cannot have scope over her response; hence, his mother is not a good antecedent for her 5. Notice that (her6 x w) is not immediately compatible with the representation for his mother (i.e., (def3 x y z)). Before we can assert that his mother is the antecedent for her, we must pinpoint the meaning of that noun phrase; that is, we must determine the antecedent for his. Then, depending on our choice, the final meaning of his mother may or may not be accessible to the pronoun. Hence, we can explain why some definites in relative clauses are accessible to pronouns in the matrix sentence and others are not. C-command does not accurately predict when definites are accessible as antecedents for anaphoric expressions. This is not surprising, given that the final meaning of a definite determines its accessibility, and determining this meaning may require resolving pronouns and scoping ambiguities.

In this paper, we have introduced a composite representation for definite noun phrases with two ways to update their meaning as more information becomes available. This approach is consistent with the three computational constraints discussed in section 2, and also provides a good model of definite behavior. We refer the reader to Harper [Har90] for discussion of a wider variety of examples. In particular, we discuss examples of verb phrase ellipsis, Bach-Peters sentences, and definite donkey sentences [Gea62]. Our approach has been implemented and tested on a wide variety of examples. The logical form for pronouns and definites is provided as soon as a parse tree for the sentence is available. Then, the logical form for the sentence is incrementally updated until all ambiguities have been resolved. Logical form is very useful in the search for pronoun and definite antecedents. For more on the implementation see [Har90].

One shortcoming of our approach is our inability to provide a single logical form for a sentence with structural ambiguity. One possible solution to this problem (which we are currently investigating) is to store partial logical forms in a parse forest. As more information is processed, this intermediate representation will be incrementally updated until the parse forest is reduced to a single tree containing one logical form.

5 Strictly speaking, universal noun phrases cannot bind across sentences. However, speakers sometimes allow a universal to be the antecedent for a singular pronoun outside of its scope. Such pronouns are not usually understood as giving a bound variable reading. See Webber [Web78] for a discussion of this issue. A similar treatment can apply to definites which change as a function of a universal.

5 Past Approaches

Our work has benefited from the insights gained from other approaches to definite noun phrases in the literature. We considered both definite descriptions introduced by Russell [Rus05] and definite quantifiers (used by many including [Web83]) for representing definite noun phrases.
Neither representation allows us to handle intrasentential anaphoric definites while obeying our computational constraints. However, the in-place definite description is excellent for modeling definite subjects in verb phrase ellipsis and for capturing the behaviors of distributive definite noun phrases. On the other hand, a definite quantifier is not a good representation for a definite subject in verb phrase ellipsis (the strict meaning of The cat wants its toy. The dog does too cannot be provided because quantifiers do not have scope across sentences). In fact, to make the definite quantifier a feasible representation, we would have to make the binding properties of a definite quantifier different than the binding properties of a universal. Hornstein [Hor84] suggests that definite quantifiers have different binding properties than universals. His approach fails to consider how the process of pinpointing the meaning of a definite affects its ability to bind a pronoun. For more discussion of the strengths and weaknesses of these approaches, see Harper [Har90].

Other approaches to handling definites include the work of [Hei82; Kam81; Rob87; Kle87; PP88]. Each approach differs from ours both in scope and emphasis. We build an intermediate meaning for a sentence using only the constraints dictated by the syntax and local semantics and incrementally update it as we process contextual information. The work of Pollack and Pereira [PP88] also attempts to gradually build up a final interpretation of a sentence using their semantic and pragmatic discharge interpretation rules. However, our representation of a definite noun phrase locally stores information about those quantifiers in the sentence that can potentially quantify over it, while Pollack and Pereira's representation does not. The approaches of [Hei82; Kam81; Rob87; Kle87] require a large amount of contextual information before the representation of a sentence can be given (leading to a violation of our constraints).

References

[All87] James Allen. Natural Language Understanding. The Benjamin/Cummings Publishing Company, Menlo Park, CA, 1987.
[Gea62] Peter T. Geach. Reference and Generality. Cornell University Press, Ithaca, 1962.
[Har88] Mary P. Harper. Representing pronouns in logical form: Computational constraints and linguistic evidence. In The Proceedings of the 7th National Meeting of AAAI, 1988.
[Har90] Mary P. Harper. The Representation of Noun Phrases in Logical Form. PhD thesis, Brown University, 1990.
[Hei82] Irene Heim. The Semantics of Definite and Indefinite Noun Phrases. PhD thesis, University of Massachusetts, 1982.
[Hin85] Jaakko Hintikka. Anaphora and Definite Descriptions: Two Applications of Game-Theoretical Semantics. D. Reidel Publishing Company, Boston, 1985.
[Hor84] Norbert Hornstein. Logic as Grammar: An Approach to Meaning in Natural Language. MIT Press, Cambridge, MA, 1984.
[Kam81] Hans Kamp. A theory of truth and semantic representation. In Jeroen Groenendijk, Theo Janssen, and Martin Stokhof, editors, Formal Methods in the Study of Language, volume 1. Mathematische Centrum, Amsterdam, 1981.
[Kle87] Ewan Klein. VP ellipsis in DR theory. In J. Groenendijk, D. de Jongh, and M. Stokhof, editors, Studies in Discourse Representation and the Theory of Generalized Quantifiers. Foris, Dordrecht, 1987.
[PP88] Martha E. Pollack and Fernando C. N. Pereira. An integrated framework for semantic and pragmatic interpretation. In The Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics, 1988.
[Rei83] Tanya Reinhart. Anaphora and Semantic Interpretation. Croom Helm, London, 1983.
[Rob87] Craige Roberts. Modal Subordination, Anaphora, and Distributivity. PhD thesis, University of Massachusetts, 1987.
[Ros67] John R. Ross. Constraints on Variables in Syntax. PhD thesis, MIT, 1967.
[Rus05] Bertrand Russell. On denoting. Mind, 14:479-493, 1905.
[Rus71] Bertrand Russell. Reference. In J. F. Rosenberg and C. Travis, editors, Readings in the Philosophy of Language. Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1971.
[Sag76] Ivan A. Sag. Deletion and Logical Form. PhD thesis, MIT, 1976.
[SP84] L. K. Schubert and F. J. Pelletier. From English to Logic: Context-free computation of 'conventional' logical translations. American Journal of Computational Linguistics, 10:165-176, 1984.
[Web78] Bonnie L. Webber. A Formal Approach to Discourse Anaphora. PhD thesis, Harvard, 1978.
[Web83] Bonnie L. Webber. So what can we talk about now? In M. Brady and R. Berwick, editors, Computational Models of Discourse. MIT Press, Cambridge, MA, 1983.
1990
9
RESOLUTION OF COLLECTIVE-DISTRIBUTIVE AMBIGUITY USING MODEL-BASED REASONING Chinatsu Aone* MCC 3500 West Balcones Center Dr. Austin, TX 78759 [email protected] Abstract I present a semantic analysis of collective- distributive ambiguity, and resolution of such am- biguity by model-based reasoning. This approach goes beyond Scha and Stallard [17], whose reasoning capability was limited to checking semantic types. My semantic analysis is based on Link [14, 13] and Roberts [15], where distributivity comes uniformly from a quantificational operator, either explicit (e.g. each) or implicit (e.g. the D operator). I view the semantics module of the natural language sys- tem as a hypothesis generator and the reasoner in the pragmatics module as a hypothesis filter (cf. Simmons and Davis [18]). The reasoner utilizes a model consisting of domain-dependent constraints and domain-independent axioms for disambiguation. There are two kinds of constraints, type constraints and numerical constraints, and they are associated with predicates in the knowledge base. Whenever additional information is derived from the model, the Contradiction Checker is invoked to detect any contradiction in a hypothesis using simple mathe- matical knowledge. CDCL (Collective-Distributive Constraint Language) is used to represent hypothe- ses, constraints, and axioms in a way isomorphic to diagram representations of collective-distributive ambiguity. 1 Semantics of Collective- Distributive Ambiguity Collective-distributive ambiguity can be illustrated by the following sentence. (1) Two students moved a desk upstairs. (1) means either that two students TOGETHER moved one desk (a collective reading) or that each *The work described in this paper was done as a part of the author's doctoral dissertation at The University of Texas at Austin. of them moved a desk SEPARATELY (a distributive reading). Following Link [14, 13] and Roberts [15], distributivity comes from either an explicit quantifi- cational operator like each or an implicit distributive operator called the D operator. The D operator was motivated by the equivalence in the semantics of the following sentences. (2) a. Every student in this class lifted the piano. b. Students in this class each lifted the piano. c. Students in this class lifted the piano. (the distributive reading) Thus, the distributive readings of (1) and (2c) result from applying the D operator to the subjects. Now, look at another sentence "Five students ate four slices of pizza." It has 8 POSSIBLE readings be- cause the D operator may apply to each of the two arguments of eat, and the two NPs can take scope over each other. Thus, 2x2x2 = 8. i j have extended Link's and Roberts's theories to quantify over events in Discourse Representation Theory (cf. Kamp [10], Heirn [9], Aone [2]) so that these readings can be sys- tematically generated and represented in the seman- tics module. However, the most PLAUSIBLE reading is the "distributive-distributive reading", where each of the five students ate four slices one at a time, as represented in a discourse representation structure (DRS) in Figure 1 ~. Such plausibility comes partly from the lexical semantics of eat. From our "common sense", we know that "eating" is an individual activ- ity unlike "moving a desk", which can be done either individually or in a group. However, such plausi- bility should not be a part of the semantic theory, but should be dealt with in pragmatics where world knowledge is available. 
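The 2 x 2 x 2 space of candidate readings described above can be enumerated mechanically. The short Python sketch below is only an illustration of the counting argument (it does not construct DRSs); it collapses the two equivalent collective-collective readings, leaving the 7 distinct readings noted in footnote 1.

from itertools import product

def readings(subj="five students", obj="four slices of pizza"):
    """Enumerate candidate readings of a doubly quantified transitive
    sentence: each argument may be read collectively or distributively
    (via the D operator), and either argument may take wide scope,
    giving 2 x 2 x 2 = 8 combinations."""
    out = []
    for subj_mode, obj_mode, wide in product(
            ("collective", "distributive"),
            ("collective", "distributive"),
            ("subject wide scope", "object wide scope")):
        # The two collective-collective combinations differ only in scope,
        # which makes no difference there, so they are collapsed into one.
        if subj_mode == obj_mode == "collective" and wide == "object wide scope":
            continue
        out.append((f"{subj}: {subj_mode}", f"{obj}: {obj_mode}", wide))
    return out

rs = readings()
print(len(rs))        # 7 distinct readings out of the 8 combinations
for r in rs:
    print(r)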
In section 2, I'll identify the 1Actually the two collective-collective readings are equiv- alent, so there are 7 distinct readings. 2(i-part x I x) says "x I is an atomic individual-part of x" (cf. Link [12]), and CU, i.e. "Count-Unit", stands for a natural measure unit for students (cf. Krifka [11]). (student x) (amount x 5) (measure x CU) xl j (i-part x' x) Y (pizza y) (amount y 4) (measure y slice) y' e D (i-part y' y]' ,(eat e x' y') Figure h DRS for "Five students ate four slices of pizza" necessary knowledge and develop a reasoner, which goes beyond Scha and Stallard [17]. There is a special reading called a cumulative reading (cf. Scha [16]). (3) 500 students ate 1200 slices of pizza. The cumulative reading of (3) says "there were 500 students and each student ate some slices of pizza, totaling 1200 slices." The semantics of a cumulative reading is UNDERSPECIFIED and is represented as a collective-collective reading at the semantic level (cf. Link [13], Roberts [15], Aone [2]). This means that a cumulative reading should have a more specific rep- resentation at the pragmatics level for inferencing. Reasoning about cumulative readings is particularly interesting, and I will discuss it in detail. 2 Model-Based Reasoning for Disambiguation Although scope ambiguity has been worked on by many researchers (e.g. Grosz et al. [8]), the main problem addressed has been how to generate all the scope choices and order them according to some heuristics. This approach might be sufficient as far as scope ambiguity goes. However, collective- distributive ambiguity subsumes scope ambiguity and a heuristics strategy would not be a strong method. I argue that the reason why some of the readings are implausible (and even do not occur to some people) is because we have access to domain- dependent knowledge (e.g. constraints on predi- cates) along with domaln-independent knowledge (e.g. mathematical knowledge). I have developed a reasoner based on the theory of model-based reason- ing (cf. Simmons and Davis [18], Fink and Lusth [6], Davis and Hamscher [5]) for collective-distributive ambiguity resolution. The model that the reasoner uses consists of four kinds of knowledge, namely predicate constraints, two types of axioms, and sim- ple mathematical knowledge. First, I will discuss the representation language CDCL 3. Then, I will discuss how these four kinds of knowledge are utilized during reasoning. 2.1 CDCL CDCL is used to represent collective-distributive readings, constraints and axioms for reasoning. There are three types of CDCL clauses as in (4), and I will explain them as I proceed 4. (4) Core clause: (1 ((5) a0 4 al)) Number-of clause: (number-of al ?q:num) Number comparison clause: (<= ?q:num 1) 2.1.1 Expressing Collective and Distributive Readings in CDCL CDCL is used to express collective and distributive readings. Below, a's are example sentences, b's are the most plausible readings of the sentences, and c's are representations of b's in CDCL. (5) a. "5 students ate 4 slices of pizza." b. Each of the 5 students ate 4 slices of pizza one at a time. c. (eat a0 al): (5 (1 a0 -* 4 al)) 3CDCL stands for "Collective-Distributive Constraint Language". 4Though not described in this paper, CDCL has been ex- tended to deal with sentences with explicit quantifiers as in "Every student ate 4 slices of pizza" and sentences with n-ary predicates as in "2 companies donated 3 PC's to 5 schools". 
For example: (i) (eat a0 al): (every (1 a0 -* 4 al)) (ii) (donate a0 al a2): (2 (1 a0 --* (5 (1 a2 ---* (3) al)))) See Aone [2] for details of CDCL expressed in a context-free grammar. 2 (6) a. "5 dogs had (a litter of) 4 puppies." b. Each of the 5 mother dogs delivered a litter of 4 puppies. c. (deliver-offspring a0 al): (5 (1 a0 --~ (4) al)) (7) a. "5 alarms were installed in 6 buildings." b. Each of the 6 buildings was installed with 5 alarms one at a time. c. (installed-in a0 al): (6 (1 al --* 5 a0)) First, consider (5c). The representation should capture three pieces of information: scope relations, distributive-collective distinctions, and numerical re- lations between objects denoted by NP arguments. In CDCL, a0 and al signify the arguments of a pred- icate, e.g. (eat a0 al). The scope relation is repre- sented by the relative position of those arguments. That is, the argument on the left hand side of an ar- row takes wide scope over the one on the right hand side (cf. (5) vs. (7)). The numerical relation such as "there is an eating relation from EACH student to 4 slices of pizza" is represented by the numbers before each argument. The number outside the parenthe- ses indicates how many instances of such a numerical relation there are. Thus, (5c) says there are five in- stances of one-to-four relation from students to slices of pizza. CDCL is designed to be isomorphic to a di- agram representation as in Figure 2. --p s --p s --p s --p s --p \-p \-p \-p . \-p \-p \-p \-p \-p \-p \-p \-p \-p \-p \-p \-p s = a student p = a s~ce of pizza Figure 2:"5 students ate 4 slices of pizza." As for the collective-distributive information in CDCL, it was implicitly assumed in (5c) that both arguments were read DISTRIBUTIVELY. To mark that an argument is read COLLECTIVELY, a number be- fore an argument i s written in parentheses where the number indicates cardinality, as in (6c). There are two additional symbols, anynum and anyset for representing cumulative readings. The cumulative reading of (3) is represented in CDCL as follows. (s) (500 (1 a0 --* anynum0 al)) ~c (1200 (1 al --~ anynuml a0)) In (8), the situation is one in which each student (a0) ate a certain number of pizza slices, and the number may differ from student to student. Thus, anynumO represents any positive integer which can vary with the value of a0. 2.1.2 Constraints in CDCL CDCL is also used to express constraints. Each pred- icate, defined in the knowledge base, has its associ- ated constraints that reflect our "common sense". Thus, constraints are domain-dependent. There are two kinds of constraints: type constraints (i.e. constraints on whether the arguments should be read collectively or distributively) and numerical con- straints (i.e. constraints on numerical relations be- tween arguments of predicates.) There are 6 type constraints (C1 - C6) and 6 numerical constraints (C7- C12) as in Figure 3. C1. (?p:num (1 ?a:arg ---* ?q:num ?b:arg)) :::~z inconsistent "Both arguments are distributive." C2. (1 (?p:set ?a:arg ~ ?q:set ?b:arg)) :=~ inconsistent "Both arguments are collective." C3. (?p:num (1 a0 ---. ?r:set al)) :=~ inconsistent C4. (1 (?q:set al ~ ?r:num a0)) :=~ inconsistent "lst argument distributive and 2nd collective." C5. (1 (?p:set a0 ---* ?q:num al)) :=~ inconsistent C6. (?p:num (1 al ~ ?q:set a0)) :=~ inconsistent "lst argument collective and 2nd distributive." C7. (?p:num (1 ?a:arg ---* ?q:num ?b:arg)) =~ (<--- ?q:num ?r:num) C8. (?p:num (1 ?a:arg --* ?q:num ?b:arg)) =~ (<-- ?r:num ?q:num) C9. 
(?p:num (1 a0 --, 1 al)) :=~ inconsistent "A relation from a0 to al is a function." C10. (?p:num (1 al ---, 1 a0)) :=~ inconsistent "A relation from al to a0 is a function." Cll. (1 (?p:set a0 --* 1 al)) :=~ inconsistent "Like C9, the domain is a set of sets." C12. (1 (?p:set al --* 1 a0)) :=~ inconsistent "Like C10, the domain is a set of sets." Figure 3: Constraints Predicate constraints are represented as rules. Those except C7 and C8 are represented as "anti- rules". That is, if a reading does not meet a con- straint in the antecedent, the reading is considered inconsistent. C7 and C8 are ordinary rules in that if they succeed, the consequents are asserted and if they fail, nothing happens. The notation needs some explanation. Any sym- bol with a ?-prefix is a variable. There are 4 variable types, which can be specified after the colon of each variable: (9) ?a:arg ?b:num ?c:set ?d:n-s argument type (e.g. a0, al, etc.) positive integer type non-empty set type either num type or set type If an argument type variable is preceded by a set type variable, the argument should be read collec- tively while if an argument type variable is preceded by a number type variable, it should be read dis- tributively. To explain type constraints, look at sentence (6). The predicate (deliver-offspring a0 al) requires its first argument to be distributive and its second to be collective, since delivering offspring is an individ- ual activity but offspring come in a group. So, the predicate is associated with constraints C3 and C4. As for constraints on numerical relations between arguments of a predicate, there are four useful con- straints (C9 - C12), i.e. constraints that a given re- lation must be a FUNCTION. For example, the pred- icate deliver-o~spring in (6) has a constraint of a biological nature: offspring have one and only one mother. Therefore, the relation from al (i.e. off- spring) to a0 (i.e. mothers) is a function whose do- main is a set of sets. Thus, the predicate is associ- ated with C12. Another example is (7). This time, the predicate (installed-in a0 al) has a constraint of a physical nature: one and the same object cannot be installed in greater than one place at the same time. Thus, the relation from a0 (i.e. alarms) to al (i.e. buildings) is a many-to-one function. The pred- icate is therefore associated with C9. In addition, more specific numerical constraints are defined for specific domains. For example, the constraint "each client machine (al) has at most one diskserver (a0)" is expressed as in (10), given (disk-used-by a0 al). It is an instance of a general constraint C7. (10) (?p:num (1 al --* ?q:num a0)) (~= ?q:num 1) 2.1.3 Axioms in CDCL While constraints are associated only with particular predicates, axioms hold regardless of predicates (i.e. are domaln-independent). There are two kinds of axioms as in Figure 4. The first two are con- straint axioms, i.e. axioms about predicate con- straints. Constraint axioms derive more constraints if a predicate is associated with certain constraints. CA1. CA2. RA1. RA2. RA3. 
(?m:num (1 ?a:arg --~ 1 ?b:arg)) (number-of ?a:arg ?re:hum) & (number-of ?b:arg ?n:num) & (<= ?n:num ?m:num) (?l:num (?s:set ?a:arg --~ 1 ?b:arg)) (number-of ?a:arg ?re:hum) & (number-of ?b:arg ?n:num) & (<= ?n:num ?re:hum) (?m:num (1 ?a:arg -~ ?y:n-s ?b:arg)) (number-of ?a:arg ?m:num) (?re:hum (1 ?a:arg --* ?y:num ?b:arg)) & (<= ?y:num ?z:num) (number-of ?b:arg ?n:num) & (<= ?n:num (* ?m:num ?z:num)) (?m:num (1 ?a:arg --* ?y:num ?b:arg)) & (<= ?z:num ?y:num) (number-of ?b:arg ?n:num) & (<= ?z:num ?n:num) Figure 4: Axioms (11) C9. CA1. The others are reading axioms. They are ax- ioms about certain assertions representing particu- lar readings. Reading axioms derive more assertions from existing assertions. The constraint axiom CA1 derives an additional numerical constraint. It says that if a relation is a function, the number of the objects in the range is less than or equal to the number of the objects in the domain. This axiom applies when constraints C9 or C10 is present. For example: (?p:num (1 a0 ~ 1 al)) (?m:num (1 ?a:ar s --* 1 ?b:arg)) (number-of ?a:arg ?re:hum) & (number-of ?b:arg ?n:num) & (<= ?n:num ?re:hum) (number-of a0 ?m:num) & (number-of al ?n:num) & (<= ?n:num ?m:num) The constraint axiom CA2 is similar to CA1 except that the domain is a set of sets. The reading axiom RA1 asserts the number of all objects in the domain of a relation. For example: (12) A1. (5 (1 a0 --* 6 al)) RA1. (?m:num (1 -- ?y:n-s ?b:arg)) (number-of ?a:arg ?m:num) (number-of a0 5) 4 Given an assertion A1, RA1 asserts that the number of objects in the domain is 5. The reading axiom RA2 is for a relation where each object in the domain is related to less than or equal to n objects in the range. In such a case, the number of the objects in the range is less than or equal to the number of objects in the domain multiplied by n. For example: (13) A2. RA2. (5 (1 a0 ~ ?x:num al)) & (<----- ?x:num 2) (?m:num (1 ?a:arg --+ ?y:num ?b:arg)) & (<= ?y:num ?z:num) (number-of ?b:arg ?n:num) & (<= ?n:num (, ?m:num ?z:num)) (number-of al ?n:num) & (<---- ?n:num (. 5 2)) The last axiom RA3 is similar to RA2. These axioms are necessary to reason about con- sistency of cumulative readings when numerical con- straints are associated with the predicates. For ex- ample, given "5 alarms were installed in 6 buildings", intuitively we eliminate its cumulative reading be- cause the number of buildings is more than the num- ber of alarms. I claim that behind this intuition is a calculation and comparison of the number of build- ings and the number of alarms given what we know about "being installed in". The constraint axioms above are intended to simulate how humans make such comparisons between two groups of objects re- lated by a predicate that has a numerical constraint. The reading axioms, on the other hand, are intended to simulate how we do such calculations of the num- ber of objects from what we know about the reading (cf. 2.2.2). 2.2 Model-Based Reasoner In this section, I describe how the reasoner per- forms disambiguation. But first I will describe spe- cial "unification" which is the basic operation of the reasoner 5 . 2.2.1 Unification "Unification" is used to unify CDCL clauses during the reasoning process. However, it is not standard unification. It consists of three sequential matching operations: Syntax Match, ARG Match, and Value Match. First, Syntax Match tests if the syntax of 5The reasoner has been implemented in Common Lisp. 
Unification and forward chaining rule codes are based on Ableson and Sussman [1] and Winston and Horn [19]. two expressions matches. The syntax of two expres- sions matches when they belong to the same type of CDCL clauses (cf. (4)). If Syntax Match succeeds, ARG Match tests if the argument constants (i.e. a0, al) in the two expressions match. If this operation is successful, Value Match is performed. There are two ways Value Match fails. First, it fails when types do not match. For example, (14a) fails to unify with (14b) because ?r:set does not match the integer 4. (14) a. (?p:num (?q:num a0 --* ?r:set al)) b. (5 (1 a0 ---* 4 al)) The second way Value Match fails is two values of the same type are simply not the same. (15) a. (1 (?p:set al --* 1 a0)) b. (1 ((4) al --* 5 a0)) Unification fails only when the first and second operations succeed and the third one fails, and uni- fication succeeds only when all the three operations succeed. Otherwise, unification neither succeeds nor fails. 2.2.2 Inferences Using A Model Each reading (i.e. a hypothesis) generated by the se- mantics module is stored in what I call a reading record (RR). Initially, it just stores assertions that represent the reading. As reasoning proceeds, more information is added to it. When the RR is updated and inconsistency arises, the RR is marked as incon- sistent and the hypothesis is filtered out. The reasoner uses a model consisting of four kinds of knowledge. Inferences that use these four (namely Predicate-Constraint inference, Constraint- Axiom inference, Reading-Axiom inference, and the Contradiction Checker) are controlled as in Figure 5. First, Predicate-Constraint inference tests if each hypothesis satisfies predicate constraints. This is done by unifying each CDCL clause in the hypoth- esis with predicate constraints. For example, take a type constraint C1 and a hypothesis HI. (16) H1. (eat a0 al): (5 (1 a0 --* (4) al)) cl. (?v:num (I ?a:arg -, ?q:num ?b:arg)) :=# inconsistent inconsistent When a predicate constraint is an anti-rule like C1, a hypothesis is filtered out if it fails to unify with the constraint. When a predicate constraint is a rule like C7, the consequent is asserted into the RR if the hypothesis successfully unifies with the antecedent. Figure 5: Control Structure Second, Constraint-Axiom inference derives addi- tional CONSTRAINTS by unifying antecedents of con- straint axioms with predicate constraints. If the uni- fication is successful, the consequent is stored in each RR (cf. (11)). (19) Third, Reading-Axiom inference derives more AS- SERTIONS by unifying reading axioms with assertions in each RR (cf. (12) and (13)). While these three inferences are performed, the fourth kind, the Contradiction Checker, constantly monitors consistency of each RR. Each RR contains a consistency database. Every time new infor- mation is derived through any other inference, the Contradiction Checker updates this database. If, at any point, the Contradiction Checker finds the new information inconsistent by itself or with other infor- mation in the database, the RR that contains this (20) database is filtered out. For example, take the cumulative reading of (7a), which is implausible because there should be at least 6 alarms even when each building has only one alarm. The reading is represented in CDCL as fol- lows. (17) (5 (1 a0 --* anynum0 al)) & (6 (1 al --* anynuml a0)) The Contradiction Checker has simple mathematical knowledge and works as follows. 
Initially, the con- (21) sistency database records that the upper and lower bounds on the number of objects denoted by each argument are plus infinity and zero respectively. (18) Number-of-a0 [0 +inf] Number-of-al [0 +inf] Constraint NIL Consistent? T Then, when the constraint axiom CA1 applies to the predicate constraint C9 associated with installed-in (cf. (11)), a new numerical constraint "the number of buildings (al) should be less than or equal to the number of alarms (a0)" is added to the database. Number-of-a0 [0 +inf] Number-of-al [0 +inf] Constraint (<= al a0) Consistent? T Now, the reading axiom RA1 applies to the first clause of (17) and adds an assertion (number-of a0 5) to the database (cf. (12)). The database is up- dated so that both upper and lower bounds on a0 are 5. Also, because of the constraint (<= al a0), the upper bound on al is updated to 5. Number-of-a0 [5 5] Number-of-al [0 5] Constraint (<= al a0) Consistent? T Finally, RA1 applies to the second clause of (17) and derives (number-of al 6). However, the Contradic- tion Checker detects that this assertion is inconsis- tent with the information in the database, i.e. the number of al must be at most 5. Thus, the cumula- tive reading is filtered out. Number-of-a0 [5 5] Number-of-al [0 5] Constraint (<= al a0) Consistent? NIL [6 6] 2.2.3 Example I illustrate how the reasoner disambiguates among possible collective and distributive readings of a sen- tence. The sentence (7a) "5 alarms were installed in 6 buildings" generates 7 hypotheses as in (22). (22) R1 (5 (1 a0 -~ 6 al)) R2 (1 ((5) a0---. 6 al)) R3 (5 (1 a0 ---* (6) al)) R4 (6 (1 al ~ 5 a0)) R5 (1 ((6) al ~ 5 a0)) R6 (6 (1 al --* (5) a0)) R7 (5 (1 a0 ~ anynumO al)) & (6 (1 al ---+ anynuml a0)) The predicate (be-installed a0 al) is associated with two constraints C1 and C9. Predicate-Constraint inference, using the type constraint C1 (i.e. both ar- guments should be read distributively), filters out R2, R3, R5, and R6. The numerical constraint, C9, requires that the relation from alarms to buildings be a function. This eliminates R1, which says that each alarm was installed in 6 buildings. The cumu- lative reading R7 is filtered out by the other three inferences, as described in section 2.2.2. Thus, only R4 is consistent, which is what we want. 3 Conclusion Acknowledgments I would like to thank Prof. Manfred Krifka and Prof. Benjamin Kuipers for their useful comments. The prototype of the reasoner was originally built using Algernon (cf. Crawford [3], Crawford and Kuipers [4]). Many thanks go to Dr. James Crawford, who gave me much useful help and advice. References [1] [2] Harold Abelson and Gerald Sussman. Structure and Interpretation of Computer Programs. The MIT Press, Cambridge, Massachusetts, 1985. [3] Chinatsu Aone. Treatment of Plurals and Collective-Distributive Ambiguity in Natural Language Understanding. PhD thesis, The Uni- versity of Texas at Austin, 1991. The work described in this paper improves upon previous works on collective-distributive ambiguity [4] (cf. Scha and Stallard [17], Gardiner et al. [7]), since they do not fully explore the necessary reason- ing. I believe that the reasoning method described in this paper is general enough to solve collective- distributive problems because 1) any special con- straints can be added as new predicates are added to the KB, and 2) intuitively simple reasoning to [5] solve numerical problems is done by using domain- independent axioms. 
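The interval-narrowing behaviour of the consistency database shown in (18)-(21) can be sketched as follows. The class and method names are illustrative only (the actual reasoner was implemented in Common Lisp), and the sketch supports just what the cumulative-reading example needs: one ordering constraint and exact number-of assertions.

import math

class ConsistencyDB:
    """Tracks [lower, upper] bounds on the number of objects per argument,
    plus at most one ordering constraint (<= small big), as in (18)-(21)."""
    def __init__(self, args=("a0", "a1")):
        self.bounds = {a: [0, math.inf] for a in args}
        self.order = None          # e.g. ("a1", "a0") meaning |a1| <= |a0|
        self.consistent = True

    def add_order(self, small, big):
        self.order = (small, big)
        self._propagate()

    def assert_number_of(self, arg, n):
        lo, hi = self.bounds[arg]
        if n < lo or n > hi:
            self.consistent = False      # contradiction: the reading is filtered out
            return
        self.bounds[arg] = [n, n]
        self._propagate()

    def _propagate(self):
        if self.order:
            small, big = self.order
            # |small| <= |big|: tighten small's upper bound and big's lower bound
            self.bounds[small][1] = min(self.bounds[small][1], self.bounds[big][1])
            self.bounds[big][0] = max(self.bounds[big][0], self.bounds[small][0])

# Cumulative reading of "5 alarms were installed in 6 buildings":
db = ConsistencyDB()
db.add_order("a1", "a0")          # from C9 + CA1: number of buildings <= number of alarms
db.assert_number_of("a0", 5)      # RA1 on the first clause: 5 alarms
db.assert_number_of("a1", 6)      # RA1 on the second clause: 6 buildings, a contradiction
print(db.consistent)              # False: the cumulative reading R7 is filtered out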
However, the current reasoning capability should be extended further to include different kinds of knowledge. For example, while the cumulative read- [6] ings of "5 alarms were installed in 6 building" is implausible and is successfully filtered out by the reasoner, that of "5 students ate 4 slices of pizza" is less implausible because a slice of pizza can be [7] shared by 2 students. The difference between the two cases is that an alarm is not divisible but a slice of pizza is. Thus knowledge about divisibility of ob- jects must be exploited. Further, if an object is divis- ible, knowledge about its "normal size" with respect to the predicate must be available with some prob- [8] ability. For example, the cumulative reading of "5 students ate 4 large pizzas" is very plausible because a large pizza is UNLIKELY to be a normal size for an individual to eat. On the other hand, the cumula- tive reading of "5 students ate 4 slices of pizza" is [9] less plausible because a slice of pizza is more LIKELY to be a normal size for an individual consumption. James Crawford. Access-Limited Logic - A Lan- guage for Knowledge Representation. PhD the- sis, The University of Texas at Austin, 1990. James Crawford and Benjamin Kuipers. To- wards a theory of access-limited logic for knowl- edge representation. In Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning, Los Altos, California, 1989. Morgan Kaufmann. Randall Davis and Walter Hamscher. Model- based reasoning: troubleshooting. In H. E. Shrobe, editor, Exploring Artificial Intelligence. Morgan Kaufmann, Los Altos, California, 1988. Pamela Fink and John Lusth. A general expert system design for diagnostic problem solving. IEEE Transactions on Systems, Man, and Cy- bernetics, 17(3), 1987. David Gardiner, Bosco Tjan, and James Single. Extended conceptual structures notation. Tech- nical Report TR 89-88, Department of Com- puter Science, University of Minnesota, Min- neapolis, Minnesota, 1989. Barbara Grosz, Douglas Appelt, Paul Martin, and Fernando Pereira. Team: An experiment in the design of transportable natural-language interfaces. Artificial Intelligence, 32, 1987. Irene Heim. The Semantics of Definite and In- definite Noun Phrases. PhD thesis, University of Massachusetts at Amherst, 1982. 7' [10] Hans Kamp. A theory of truth and semantic representation. In Groenendijk et al., editor, Truth, Interpretation, and Information. Foris, 1981. [11] Manfred Krifka. Nominal reference and tempo- ral constitution: Towards a semantics of quan- tity. In Proceedings of the Sixth Amsterdam Col- loquium, pages 153-173, University of Amster- dam, Institute for Language, Logic and Infor- mation, 1987. [12] Godehard Link. The logical analysis of plurals and mass terms: Lattice-theoretical approach. In Rainer Banerle, Christoph Schwarze, and Arnim von Steehow, editors, Meaning, Use, and Interpretations of Language. de Gruyter, 1983. [13] Godehard Link. Plural. In Dieter Wunderlich and Arnim yon Steehow, editors, To appear in: Handbook of Semantics. 1984. [14] Godehard Link. Generalized quantifiers and plurals. In P. Gaerdenfors, editor, General- ized Qnantifiers: Linguistics and Logical Ap- proaches. Reidel, 1987. [15] Craige Roberts. Modal Subordina- tion, Anaphora, and Distribntivitg. PhD thesis, University of Massachusetts at Amherst, 1987. [16] Remko Scha. Distributive, collective, and cumulative quantification. In Janssen and Stokhof, editors, Truth, Interpretation and In- formation. Foris, 1984. 
[17] Remko Scha and David Stallard. Multi-level plurals and distributivity. In Proceedings of the 26th Annual Meeting of the ACL, 1988.
[18] Reid Simmons and Randall Davis. Generate, test and debug: Combining associational rules and causal models. In Proceedings of the Tenth International Joint Conference on Artificial Intelligence, Los Altos, California, 1987.
[19] Patrick Winston and Berthold Horn. LISP, 3rd Edition. Addison-Wesley, Reading, Massachusetts, 1989.
1991
1
TYPE-RAISING AND DIRECTIONALITY IN COMBINATORY GRAMMAR* Mark Steedman Computer and Information Science, University of Pennsylvania 200 South 33rd Street Philadelphia PA 19104-6389, USA (Interact: steedman@cis, upenn, edu) ABSTRACT The form of rules in ¢ombinatory categorial grammars (CCG) is constrained by three principles, called "adja- cency", "consistency" and "inheritance". These principles have been claimed elsewhere to constrain the combinatory rules of composition and type raising in such a way as to make certain linguistic universals concerning word order under coordination follow immediately. The present paper shows that the three principles have a natural expression in a unification-based interpretation of CCG in which di- rectional information is an attribute of the arguments of functions grounded in string position. The universals can thereby be derived as consequences of elementary assump- tions. Some desirable results for grammars and parsers fol- low, concerning type-raising rules. PRELIMINARIES In Categorial Grammar (CG), elements like verbs are associated with a syntactic "category", which identi- fies their functional type. I shall use a notation in which the argument or domain category always ap- pears to the right of the slash, and the result or range category to the left. A forward slash / means that the argument in question must appear on the right, while a backward slash \ means it must appear on the left. (1) enjoys := (S\NP)/NP The category (S\NP)/NP can be regarded as both a syntactic and a semantic object, in which symbols like S are abbreviations for graphs or terms including interpretations, as in the unification-based categorial grammars ofZeevat et al. [8] and others (and cf. [6]). Such functions can combine with arguments of the appropriate type and position by rules of functional application, written as follows: (2) The Functional Application Rules: a. X/Y Y =~ X (>) b. Y X\Y :=~ X (<) Such rules are also both syntactic and semantic rules *Thanks to Michael Niv and Sm Shieber. Support from: NSF Grant CISE IIP CDA 88-22719, DARPA grant no. N0014-90J- 1863, and ARO grant no. DAAL03-89-C0031. of combination in which X and Y are abbreviations for more complex objects which combine via unifi- cation. They allow context-free derivations like the following (the application of rules is indicated by in- dices >, < on the underlines: (3) Mary enjoys ~usicals m, (s\m')/~ ]w . . . . . . . . . . . . . . . . > s\lP . . . . . . . . . . . . . < s The derivation can be assumed to build a composi- tional interpretation, (enjoy' musicals') mary', say. Coordination can be included in CG via the follow- ing rule, allowing constituents of like type to conjoin to yield a single constituent of the same type: (4) X conj X =~ X (5) I love and admire musicals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (s\m')/m, The rest of the derivation is exactly as in (3). In order to allow coordination of contiguous strings that do not constitute constituents, CCG allows certain operations on functions related to Curry's combina- tots [1]. Functions may compose, as well as apply, under rules like the following: (6) Forward Composition: X/Y Y/Z ~B X/Z (>B) The rule corresponds to Curry's eombinator B, as the subscripted arrow indicates. 
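Because categories and rules are finite terms and partial functions over them, such derivations can be checked mechanically. The following Python sketch (the nested-tuple encoding of categories and the function names are assumptions of the sketch, not part of CCG notation) implements the application rules in (2) and forward composition (6), and reproduces the reductions in (3) and in (7) below.

# Categories are either atoms ('NP', 'S', 'VP') or triples
# (result, slash, argument); '/' wants its argument to the right,
# '\\' wants it to the left.
ENJOYS = (('S', '\\', 'NP'), '/', 'NP')        # enjoys := (S\NP)/NP, as in (1)

def forward_apply(x, y):
    """X/Y  Y  =>  X     (rule (2)a, >)"""
    if isinstance(x, tuple) and x[1] == '/' and x[2] == y:
        return x[0]
    return None

def backward_apply(y, x):
    """Y  X\\Y  =>  X    (rule (2)b, <)"""
    if isinstance(x, tuple) and x[1] == '\\' and x[2] == y:
        return x[0]
    return None

def forward_compose(x, y):
    """X/Y  Y/Z  =>  X/Z (rule (6), >B)"""
    if (isinstance(x, tuple) and x[1] == '/'
            and isinstance(y, tuple) and y[1] == '/' and x[2] == y[0]):
        return (x[0], '/', y[2])
    return None

# "Mary enjoys musicals", as in (3): apply forwards, then backwards.
vp = forward_apply(ENJOYS, 'NP')               # => ('S', '\\', 'NP'), i.e. S\NP
print(backward_apply('NP', vp))                # => 'S'

# "may enjoy", as in (7): (S\NP)/VP composed with VP/NP gives (S\NP)/NP.
may, enjoy = (('S', '\\', 'NP'), '/', 'VP'), ('VP', '/', 'NP')
print(forward_compose(may, enjoy))             # => (('S', '\\', 'NP'), '/', 'NP')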
It allows sentences like Mary admires, and may enjoy, musicals to be ac- cepted, via the functional composition of two verbs (indexed as >B), to yield a composite of the same category as a transitive verb. Crucially, composition also yields the appropriate interpretation for the com- posite verb may prefer in this sentence (the rest of the derivation is as in (3)): 71 (7) admires and may enjoy (S\NP)/NP conj (S\NP)/VP VP/NP ............... >B (SkWP)Im, (s\~)/~P CCG also allows type-raising rules, related to the combinator T, which turn arguments into functions over functions-over-such-arguments. These rules al- low arguments to compose, and thereby lake part in coordinations like I dislike, and Mary enjoys, musi- cals. They too have an invariant compositional se- mantics which ensures that the result has an appro- priate interpretation. For example, the following rule allows such conjuncts to form as below (again, the remainder of the derivation is omitted): (8) Subject ~pe-raising: NP : y ~T S/(S\NP) (> T) (9) I dislike and Rax'y *"joys IP (S\IP)/IP conj lP (S\IP)/|P ........ >T ........ >T Sl(S\lP) Sl(S\IP) .............. =--->s .................. >S SlIP SlIP sliP This apparatus has been applied to a wide variety of phenomena of long-range dependency and coordinate structure (cf. [2], [5], [6]). 1 For example, Dowty pro- posed to account for the notorious "non-constituent" coordination in (10) by adding two rules that are sim- ply the backward mitre-image versions of the com- position and type raising rules already given (they are indicated in the derivation by <B and <T). 2 This is a welcome result: not only do we capture a construction that has been resistant to other formalisms. We also satisfy a prediction of the theory, for the two back- ward rules arc clearly expected once we have chosen to introduce their mirror image originals. The ear- lier papers show that, provided type raising is limited to the two "order preserving" varieties exemplified in these examples, the above reduction is the only one permitted by the lexicon of English. A number of related cross-linguistic regularities in the dependency of gapping upon basic word order follow ([2], [6]). The construction also strongly suggests that all NPs (etc.) should be considered as type raised, preferably I One further class of rules, corresponding to the combinator S, has been proposed. This combinator is not discussed here, but all the present results transfer to tho6e rules as well. 2This and other long examples have been "flmted" to later po- sitions in the text. in the lexicon, and that categories like NP should not reduce at all. However, this last proposal seems tc implies a puzzling extra ambiguity in the lexicon, and for the moment we will continue to view type-raising as a syntactic rule. The universal claim depends upon type-raising be- ing limited to the following schemata, which do not of themselves induce new constituent orders: (11) x =~T T/if\X) X :::}T T\(T/X) If the following patterns (which allow constituent or- ders that are not otherwise permitted) were allowed, the regularity would be unexplained, and without fur- ther restrictions, grammars would collapse into free order: (12) X :::}T T/(T/X) X ::~T T\(T\X) But what are the principles that limit combinatory rules of grammar, to include (11) and exclude (12)? The earlier papers claim that all CCG rules must conform to three principles. 
The first is called the Principle of Adjacency [5, pA05], and says that rules may only apply to string-adjacent non-empty cate- gories. It amounts to the assumption that combina- tops will do the job. The second is called the Prin- ciple of Directional Consistency. Informally stated, it says that rules may not override the directionality on the "cancelling" Y category in the combination. For example, the following rule is excluded: (13) • X\Y Y => X The third is the Principle of Directional Inheritance, which says that the directionality of any argument in the result of a combinatory rule must be the same as the directionality on the corresponding argument(s) in the original functions. For example, the following composition rule is excluded: (14) * X/Y Y/Z => X\Z However, rules like the following are permitted: (15) Y/Z X\Y => X/Z (<Bx) This rule (which is not a theorem in the Lambek cal- culus) is used in [5] to account for examples like I shall buy today and read tomorrow, the collected works of Proust, the crucial combination being the following: (16) ... read tomorxow ... vP/m, vP\vP ............ <Bx VP/NP The principles of consistency and inheritance amount 72 to the simple statement that combinatory rules may not contradict the directionality specified in the lexi- con. But how is this observation to be formalised, and how does it bear on the type-raising rules? The next section answers these questions by proposing an interpretation, grounded in string positions, for the symbols / and \ in CCG. The notation will temporar- ily become rather heavy going, so it should be clearly understood that this is not a proposal for a new CCG notation. It is a semantics for the metagrammar of the old CCG notation. DIRECTIONALITY IN CCG The fact that directionality of arguments is inher- ited under combinatory rules, under the third of the principles, strongly suggests that it is a property of arguments themselves, just like their eategorial type, NP or whatever, as in the work of Zeevat et al. [8][9]. However, the feature in question will here be grounded in a different representation, with signif- icantly different consequences, as follows. The basic form of a combinatory rule under the principle of ad- jacency is a fl ~ ~,. However, this notation leaves the linear order of ot and fl implicit. We therefore temporarily expand the notation, replacing categories like NP by 4-tuples, of the form {e~, DPa, L~, Ra}, comprising: a) a type such as NP; b) a Distinguished Position, which we will come to in a minute; c) a Left- end position; and d) a Right-end position. The Prin- ciple of Adjacency finds expression in the fact that all legal combinatory rules must have the the form in (17), in which the right-end of ~ is the same as the left-end of r: We will call the position P2, to which the two categories are adjacent, the juncture. The Distinguished Position of a category is simply the one of its two ends that coincides with the junc- ture when it is the "'cancelling" term Y. A rightward combining function, such as the transitive verb enjoy, specifies the distinguished position of its argument (here underlined for salience) as being that argument's left-end. So this category is written in full as in (18)a, using a non-directional slash/. The notation in (a) is rather overwhelming. When positional features are of no immediate relevance in such categories, they will be suppressed. 
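Stated over the ordinary slash notation, the Consistency and Inheritance principles can be checked mechanically for a candidate binary rule. The Python sketch below (its string encoding of schematic categories and all names are assumptions of the sketch) rejects (13) and (14) and accepts the crossed composition rule (15).

def parse(cat):
    """Parse a schematic category 'X/Y' or 'X\\Y' into (result, slash, arg);
    a bare atom parses to (atom, None, None)."""
    for s in ('/', '\\'):
        if s in cat:
            r, a = cat.split(s)
            return r, s, a
    return cat, None, None

def check_rule(left, right, out):
    """Check a candidate rule 'left right => out' against the Principles of
    Directional Consistency and Inheritance.  The 'cancelling' category Y is
    the argument of whichever input is the main function."""
    lr, ls, la = parse(left)
    rr, rs, ra = parse(right)
    _, oslash, oa = parse(out)
    # Which input is the main function over Y?  Its argument must be the other
    # input itself (application) or the other input's result (composition).
    if la is not None and la in (rr, right):
        fslash, y_side, sec = ls, 'right', (rr, rs, ra)
    elif ra is not None and ra in (lr, left):
        fslash, y_side, sec = rs, 'left', (lr, ls, la)
    else:
        return False
    # Consistency: a '/' function must find Y to its right, a '\\' to its left.
    if (fslash == '/') != (y_side == 'right'):
        return False
    # Inheritance: if the secondary input is a function over Z, the output must
    # keep Z with the same directionality.
    _, sslash, sarg = sec
    if sarg is not None and not (oa == sarg and oslash == sslash):
        return False
    return True

print(check_rule('X\\Y', 'Y', 'X'))      # False: rule (13) violates Consistency
print(check_rule('X/Y', 'Y/Z', 'X\\Z'))  # False: rule (14) violates Inheritance
print(check_rule('Y/Z', 'X\\Y', 'X/Z'))  # True:  rule (15), crossed composition
print(check_rule('X/Y', 'Y/Z', 'X/Z'))   # True:  forward composition (6)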
For example, when we are thinking of such a function as a function, rather than as an argu- ment, we will write it as in (18)b, where VP stands for {VP, DFVp, Lw,, Rvp}, and the distinguished position of the verb is omitted. It is important to note that while the binding of the NP argument's Distin- guished Position to its left hand end L,p means that enjoy is a rightward function, the distinguished posi- tion is not bound to the right hand end of the verb, t~verb. It follows that the verb can potentially com- bine with an argument elsewhere, just so long as it is to the right. This property was crucial to the earlier analysis of heavy NP shift. Coupled with the parallel independence in the position of the result from the position of the verb, it is the point at which CCG parts company with the directional Lambek calculus, as we shall see below. In the expanded notation the rule of forward ap- plication is written as in (19). The fact that the dis- tinguisbed position must be one of the two ends of an argument category, coupled with the requirement of the principle of Adjacency, means that only the two order-preserving instances of functional applica- tion shown in (2) can exist, and only consistent cate- gories can unify with those rules. A combination under this rule proceeds as follows. Consider example (20), the VP enjoy musicals. The derivation continues as follows. First the positional variables of the categories are bound by the positions in which the words occur in the siring, as in (21), which in the first place we will represent explicitly, as numbered string positions, s Next the combinatory rule (19) applies, to unify the argument term of the function with the real argument, binding the remain- ing positional variables including the distinguished position, as in (22) and (23). At the point when the combinatory rule applies, the constraint implicit in the distinguished position must actually hold. That is, the distinguished position must be adjacent to the functor. Thus the Consistency property of combinatory rules follows from the principle of Adjacency, embodied in the fact that all such rules identify the distinguished position of the argument terms with the juncture P2, the point to which the two combinands are adjacent, as in the application example (19). The principle of Inheritance also follows directly from these assumptions. The fact that rules corre- spond to combinators like composition forces direc- tionality to be inherited, like any other property of an argument such as being an NP. It follows that only instances of the two very general rules of compo- sition shown in (24) are allowed, as a consequence of the three Principles. To conform to the principle of consistency, it is necessary that L~ and /~, the ends of the cancelling category Y, be distinct posi- tions - that is, that Y not be coerced to the empty string. This condition is implicit in the Principle of Adjacency (see above), although in the notation of 3 Declaritivising position like this may seem laborious, but it is a tactic familiar from the DCG literature, from which we shall later borrow the elegant device of encoding such positions implicitly in difference-lists. 73 (1o) give a policeman a flower and (VP/liP)/tip lip ~ conj <T <T (~/m~)\C (vP/SP)/mD vPXC~/SP) ------" . . . . . . "---- . . . . . . . . . . " . . . . . < e Vl'\(~/lw) a dog a bone ~\(~lW) ,iP liP liP .................. <T <T CVP/sP) \ (CVP/sP)/sP) ~\ (vv/sP) <B vp\ (VV lSi.) 
<&> (17) {a, DPa, Px,P~} {]~,DP~,P2, Ps} ::~ {7, DP.y,P1,Pa} (18) a. enjoy :-- {{VP, DPvp, Lvp, Rvp}/{NP, L.p, Lnp, R.p}, DPverb, Leerb, R~erb} b. enjoy :-- {VP/{NP, Lnp, L.p, P~p}, Leerb, R~erb} (19) {{X, DP., PI, P3}/{Y, P2, P2, P3}, PI, P2} {Y, P2, P2, P3} :~ {X, DPz, PI, P31 (20) 1 enjoy 2 musicals 3 {VP/{NP, Larg, Larg,Rare},Llun,Rlu.} {NP, DPnp, Lnp,R.p} (21) 1 enjoy 2 musicals 3 {VP/{NP, La,,, La,,, R.r,}, 1, 2} {NP, DPnp, 2, 3} (22) I enjoy 2 musicals 3 {VP/{NP, L.rg,Larg,Ro~g},l,2} {NP, DP.p,2,3} {X/{Y, P2, P2, P3}, P1, P2} {Y, P2, P2, P3} (23) 1 enjoy 2 musicals 3 {VP/{NP, 2,2,3~,l,2~ {NP,2,2, 3} {vP, 1, 3} (24) a. {{X, DP~,L.,R.}/{Y, P2,P2,P~},P1,P2} {{Y, P2,P2,P~}/{Z, DPz,Lz,R.},P2,P3) :~ {{X, DPx,L,,,R~,}/{Z, DP.,L.,R.},PI,P3} b. {{Y, P2, Ly, P2}/{Z, DPz, Lz, Rz}, PI, P2} {{X, DPx, L~, R~}/{Y, P2, Lu, P2}, P2, P3} :~ {{X, DPx, Lx,Rz}/{Z, DPz,L,,Rz},PI,P3} (25) The Possible Composition Rules: a. X/Y Y/Z =~B X/Z (>B) b. X/Y Y\Z =~B X\Z (>Bx) e. Y\Z X\Y =~B X\Z (<B) d. Y/Z X\Y ::*'B X/Z (<Bx) 7'4 the appendix it has to be explicitly imposed. These schemata permit only the four instances of the rules of composition proposed in [5] [6], given in (25) in the basic CCG notation. "Crossed" rules like (15) are still allowed Coecause of the non-identity noted in the discussion of (18) between the distinguished posi- tion of arguments of functions and the position of the function itself). They are distinguished from the cor- responding non-crossing rules by further specifying DP~, the distinguished position on Z. However, no rule violating the Principle of Inheritance, like (14), is allowed: such a rule would require a different distin- guished position on the two Zs, and would therefore not be functional composition at all. This is a desir- able result: the example (16) and the earlier papers show that the non-order-preserving instances (b, d) are required for the grammar of English and Dutch. In configurational languages like English they must of course be carefully restricted as to the categories that may unify with Y. The implications of the present formalism for the type-raising rules are less obvious. Type raising rules are unary, and probably lexical, so the principle of adjacency does not apply. However, we noted earlier that we only want the order-preserving instances (11), in which the directionality of the raised category is the reverse of that of its argument. But how can this reversal be anything but an arbitrary property? Because the directionality constraints are grounded out in string positions, the distinguished position of the subject argument of a predicate walks - that is, the right-hand edge of that subject - is equivalent to the distinguished position of the predicate that consti- tutes the argument of an order-preserving raised sub- ject Gilbert that is, the left-hand edge of that pred- icate. It follows that both of the order-preserving rules are instances of the single rule (26) in the ex- tended notation: The crucial property of this rule, which forces its instances to be order-preserving, is that the distinguished position variable D Parg on the argument of the predicate in the raised category is the same as that on the argument of the raised category itself. (l'he two distinguished positions are underlined in (26)). Of course, the position is unspecified at the time of applying the rule, and is simply represented as an unbound unification variable with an arbitrary mnemonic identifier. 
However, when the category combines with a predicate, this variable will be bound by the directionality specified in the predicate itself. Since this condition will be transmitted to the raised category, it will have to coincide with the juncture of the combination. Combination of the categories in the non-grammatical order will therefore fail, just as if the original categories were combining without the mediation of type-raising. Consider the following example. Under the above rule, the categories of the words in the sentence Gilbert walks are as shown in (27), before binding. Binding of string positional variables yields the cat- egories in (28). The combinatory rule of forward application (19) applies as in example (29), binding further variables by unification. In particular, DP 9, Prop, DPw, and P2, are all bound to the juncture po- sition 2, as in (30). By contrast, the same categories in the opposite linear order fail to unify with any combinatory rule. In particular, the backward appli- cation rule fails, as in (31). (Combination is blocked because 2 cannot unify with 3). On the assumption implicit in (26), the only permit- ted instances of type raising are the two rules given earlier as (11). The earlier results concerning word- order universals under coordination are therefore cap- tured. Moreover, we can now think of these two rules as a single underspecified order-preserving rule di- rectly corresponding to (26), which we might write less long-windediy as follows, augmenting the origi- nal simplest notation with a non-directional slash: (33) The Order-preserving Type-raising Rule: X ~ TI(TIX) (T) The category that results from this rule can combine in either direction, but will always preserve order. Such a property is extremely desirable in a language like English, whose verb requires some arguments to the right, and some to the left, but whose NPs do not bear case. The general raised category can combine in both directions, but will still preserve word order. It thus eliminates what was earlier noted as a worrying extra degree of categorial ambiguity. The way is now clear to incorporate type raising directly into the lexicon, substituting categories of the form T I(TIX), where X is a category like NP or PP, directly into the lexicon in place of the basic categories, or (more readably, but less efficiently), to keep the basic categories and the rule (33), and exclude the base categories from all combination. The related proposal of Zeevat et al. [8],[9] also has the property of allowing a single lexical raised category for the English NP. However, because of the way in which the directional constraints are here grounded in relative string position, rather than being primitive to the system, the present proposal avoids certain difficulties in the earlier treatment. Zeevat's type-raised categories are actually order-changing, and require the lexical category for the English pred- icate to be S/NP instead of S\NP. (Cf. [9, pp. 
7S (25) {X, DParg,L..rg, R,,rg} => {T/{T/{X, DP,,,'g,L,,rg,R,,,-g},DParg,Lpred,Ra, red},L"rg, Rar9 } (27) 1 Gilbert 2 walks 3 {S/{S/{NP, DPg,Lg,Rg},DPg,Lpred, Rpred},Lg,Rg } {S/{NP, R~p,L.p,R~p},DP,~,Lw,R~} (28) 1 Gilbert 2 walks 3 {S/{S/{NP, DPg, I,2},DPg,Lpre,~,R~r.d}I,2} {S/{NP, R.p,L.p,R,w},DP",2,3} (29) 1 Gilbert 2 {S/{S/{NP, 01)9, 1, 2}, DPg, Lure& R~red}, 1,2} {X/{Y, P2, P2, P3}, P1, P2} walks {S/{NP, R~p, L.p, R~p}, DP,~, 2, 3} {Y, P2, P2, P3} (3O) 1 Gilbert 2 walks {S/{S/{NP, 2, 1,2}, 2, 2, 3}, 1, 2} {S/{NP, 2, 1,2}, 2, 2, 3} {S, 1,3} (31) 1 ,Walks 2 {S/{NP, R~p, L.~, R~p}, 1, 2} {Y, P2, P1, P2} Gilbert {S/ { S/ { N P, 01)9,2, 3}, DP 9, Lpr.d, Rpred}, 2, 3} {X/{Y, P2, Pl, P2}, P2, P3} (32) .{X, DParg,Larg,Rarg} :=~ {T/{T/{X, DParg,Lar.,Rarg}'DPpred'Lpred'Rp red}'Larg'Rarg} 207-210]). They are thereby prevented from captur- ing a number of generalisations of CCGs, and in fact exclude functional composition entirely. It is important to be clear that, while the order preserving constraint is very simply imposed, it is nevertheless an additional stipulation, imposed by the form of the type raising rule (26). We could have used a unique variable, DPpr,a say, in the crucial position in (26), unrelated to the positional condi- tion DP~r9 on the argument of the predicate itself, to define the distinguished position of the predicate argument of the raised category, as in example (32). However, this tactic would yield a completely uncon- strained type raising rule, whose result category could not merely be substituted throughout the lexicon for ground categories like NP without grammatical col- lapse. (Such categories immediately induce totally free word-order, for example permitting (31) on the English lexicon). It seems likely that type raising is universally confined to the order-preserving kind, and that the sources of so-called free word order lie else- where. Such a constraint can therefore be understood in terms of the present proposal simply as a require- ment for the lexicon itself to be consistent. It should also be observed that a uniformly order-changing cat- egory of the kind proposed by Zeevat et al. is not possible under this theory. The above argument translates directly into unification-based frameworks such as PATR or Pro- log. A small Prolog program, shown in an appendix, can be used to exemplify and check the argument. 4 The program makes no claim to practicality or ef- ficiency as a CCG parser, a question on which the reader is refered to [7]. Purely for explanatory sim- plicity, it uses type raising as a syntactic rule, rather than as an offline lexical rule. While a few English lexical categories and an English sentence are given by way of illustration, the very general combinatory rules that are included will of course require further constraints if they are not to overgenerate with larger fragments. (For example, >B and >Bx must be dis- anguished as outlined above, and file latter must be greatly constrained for English.) One very general constraint, excluding all combinations with or into NP, is included in the program, in order to force type-raising and exemplify the way in which further constrained rule-instances may be specified. CONCLUSION We can now safely revert to the original CCG nota- 4The program is based on a simple shift-reduce parser/rccogniscr, using "difference list"-encoding of string posi- tion (el. [41, [31). 
tion described in the preliminaries to the paper, modified only by the introduction of the general order-preserving type raising rule (26), having established the following results. First, the earlier claims concerning word-order universals follow from first principles in a unification-based CCG in which directionality is an attribute of arguments, grounded out in string position. The Principles of Consistency and Inheritance follow as theorems, rather than stipulations. A single general-purpose order-preserving type-raised category can be assigned to arguments, simplifying the grammar and the parser.

REFERENCES

[1] Curry, Haskell and Robert Feys: 1958, Combinatory Logic, North Holland, Amsterdam.
[2] Dowty, David: 1988, 'Type raising, functional composition, and non-constituent coordination', in Richard T. Oehrle, E. Bach and D. Wheeler (eds), Categorial Grammars and Natural Language Structures, Reidel, Dordrecht, 153-198.
[3] Gerdemann, Dale and Hinrichs, Erhard: 1990, 'Functor-driven Natural Language Generation with Categorial Unification Grammars', Proceedings of COLING 90, Helsinki, 145-150.
[4] Pereira, Fernando and Stuart Shieber: 1987, Prolog and Natural Language Analysis, CSLI/University of Chicago Press.
[5] Steedman, Mark: 1987, 'Combinatory grammars and parasitic gaps', Natural Language & Linguistic Theory, 5, 403-439.
[6] Steedman, Mark: 1990, 'Gapping as Constituent Coordination', Linguistics and Philosophy, 13, 207-263.
[7] Vijay-Shanker, K. and David Weir: 1990, 'Polynomial Time Parsing of Combinatory Categorial Grammars', Proceedings of the 28th Annual Conference of the ACL, Pittsburgh, June 1990.
[8] Zeevat, Henk, Ewan Klein, and Jo Calder: 1987, 'An Introduction to Unification Categorial Grammar', in N. Haddock et al. (eds.), Edinburgh Working Papers in Cognitive Science, 1: Categorial Grammar, Unification Grammar, and Parsing.
[9] Zeevat, Henk: 1988, 'Combining Categorial Grammar and Unification', in U. Reyle and C. Rohrer (eds.), Natural Language Parsing and Linguistic Theories, Reidel, Dordrecht, 202-229.

APPENDIX

% A Lexical Fragment: parse will bind position (via list-encoding):
category(gilbert, cat(np, _, P1, P2)).
category(brigitte, cat(np, _, P1, P2)).
category(walks, cat(cat(s,_,_,_)/cat(np,P2,_,P2), _, P3, P4)).
category(love, cat(cat(vp,_,_,_)/cat(np,P3,P3,_), _, P1, P2)).
category(must, cat(cat(cat(s,_,_,_)/cat(np,P2,_,P2),_,_,_)/cat(vp,P5,P5,_), _, P3, P4)).
category(madly, cat(cat(vp,_,_,_)/cat(vp,P2,_,P2), _, P3, P4)).

% Application and (overgeneral) Composition: Partial evaluation of DPy with the actual juncture P2
% imposes Adjacency.  DPy (=P2) must not be == Y's other end (see <B and >B).
% Antecedent \+ Y=np disallows ALL combination with unraised NPs.
reduce(cat(cat(X,DPx,P1,P3)/cat(Y,P2,P2,P3), _, P1, P2),
       cat(Y,P2,P2,P3),
       cat(X,DPx,P1,P3)) :- \+ Y=np.                                        % >
reduce(cat(Y,P2,P1,P2),
       cat(cat(X,DPx,P1,P3)/cat(Y,P2,P1,P2), _, P2, P3),
       cat(X,DPx,P1,P3)) :- \+ Y=np.                                        % <
reduce(cat(cat(X,DPx,X1,X2)/cat(Y,P2,P2,Y2), _, P1, P2),
       cat(cat(Y,P2,P2,Y2)/cat(Z,DPz,Z1,Z2), _, P2, P3),
       cat(cat(X,DPx,X1,X2)/cat(Z,DPz,Z1,Z2), _, P1, P3)) :- \+ Y=np, \+ Y2==P2.   % >B, cf. ex. 24a
reduce(cat(cat(Y,P2,Y1,P2)/cat(Z,DPz,Z1,Z2), _, P1, P2),
       cat(cat(X,DPx,X1,X2)/cat(Y,P2,Y1,P2), _, P2, P3),
       cat(cat(X,DPx,X1,X2)/cat(Z,DPz,Z1,Z2), _, P1, P3)) :- \+ Y=np, \+ Y1==P2.   % <B, cf. ex. 24b

% Order Preserving Type Raising: the rule np -> T|(T|np).
raise(cat(np,DPnp,P1,P2),                                                   % Binds P1, P2
      cat(cat(T,DPt,T1,T2)/cat(cat(T,DPt,T1,T2)/cat(np,DPnp,P1,P2), DPnp, _, _),   % cf. ex. 26
          _, P1, P2)).
% Parse simulates reduce-first shift-reduce recogniser with backtracking (inefficiently)
parse([Result], [], Result).                                 % Halt
parse([Cat1|Stack], Buffer, Result) :-                       % Raise (syntactic)
    raise(Cat1, Cat2),
    parse([Cat2|Stack], Buffer, Result).
parse([Cat2, Cat1|Stack], Buffer, Result) :-                 % Reduce
    reduce(Cat1, Cat2, Cat3),
    parse([Cat3|Stack], Buffer, Result).
parse(Stack, [Word|Buffer], Result) :-                       % Shift
    category(Word, cat(W, DPs, [Word|Buffer], Buffer)),      % Position is list-encoded
    parse([cat(W, DPs, [Word|Buffer], Buffer)|Stack], Buffer, Result).

% Example crucially involving bidirectional T (twice) and <Bx:
% | ?- parse([], [gilbert,must,love,madly,brigitte], R).
% R = cat(s, _37, [gilbert,must,love,madly,brigitte], []) ;
%    -- plus 4 more equivalent derivations
% yes
EFFICIENT INCREMENTAL PROCESSING WITH CATEGORIAL GRAMMAR Abstract Some problems are discussed that arise for incremental pro- cessing using certain flezible categorial grammars, which in- volve either undesirable parsing properties or failure to allow combinations useful to incrementality. We suggest a new cal- culus which, though 'designed' in relation to categorial inter- pretatious of some notions of dependency grammar, seems to provide a degree of flexibility that is highly appropriate for in- cremental interpretation. We demonstrate how this grammar may be used for efficient incremental parsing, by employing normalisation techniques. Introduction A range of categorial grammars (CGs) have been proposed which allow considerable flexibility in the assignment of syntactic structure, a characteristic which provides for categorial treatments of extrac- tion (Ades & Steedman, 1982) and non-constituent coordination (Steedman, 1985; Dowty, 1988), and that is claimed to allow for incremental processing of natural language (Steedman, 1989). It is this lat- ter possibility that is the focus of this paper. Such 'flexible' CGs (FCGs) typically allow that grammatical sentences may be given (amongst oth- ers) analyses which are either fully or primarily left- branching. These analyses have the property of des- ignating many of the initial substrings of sentences as interpretable constituents, providing for a style of processing in which the interpretation of a sentence is generated 'on-line' as the sentence is presented. It has been argued that incremental interpretation may provide for efficient language processing -- by both humans and machines -- in allowing early fil- tering of thematically or referentially implausible readings. The view that human sentence processing is 'incremental' is supported by both introspective and experimental evidence. In this paper, we discuss FCG approaches and some problems that arise for using them as a ba- sis for incremental processing. Then, we propose a grammar that avoids these problems, and demon- strate how it may be used for efficient incremental processing. Mark Hepple University of Cambridge Computer Laboratory, New Museums Site, Pembroke St, Cambridge, UK. e-mail : mrhQuk, a¢. cam. ¢i Flexible Categorial Grammars CGs consist of two components: (i) a categorial lex- icon, which assigns to each word at least one syn- tactic type (plus associated meaning), (ii) a calculus which determines the set of admitted type combina- tions and transitions. The set of types (T) is defined recursively in terms of a set of basic types (To) and a set of operators (\ and/, for standard bidirectional CG), as the smallest set such that (i) To C T, (ii) if x,y E T, then x\y, x/y E T. 1 Intuitively, lexi- cal types specify subcategorisation requirements of words, and requirements on constituent order. The most basic (non-flexible) CGs provide only rules of application for combining types, shown in (1). We adopt a scheme for specifying the semantics of com- bination rules where the rule name identifies a func- tion that applies to the meanings of the input types in their left-to-right order to give the meaning of the result expression. (1) f: X/Y + Y =~ X (where f= AaAb.(ab)) b: Y + X\Y =~ X (where b = AaAb.(ba)) The Lambek calculus We begin by briefly considering the (product-free) Lambek calculus (LC - Lambek, 1958). Various for- mulations of the LC are possible (although we shall not present one here due to space limitations). 
2 The LC is complete with respect to an intuitively sensible interpretation of the slash connectives whereby the type x/y (resp. x\y) may be assigned to any string z which when left-concatenated (resp. right- concatenated) with any string y of type y yields a string x.y (resp. y.x) of type x. The LC can be seen to provide the limit for what are possible 1 We use a categorial notation in which x/y and x\y are both functions from y into x, and adopt a convention of left association, so that, e.g. ((s\np)/pp)/np may be writ- ten s\np/pp/np. 2See Lambek (1958) and Moortgat (1989) for a sequent formulation of the LC. See Morrill, Leslie, Hepple & Barry (1990), and Barry, Hepple, Leslie & Morrill (1991) for a natu- ral deduction formulation. Zielonka (1981) provides a LC for- mulation in terms of (recursively defined) reduction schema. Various extensions of the LC are currently under investiga- tion, although we shall not have space to discuss them here. See Hepple (1990), Morrill (1990) and Moortgat (1990b). 79 type combinations -- the other calculi which we consider admit only a subset of the Lambek type combinations, s The flexibility of the LC is such that, for any com- bination xl,..,x, ==~ x0, a fully left-branching deriva- tion is always possible (i.e. combining xl and x2, then combining the result with x3, and so on). How- ever, the properties of the LC make it useless for practical incremental processing. Under the LC, there is always an infinite number of result types for any combination, and we can only in practice ad- dress the possibility of combining some types to give a known result type. Even if we were to allow only S as the overall result of a parse, this would not tell us the intermediate target types for binary combi- nations made in incrementally accepting a sentence, so that such an analysis cannot in practice be made. Comblnatory Categor|al GrRmmar Combinatory Categorial Grammars (CCGs - Steed- man, 1987; Szabolcsi, 1987) are formulated by adding a number of type combination and transition schemes to the basic rules of application. We can formulate a simple version of CCG with the rules of type raising and composition shown in (2). This CCG allows the combinations (3a,b), as shown by the proofs (4a,b). (2) T: x ::~ y/(y\x) (where T - AxAf.(fz)) B: x/y + y/z =:~ x/z (where B = (3) a. np:z, s\np/np:f =~ s/np:Ay.fyz b. vp/s:f, np:z =~ vp/(s\np):Ag.f(gz) (4) (a) np s\np/np (b) vp/s np T T s/(s\np) ]3 s/(s\nP)B s/np vp/(s\np) The derived rule (3a) allows a subject NP to com- bine with a transitive verb before the verb has com- bined with its object. In (3b), a sentence em- bedding verb is composed with a raised subject NP. Note that it is not clear for this latter case that the combination would usefully contribute to incremen- tal processing, i.e. in the resulting semantic expres- sion, the meanings of the types combined are not di- rectly related to each other, but rather a hypothet- ical function mediates between the two. Hence, any 3In some frameworks, the use of non-Lambek-valid rules such as disharmonic composition (e.g. x/y + y\z ::~ x\z) has been suggested. We shall not consider such rules in this paper. requirements that the verb may have on the seman- tic properties of its argument (i.e. the clause) could not be exploited at this stage to rule out the re- sulting expression as semantically implausible. 
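To make the machinery above concrete, the application rules in (1) and the raising and composition rules in (2) can be written as a few Prolog clauses over category terms. This is a minimal illustrative sketch, not code from any of the systems discussed: semantics is omitted, and raising is shown only as the type-specific instance for np.

:- op(400, yfx, \).                % infix backslash, left-associative like /

comb(f, X/Y, Y, X).                % forward application, cf. (1)
comb(b, Y, X\Y, X).                % backward application, cf. (1)
comb(bc, X/Y, Y/Z, X/Z).           % forward composition B, cf. (2)
raise(np, s/(s\np)).               % a type-specific instance of raising T

% The derived combination (3a), np + s\np/np => s/np, falls out as T then B:
% ?- raise(np, R), comb(bc, R, s\np/np, Out).
% Out = s/np.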
We define as contentful only those combinations which directly relate the meanings of the expressions com- bined, without depending on the mediation of hy- pothetical functions. Note that this calculus (like other versions of CCG) fails to admit some combinations, which are allowed by the LC, that are contentful in this sense -- for example, (5). Note that although the seman- tics for the result expression in (5) is complex, the meanings of the two types combined are still di- rectly related -- the lambda abstractions effectively just fulfil the role of swapping the argument order of the subordinate functor. (5) x/(y\z):f, y/w\z:g ~ x/w:Av.f(Aw.gwv) Other problems arise for using CCG as a basis for incremental processing. Firstly, the free use of type-raising rules presents problems, i.e. since the rule can always apply to its own output. In practice, however, CCG grammars typically use type specific raising rules (e.g. np =~ s/(s\np)), thereby avoiding this problem. Note that this restriction on type- raising also excludes various possibilities for flexible combination (e.g. so that not all combinations of the form y, x\y/z =~ x/z are allowed, as would be the case with unrestricted type-raising). Some problems for efficient processing of CCGs arise from what has been termed 'spurious ambigu- ity' or 'derivational equivalence', i.e. the existence of multiple distinct proofs which assign the same reading for some combination of types. For exam- ple, the proofs (6a,b) assign the same reading for the combination. Since search for proofs must be exhaustive to ensure that all distinct readings for a combination are found, effort will be wasted con- structing proofs which a....~ ~he same meaning, considerably reducing the elficiency of processing. Hepple & Morrill (1989) suggest a solution to this problem that involves specifying a notion of nor- mal form (NF) for CCG proofs, and ensuring that the parser returns only NF proofs. 4 However, their method has a number of limitations. (i) They con- sidered a 'toy grammar' involving only the CCG rules stated above. For a grammar involving fur- ther combination rules, normalisation would need to be completely reworked, and it remains to be shown that this task can be successfully done. (ii) 4Normalisation has also been suggested to deal with the problem of spurious ambiguity as it arises for the LC. See K6nig (1989), Hepple (1990) and Moortgat (1990). 80 The NF proofs of this system are right-branching -- again, it remains to be shown that a NF can be defined which favours left-branching (or even pri- marily left-branching) proofs. (6) (a) x/y y/z - (b) x/y y/z f B y x/z f f x x Meta-Categorial Grammar In Meta-Categorial Grammar (MCG - Morrill, 1988) combination rules are recursively defined from the application rules (f and b) using the metarnles (7) and (8). The metarules state that given a rule of the form shown to the left of ==~ with name ~, a further rule is allowed of the form shown to the right, with name given by applying tt or L to ¢ as indicated. For example, applying It to backward application gives the rule (9), which allows com- bination of subject and transitive verb, as T and B do for CCG. Note, however, that this calculus does not allow any 'non-contentful' combinations -- all rules are recursively defined on the applica- tion rules which require a proper functional relation between the types combined. 
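The two metarules just described can themselves be rendered as recursive clauses over the application rules, so that derived rules such as Rb in (9) need not be listed separately. Again this is only an illustrative sketch under the same encoding as before, not code from the systems cited:

:- op(400, yfx, \).

rule(f, X/Y, Y, X).                               % application
rule(b, Y, X\Y, X).
rule(r(Phi), X, Y/W, Z/W) :- rule(Phi, X, Y, Z).  % metarule R, cf. (7)
rule(l(Phi), X\W, Y, Z\W) :- rule(Phi, X, Y, Z).  % metarule L, cf. (8)

% e.g. the rule Rb of (9):  ?- rule(r(b), np, s\np/np, Out).  gives  Out = s/np.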
However, this calcu- lus also fails to allow some contentful combinations, such as the case x/(y\z), y/w\z =:~ x/w mentioned above in (5). Like CCG, MCG suffers from spurious ambiguity, although this problem can be dealt with via normalisation (Morrill, 1988; Hepple & Morrill, 1989). (7) ¢:x+y:~z =:~ R¢:x+y/w=C,z/w (where R = ~g,~a~b,~c.ga(bc)) (8) ¢:x+y=~z ==~ L¢:x\w+y:C,z\w (where L = ag a bae g(ac)b) (9) Rb: y + x\y/z =~ x/z The Dependency Calculus In this section, we will suggest a new calculus which, we will argue, is well suited to the task of incremen- tal processing. We begin, however, with some dis- cussion of the notions of head and dependent, and their relevance to CG. The dependency grammar (DG) tradition takes as fundamental the notions of head, dependent and the head-dependent relationship; where a head is, loosely, an element on which other elements depend. An analogy is often drawn between CG and DG based on equating categorial functors with heads, whereby a functor x/yl../yn (ignoring directional- ity, for the moment) is taken to correspond to a head requiring dependents Yl..Yn, although there are sev- eral obvious differences between the two approaches. Firstly, a categorial functor specifies an ordering over its 'dependents' (function-argument order, that is, rather than constituent order) where no such or- dering is identified by a DG head. Secondly, the arguments of a categorial functor are necessarily phrasal, whereas by the standard view in DG, the dependents of a head are taken to be words (which may themselves be heads of other head/dependent complexes). Thirdly, categorial functors may spec- ify arguments which have complex types, which, by the analogy, might be described as a head being able to make stipulations about the dependency require- ments of its dependent and also to 'absorb' those dependency requirements. 5 For example, a type x/(y\z) seeks an argument which is a "y needing a dependent z" under the head/functor analogy. On combining with such a type, the requirement "need a dependent z" is gone. Contrast this with the use of, say, composition (i.e. x/y, y/z =~ x/z), where a type x/y simply needs a dependent y, and where composition allows the functor to combine with its dependent y while the latter still requires a depen- dent z, and where that requirement is inherited onto the result of the combination and can be satisfied later on. Barry & Pickering (B&P, 1990) explore the view of dependency that arises in CG when the functor- argument relationship is taken as analogous to the traditional head-dependent relationship. A problem arises in employing this analogy with FCGs, since FCGs permit certain type transformations that un- dermine the head-dependent relations that are im- plicit in lexical type assignments. An obvious exam- ple is the type-raising transformation x =~ y/(y\x), which directly reverses the direction of the head- dependent relationship between a functor and its argument. B&P identify a subset of LC combina- tions as dependency preserving (DP), i.e. those com- binations which preserve the head-dependent rela- tions implicit in the types combined, and call con- stituents which have DP analyses dependency con- stituents. 
B&P argue for the significance of this notion of constituency in relation to the treatment of coordination and the comparative difficulty ob- served for (human) processing of nested and non- 5Clearly, a CG where argument types were required to be basic would be a closer analogue of DG in not allowing a 'head' to make such stipulations about its dependents. Such a system could be enforced by adopting a more restricted definition of the set of types (T) as the smallest set such that (i) To C T, (ii) if x E T and y E To, then x\y, x/y E T (c.f. the definition given earlier). 81 nested constructionsfi B&P suggest a means for identifying the DP subset of LC transformations and combinations in terms of the lambda expres- sions that assign their semantics. Specifically, a combination is DP iff the lambda expression speci- fying its semantics does not involve abstraction over a variable that fulfils the role of functor within the expression (c.f. the semantics of type raising in (2))ff We will adopt a different approach to B&P for addressing dependency constituency, which involves specifying a calculus that allows all and only the DP combinations (as opposed to a criterion identifying a subset of LC combinations as DP). Consider again the combination x/(y\z), y/w\z =~ x/w, not admit- ted by either the CCG or MCG stated above. This combination would be admitted by the MCG (and also the CCG) if we added the following (Lambek- valid) associativity axioms, as illustrated in (11). (10) a: x\y/z=~x/z\y a: x/y\z=~x\z/y (where a = ~f~a]b.fba) (II) x/(y\z) y/w\z ~ a y\,/w Rf x/w We take it as self-evident that the unary trans- formations specified by these two axioms are DP, since function-argument order is a notion extrane- ous to dependency; the functors x\y/z and x/z\y have the same dependency requirements, i.e. depen- dents y and z. s For the same reason, such reordering of arguments should also be possible for functions that occur as subtypes within larger types, as in (12a,b). The operation of the associativity rules can be 'generalised' in this fashion by including the unary metarules (13), 9 which recursively define eSee Baxry (forthcoming) for extensive discussion of de- pendency and CG, and Pickering (1991) for the relevance of dependency to human sentence processing. 7B&P suggest a second criterion in terms of the form of proofs which, for the natural deduction formulation of the LC that B&P use, is equivalent to the criterion in terms of laznbda expressions (given that a variant of the Curry- Howard correspondence between implicational deductions and lambda expressions obtains). s Clearly, the reversal of two co-directional arguments (i.e. x/y/z =~ x/z/y) would also be DP for this reason, but is not LC-valld (since it would not preserve linear order require- ments). For a unidirectional CG system (i.e. a system with a single connective/, that did not specify linear order require- ments), free reversal of axguments would be appropriate. We suggest that a unidirectional variant of the calculus to be proposed might be the best system for pure reasoning about 'categorial dependency', aside from linearity considerations. 9These unary metarules have been used elsewhere as part of the LC formulation of Zielonka (1981). new unary rules from tile associat, ivit.) axioms. (12) a. a\b/c/d ~ a/ckb/d b. x/(a\b/c) ~ x/Ca/c\b) (13) a. ¢: x=~y ==~ V¢: x/z ::~y/z ¢: x=~y ==~ V¢: x\z =~y\z (where V = f a b.f(ab)) b. 
¢:x=~y ==~ Z¢: z/y=~z/x ¢: x==~y ~ Z¢: z\y=~ z\x (where Z = (14) x/(a\b/c):f~ x/(a/c\b):~v./O~a~b.vba) Clearly, the rules {V,Z,a} allow only DP unary transformations. However, we make the stronger claim that these rules specify the limit of DP unary transformations. The rules allow that the given functional structure of a type be 'shuffled' upto the limit of preserving linear order requirements. But the only alternative to such 'shuffling' would seem to be that some of the given type structure be re- moved or further type structure be added, which, by the assumption that functional structure expresses dependency relations, cannot be DP. We propose the system {L,R,V,Z,a,f,b} as a cal- culus allowing all and only the DP combinations and transformations of types, with a 'division of labour' as follows: (i) the rules f and b, allowing the estab- lishment of direct head-dependent relations, (ii) the subsystem {V,Z,a}, allowing DP transformation of types upto the limit of preserving linear order, and (iii) the rules tt and L, which provide for the inher- itance of 'dependency requirements' onto the result of a combination. We call this calculus the depen- dency calculus (DC) (of which we identify two sub- systems: (i) the binary calculus B : {L,R,f,b}, (ii) the unary calculus U : {V,Z,a}). Note that B&P's criterion and the DC do not agree on what are DP combinations in all cases. For example, the seman- tics for the type transformation in (14) involves ab- straction over a variable that occurs as a functor. Hence this transformation is not DP under B&P's criterion, although it is admitted by the DC. We believe that the DC is correct in admitting this and the other additional combinations that it allows. There is clearly a close relation between DP type combination and the notion of contentful combi- nation discussed earlier. The 'dependency require- ments' stated by any lexical type will constitute the sum of the 'thematically contentful' relationships into which it may enter. In allowing all DP com- binations (subject to the limit of preserving linear order requirements), the DC ensures that lexieally 82 originating dependency structure is both preserved and also exploited in full. Consequently, the DC is well suited to incremental processing. Note, how- ever, that there is some extent of divergence be- tween the DC and the (admittedly vague) criterion of 'contentful' combination defined earlier. Con- sider the LC-valid combination in (15), which is not admitted by the DC. This combination would appear to be 'contentful' since no hypothetical se- mantic functor intervenes between land g (although g has undergone a change in its relationship to its own argument which depends on such a hypothet- ical functor). However, we do not expect that the exclusion of such combinations will substraet signif- icantly from genuinely useful incrementality in pars- ing actual grammars. (15) x/(y/z):/, x:l(X .g(Xh.hv)) Parsing and the Dependency Calculus Binary combinations allowed by the DC are all of the form (16) (where the vertical dots abbrevi- ate unary transformations, and ¢ is some binary rule). The obvious naive approach to finding possi- ble combinations of two types x and y under the DC involves searching through the possible unary trans- forms of x and y, then trying each possible pairing of them with the binary rules of B, and then deriv- ing the set of unary transforms for the result of any successful combination. 
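For concreteness, the dependency calculus and the naive search just described can be sketched as follows. This is an illustration only, not an implementation from the paper; semantics is again omitted, and a depth bound is added to the unary closure because every U-step is invertible.

:- op(400, yfx, \).

% the unary subsystem U = {a, V, Z}
unary(X\Y/Z, X/Z\Y).                 % a, cf. (10) (both directions listed)
unary(X/Y\Z, X\Z/Y).
unary(X/W, Y/W) :- unary(X, Y).      % V, cf. (13a)
unary(X\W, Y\W) :- unary(X, Y).
unary(W/Y, W/X) :- unary(X, Y).      % Z, cf. (13b)
unary(W\Y, W\X) :- unary(X, Y).

% the binary subsystem B = {f, b, R, L}
bin(X/Y, Y, X).                      % f
bin(Y, X\Y, X).                      % b
bin(X, Y/W, Z/W) :- bin(X, Y, Z).    % R
bin(X\W, Y, Z\W) :- bin(X, Y, Z).    % L

% u_star(N, X, Y): Y is reachable from X by at most N unary steps.  The bound
% is needed because every unary step is invertible -- exactly the redundancy
% that the normal forms introduced below are meant to remove.
u_star(_, X, X).
u_star(N, X, Z) :- N > 0, M is N - 1, unary(X, Y), u_star(M, Y, Z).

% the naive combination search described above
combine(X, Y, Z) :-
    u_star(2, X, X1), u_star(2, Y, Y1),
    bin(X1, Y1, Z0),
    u_star(2, Z0, Z).

% e.g. the combination of (5)/(11):  ?- combine(x/(y\z), y/w\z, Out).  gives  Out = x/w.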
At first sight, the efficiency of processing using this calculus seems to be in doubt. Firstly, the search space to be addressed in checking for possible combinations of two types is considerably greater than for CCG or MCG. Also, the DC will suffer spu- rious ambiguity in a fashion directly comparable to CCG and MCG (obviously, for the latter case, since the above MCG is a subsystem of the DC). For ex- ample, the combination x/y, y/z, z ::~ x has both left and right branching derivations. However, a further equivalence problem arises due to the interderivability of types under the unary subsystem U. For any unary transformation x :=~ y, the converse y :~ x is always possible, and the se- mantics of these transformations are always inverses. (This obviously holds for a, and can be shown to hold for more complex transformations by a simple induction.) Consequently, if parsing assigns distinct types x and y to some substring that are merely variants under the unary calculus, this will engen- der redundancy, since anything that can be proven with x can equivalently be proven with y. (16) x y X 0 Z Normalisation and the Dependency Calculus These efficiency problems for parsing with the DC can be seen to result from equivalence amongst terms occurring at a number of levels within the system. Our solution to this problem involves specifying nor- mal forms (NFs) for terms -- to act as privileged members of their equivalence class -- at three differ- ent levels of the system: (i) types, (ii) binary com- binations, (iii) proofs. The resulting system allows for efficient categorial parsing which is incremental up to the limit allowed by the DC. A standard way of specifying NFs is based on the method of reduction, and involves defining a contraction relation (I>1) between terms, which is stated as a number of contraction rules of the form X !>1 Y (where X is termed a redez and Y its con- tractum). Each contraction rule allows that a term containing a redex may be transformed into a term where that occurrence is replaced by its contractum. A term is said to be in NF if and only if it contains no redexes. The contraction relation generates a re- duction relation (1>) such that X reduces to Y (X I> Y) iff Y is obtained from X by a finite series (pos- sibly zero) of contractions. A term Y is a NF of X iff Y is a NF and X 1> Y. The contraction relation also generates an equivalence relation which is such that X = Y iff Y can be obtained from X by a se- quence of zero or more steps, each of which is either a contraction or reverse contraction. Interderivability of types under U can be seen as giving a notion of equivalence for types. The con- traction rule (17) defines a NF for types. Since contraction rules apply to any redex subformula oc- curring within some overall term, this rule's do- main of application is as broad as that of the as- sociativity axioms in the unary calculus given the generalising effects of the unary metarules. Hence, the notion of equivalence generated by rule (16) is the same as that defined by interderivability un- der U. It is straightforward to show that the reduc- tion relation defined by (16) exhibits two impor- tant properties: (i) strong normalisation 1°, with the 1°To prove strong normalisation it is sufficient to give a metric which assigns each term a finite non-negative integer score, and under which every contraction reduces the score for a term by a positive integer amount. 
The following metric suffices: (a) X ~ = 1 if X is atomic, (b) (X/Y) t = X ~ + Y~, (c) (X\Y)' = 2(X' + Y'). 83 consequence that every type has a NF, and (ii) the Church-Rosser property, from which it follows that NFs are unique. In (18), a constructive notion of NF is specified. It is easily shown that this con- structive definition identifies the same types to be NFs as the reduetive definition. 11 (17) x/y\,. ~1 x\z/y (18) x\yl.-Yi/Yi+l..Yn where n _~ 0, x is a basic type and each yj (1 < j < n) is in turn of this general form. (19) ¢: x/ut,.u, + y =~ z ==~ L(n)¢: x\w/ul..U, + y =~ z\w (where L(n) ---- A#AaAbAc.#(Ava..vn.avl..vnc)b) We next consider normalisation for binary com- binations. For this purpose, we require a modified version of the binary calculus, called W, having the rules {L(n),R,f,b}), where L(n) is a 'generalised' variant of the metarule L, shown in (19) (where the notation X/Ul..Un is schematic for a function seek- ing n forward directional arguments, e.g. so that for n = 3 we have x/ux..un = X/Ul/U~/Us). Note that the case L(0) is equivalent to L. We will show that for every binary combination X + Y =~ Z under the DC, there is a correspond- ing combination X' + Y~ =* Z' under W, where X ~, Y' and Z' are the NFs of X, Y and Z. To demon- strate this, it is sufficient to show that for every combination under B, there is a corresponding W combination of the NFs of the types (i.e. since for binary combinations under the DC, of the form in (16), the types occurring at the top and bottom of any sequence of unary transformations will have the same NF). The following contraction rules define a NF for combinations under B ~ (which includes the combi- nations of B as a subset -- provided that each use of L is relabelled as L(0)): (20) IF w l>t w' THEN a. f: w/y + y :=~ w 1>1 f: w'/y + y =~ w' b. f: y/w + w ::~ y I>t f: y/w' + w' =~ y c. b: y+w\y=~w E>lb: y+w~\y=~w' d. b: w + y\w :=~ y !>1 b: w' + ykw' :=~ y e. L(i)¢: x\w/ul..Ui + y =~ z\w I>1 L(i)¢: xkw'/ul..u/ + y =~ zkw t f. Re: x + y/w =~ z/w t>l Re: x + y/w' ::~ z/w' laThis NF is based on an arbitrary bias in the restruc- turing of types, i.e. ordering backward directional arguments after forward directional arguments. The opposite bias (i.e. forward arguments after backward arguments) could as well have been chosen. (21) L(i)R¢: x\w/ul..ui + y/v =~ z/v\w t>l RL(i)¢: x\w/ul..ui + y/v ::~ zkw/v (22) L(o)f: x/w\v + w ~ x\v [:>1 f: x\v/w + w =~ x\v (23) L(i)f: xkw/ul..Ui + ui =*" x/ul..ui-t\w t>l f: x\w/ul..ul + ui ~ x\w/ul..u;_~ for i > O. (24) b: ~. + x/y\~, ~ x/y ~1 Rb: z + x\z/y =~ x/y (25) L(i)¢: X/V\W/Ul..U i + y ~ Z\W E> 1 L(i+I)¢: x\w/v/ul..ui + y ==~ z\w (26) IF ¢: x+y==~z 1>1 ¢': x'+y':=~z' THEN R¢:x+y/w:=~z/w I>l Re': x' + y'/w =~ z'/w (27) IF ¢: X/Ul..Ui + y :=~ z I>t ¢~: x'/ul'..ul ~ + y' =~ z' THEN L(i)~b: x\w/ul..ui + y =~ z I>1 L(i)¢': x'\w/ul'..ui' + y' ~ z' These rules also transform the types involved into their NFs. In the cases in (20), a contraction is made without affecting the identity of the particular rule used to combine the types. In (21-25), the transformations made on types requires that some change be made to the rule used to combine them. The rules (26) and (27) recursively define new contractions in terms of the basic ones. This reduction system can be shown to exhibit strong normalisation, and it is straightforward to ar- gue that each combination must have a unique NF. This definition of NF accords with the constructive definition (28). 
(Note that the notation R n rep- resents a sequence of n Rs, which are to be brack- eted right-associatively with the following rule, e.g. so that R~f = (R(Rf)), and that i takes the same value for each L(i) in the sequence L(i)"L) (28) ¢:x+y~z where x, y, z are NF types, and ¢ is (Rnf) or (RnL(i)mb), for n, m > 0. Each proof of some combination xl,..,xn =~ x0 under the DC can be seen to consist of a number of binary 'subtrees', each of the form (16). If we sub- stitute each binary subtree with its NF combination in W, this gives a proof of Xlt,..,x~ ' =~ x0 t (where each xl ~ is the NF ofxi). Hence, for every DC proof, there is a corresponding proof of the combination of the NFs of the same types under B'. Even if we consider only proofs involving NF com- binations in W, we observe spurious ambiguity of the kind familiar from CCG and MCG. Again, we can deal with this problem by defining NFs for such 84 proofs. Since we are interested in incremental pro- cessing, our method for identifying NF proofs is based on favouring left-branching structures. Let us consider the patterns of functional depen- dency that are possible amongst sequences of three types. These are shown in (29). 12 Of these cases, some (i.e. (a) and (f)) can only be derived with a left-branching proof under B' (or the DC), and others (i.e. (b) and (e)) can only be derived with a right-branching proof. Combinations of the pat- terns (c),(d) and (g) commonly allow both right and left-branching derivations (though not in all cases). (29) (a) ~ (h) ( x y z x y z (c) (d) x y z x y z (e) , (f) • x y z x y z (g) x y z (30) (R"f): x/y + y/ul..un ~ x/ul..u. (31) (R"L(/)mb): x\wl..wm/ul..u, + y\(xlul..n,)lvl..v. =~ y\wl..wm/vl..v,~ NF binary combinations of the pattern in (28) take the two more specific forms in (30) and (31). Knowing this, we can easily sketch out the schematic form of the three element combinations correspond- ing to (29c,d,g) which have equivalent left and right branching proofs, as shown in Figure 1. We can define a NF for proofs under B I (that use only NF combinations) by stating three contraction rules, one for each of the three cases in Figure 1, where each rule rewrites the right branching three- leaf subproof as the equivalent left branching sub- proof. This will identify the optimally left branch- ing member of each equivalence class of proofs as its NF exemplar. Again, it is easily shown that reduc- tion under these rules exhibits strong normalisation and the Church-Rosser property, so that every proof must have a unique normal form. However, it is not so easy to prove the stronger claim that there is only a single NF proof that assigns each distinct read- ing for any combination. 13 We shall not attempt 12Note that various other conceivable patterns of depen- dency do not need to be considered here since they do not correspond to any Lambek-valid combination. ~3 Thls holds if the contraction relation generates an equiv- to demonstrate this property, although we believe that it holds. We can identify the redexes of these three contraction rules purely in terms of the rules used to combine types, i.e. without needing to ex- amine the schematic form of the types, since the rules themselves identify the relevant structure of the types. In fact, the right-branching subproofs for cases (29c,g) collapse to the single schematic redex (32), and that for (29d) simplifies to the schematic redex (33). (Note that the notation ¢~ is used to represent any (NF) rule which is recursively defined on a second rule ~r, e.g. 
so that ~rb is any NF rule defined on b.) (32) x y zltm f w where n ~_ m v (33) x y z '~b(L(i}b) w where n ~ 1 Ir b V Let us consider the use of this system for pars- ing. In seeking combinations of some sequence of types, we first begin by transforming the types into their NFs. 14 Then, we can search for proofs using only the NF binary combinations. Any proof that is found to contain a proof redexes is discontinued, so that only NF proofs are returned, avoiding the problems of spurious ambiguity. Any result types assigned by such proofs stand as NF exemplars for the set of non-NF types that could be derived from the original input types under the DC. We may want to know if some input types can combine to give a specific result type x. This will be the case if the parser returns the NF of x. Regarding incremental processing, we have seen that the DC is well-suited to this task in terms of al- lowing combinations that may usefully contribute to a knowledge of the semantic relations amongst the phrases combined, and that the NF proofs we have defined (and which the parser will construct) are optimally left-branching to the limit set by the cal- culus. Hence, in left-to-right analysis of sentences, the parser will be able to combine the presented material to the maximal extent that doing so use- fully contributes to incremental interpretation and the filtering of semantically implausible analyses. alence relation that equates any two proofs iff these assign extenslonally equivalent readings. 14The complexity of this transformation is constant in the complexity of the type. 85 C~. (2s~): (a) x/y y/wa..w. W,/Vl..Vm gnf x/wa ..w, .R'nf x/wa ..wn-I/vl..vm C~ (2Sd): (~) w,\q~..qk/u,..us (b) x/y y/wl ..wn Wn/Vl..vm .I%mf y/wl ..Wn--1/Vl .-vmRm+n_l f x/wl..w,-a/va..v,, (b) w,\~..qk/ua..uj y\wl..Wn--l \(wn/ul..Uj)/vl..vi x\(y/vl..Vi)/tl..tm RmL(1)nb y\wl ..wn-a\q,..qk/v, ..vl x\wa ..wn-i \ql..~ltl ..tin Case (28g): (a) y\wl ..wj/ul ..ui x\(y/ul ..ui)/Vl ..Vm vm/ql--qn R'nL(i)~b X\Wl..Wj/Va..Vm Rnf x\wl..w~//vl..Vm-i/ql..qn (b) y\wl ..wj/ul ..ui x\(y/ul..ui)/vl ..vm vm/ql ..qn]Ln f x\(ylul..Ui)/vz..vm-l/ql..qnam+n_ 1 L(i)Jb X\Wl..Wn-l\(wn/ul..uj)/tl..tm Rmg(j)kb x\wl ..w,-a \qu ..qk/ta..t,, y\wl ..w,-I \(wn/ul ..uj)/vl ..ViRiL.j.kb_() x\(y/vl ..vi)/tl ..tin x\wa..w~l,,l..v,,,-, lo~..qn RmL(1) k4n-I b Figure 1: Equivalent left and right-branching three-leaf subproofs References Ades, A.E. and Steedman, M.J. 1982. 'On the order of words.' Linguistics and Philosophy, 4. Barry, G. ]orthcoming:1991. Ph.D. dissertation, Centre for Cognitive Science, University of Edinburgh. Barry, G., Hepple, M., Leslie, N. and Morrill, G. 1991. 'Proof figures and structural operators for categorial grammar'. In EA CL-5, Berlin. Barry, G. and Morrill, G. 1990. (Eds). Studies in Categorlal Grammar. Edinburgh Working Papers in Cognitive Sci- ence, Volume 5. Centre for Cognitive Science, University of Edinburgh. Barry, G. and Piekering, M. 1990. 'Dependency and Con- stituency in Categorial Grammar.' In Barry, G. and Mor- rill, G. 1990. Dowty, D. 1988. 'Type raising, function composition, and non-constituent conjunction.' In Oehrle, R., Bach, E. and Wheeler, D. (Eds), Categorial Grammars and Natural Lan- guage Structures, D. Reidel, Dordrecht. Hepple, M. 1990. 'Normal form theorem proving for the Lam- bek calculus.' In Karlgren, H. (Ed), Proe. o] COLING 1990. Hepple, M. 1990. The Grammar and Processing of Order and Dependency: A Categorial Approach. Ph.D. disser- tation, Centre for Cognitive Science, University of Edin- burgh. 
Hepple, M. and Morrill, G. 1989. 'Parsing and derivational equivalence.' In EACL-4, UMIST, Manchester.
König, E. 1989. 'Parsing as natural deduction.' In Proc. of ACL-27, Vancouver.
Lambek, J. 1958. 'The mathematics of sentence structure.' American Mathematical Monthly 65.
Moortgat, M. 1989. Categorial Investigations: Logical and Linguistic Aspects of the Lambek Calculus, Foris, Dordrecht.
Moortgat, M. 1990. 'Unambiguous proof representations for the Lambek calculus.' In Proc. of 7th Amsterdam Colloquium, University of Amsterdam.
Moortgat, M. 1990. 'The logic of discontinuous type constructors.' In Proc. of the Symposium on Discontinuous Constituency, Institute for Language Technology and Information, University of Tilburg.
Morrill, G. 1988. Extraction and Coordination in Phrase Structure Grammar and Categorial Grammar. Ph.D. dissertation, Centre for Cognitive Science, University of Edinburgh.
Morrill, G. 1990. 'Grammar and Logical Types.' In Proc. 7th Amsterdam Colloquium, University of Amsterdam. An extended version appears in Barry, G. and Morrill, G. 1990.
Morrill, G., Leslie, N., Hepple, M. and Barry, G. 1990. 'Categorial deductions and structural operations.' In Barry, G. and Morrill, G. 1990.
Pickering, M. 1991. Processing Dependencies. Ph.D. dissertation, Centre for Cognitive Science, University of Edinburgh.
Steedman, Mark. 1985. 'Dependency and Coordination in the Grammar of Dutch and English.' Language, 61:3.
Steedman, Mark. 1987. 'Combinatory Grammars and Parasitic Gaps.' NLLT, 5:3.
Steedman, M.J. 1989. 'Grammar, interpretation and processing from the lexicon.' In Marslen-Wilson, W. (Ed), Lexical Representation and Process, MIT Press, Cambridge, MA.
Szabolcsi, A. 1987. 'On Combinatory Categorial Grammar.' In Proc. of the Symposium on Logic and Language, Debrecen, Akadémiai Kiadó, Budapest.
Zielonka, W. 1981. 'Axiomatizability of Ajdukiewicz-Lambek Calculus by Means of Cancellation Schemes.' Zeitschr. f. math. Logik und Grundlagen d. Math. 27.
COMPOSE-REDUCE PARSING

Henry S. Thompson (1)   Mike Dixon (2)   John Lamping (2)

1: Human Communication Research Centre, University of Edinburgh, 2 Buccleuch Place, Edinburgh EH8 9LW, SCOTLAND
2: Xerox Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto, CA 94304

ABSTRACT

Two new parsing algorithms for context-free phrase structure grammars are presented which perform a bounded amount of processing per word per analysis path, independently of sentence length. They are thus capable of parsing in real-time in a parallel implementation which forks processors in response to non-deterministic choice points.

0. INTRODUCTION

The work reported here grew out of our attempt to improve on the O(n^2) performance of the SIMD parallel parser described in (Thompson 1991). Rather than start with a commitment to a specific SIMD architecture, as that work had, we agreed that the best place to start was with a more abstract architecture-independent consideration of the CF-PSG parsing problem -- given arbitrary resources, what algorithms could one envisage which could recognise and/or parse atomic category phrase-structure grammars in O(n)? In the end, two quite different approaches emerged. One took as its starting point non-deterministic shift-reduce parsing, and sought to achieve linear (indeed real-time) complexity by performing a constant-time step per word of the input. The other took as its starting point tabular parsing (Earley, CKY), and sought to achieve linear complexity by performing a constant-time step for the identification/construction of constituents of each length from 0 to n. The latter route has been widely canvassed, although to our knowledge has not yet been implemented -- see (Nijholt 1989, 90) for extensive references. The former route, whereby real-time parsing is achieved by processor forking at non-deterministic choice points in an extended shift-reduce parser, is to our knowledge new. In this paper we present outlines of two such parsers, which we call compose-reduce parsers.

I. COMPOSE-REDUCE PARSING

Why couldn't a simple breadth-first chart parser achieve linear performance on an appropriate parallel system? If you provided enough processors to immediately process all agenda entries as they were created, would not this give the desired result? No, because the processing of a single word might require many serialised steps. Consider processing the word "park" in the sentence "The people who ran in the park got wet." Given a simple traditional sort of grammar, that word completes an NP, which in turn completes a PP, which in turn completes a VP, which in turn completes an S, which in turn completes a REL, which in turn completes an NP. The construction/recognition of these constituents is necessarily serialised, so regardless of the number of processors available a constant-time step is impossible. (Note that this only precludes a real-time parse by this route, but not necessarily a linear one.)

In the shift-reduce approach to parsing, all this means is that for non-linear grammars, a single shift step may be followed by many reduce steps. This in turn suggested the beginnings of a way out, based on categorial grammar, namely that multiple reduces can be avoided if composition is allowed. To return to our example above, in a simple shift-reduce parser we would have had all the words preceding the word "park" in the stack. When it was shifted in, there would follow six reduce steps.
If alternatively following a shift step one was allowed (non-deterministically) a compose step, this could be reduced (!) to a single reduce step. Restricting ourselves to a simpler example, consider just "run in the park" as a VP, given rules

VP -> v PP
NP -> d n
PP -> p NP.

With a composition step allowed, the parse would then proceed as follows:

Shift run as a v
Shift in as a p
Compose v and p to give [VP v [PP p • NP]]

where I use a combination of bracketed strings and the 'dotted rule' notation to indicate the result of composition. The categorial equivalent would have been to notate v as VP/PP, p as PP/NP, and the result of the composition as therefore VP/NP.

Shift the as d
Compose the dotted VP with d to give [VP v [PP p [NP d • n]]]
Shift park as n
Reduce the dotted VP with n to give the complete result.

Although a number of details remained to be worked out, this simple move of allowing composition was the enabling step to achieving O(n) parsing. Parallelism would arise by forking processors at each non-deterministic choice point, following the general model of Dixon's earlier work on parallelising the ATMS (Dixon & de Kleer 1988).

Simply allowing composition is not in itself sufficient to achieve O(n) performance. Some means of guaranteeing that each step is constant time must still be provided. Here we found two different ways forward.

II. THE FIRST COMPOSE-REDUCE PARSER -- CR-I

In this parser there is no stack. We have simply a current structure, which corresponds to the top node of the stack in a normal shift-reduce parser. This is achieved by extending the appeal to composition to include a form of left-embedded raising, which will be discussed further below. Special attention is also required to handle left-recursive rules.

II.1 The Basic Parsing Algorithm

The constant-time parsing step is given below (slightly simplified, in that empty productions and some unit productions are not handled). In this algorithm schema, and in subsequent discussion, the annotation "ND" will be used in situations where a number of alternatives are (or may be) described. The meaning is that these alternatives are to be pursued non-deterministically.

Algorithm CR-I
1   Shift the next word;
2   ND look it up in the lexicon;
3   ND close the resulting category wrt the unit productions;
4a  ND reduce the resulting category with the current structure or
4b  ND raise* the resulting category wrt the non-unary rules in the grammar for which it is a left corner, and compose the result with the current structure.
If reduction ever completes a category which is marked as the left corner of one or more left-recursive rules or rule sequences, ND raise* in place wrt those rules (sequences), and propagate the marking.

Some of these ND steps may at various points produce complete structures. If the input is exhausted, then those structures are parses, or not, depending on whether or not they have reached the distinguished symbol. If the input is not exhausted, it is of course the incomplete structures, the results of composition or raising, which are carried forward to the next step.

The operation referred to above as "raise*" is more than simple raising, as was involved in the simple example in section I. In order to allow for all possible compositions to take place, all possible left-embedded raising must be pursued.
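Before turning to raise* in detail, the compose and reduce steps themselves can be made concrete in the categorial encoding just mentioned, where the current structure is simply a category and the dot is implicit in what remains to be found. The following is an illustrative sketch only, not the authors' implementation, and the raised category np/n for "the" is assumed as given:

% The current structure is a category; reduce is application, compose is
% function composition.
step(reduce,  Current/Need, Need,      Current).
step(compose, Current/Need, Need/Rest, Current/Rest).

% Replaying "run in the park" (run = vp/pp, in = pp/np, the raised to np/n,
% park = n) along the correct path:
% ?- step(compose, vp/pp, pp/np, S1),
%    step(compose, S1, np/n, S2),
%    step(reduce,  S2, n, Result).
% S1 = vp/np, S2 = vp/n, Result = vp.

What this encoding leaves out is the tree being built and the raise* step that supplies categories like np/n; it is exactly such raised structures that the tables discussed next pre-compute.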
Consider the following grammar fragment: S ~NP VP VP -~ v NP CMP CMP --)that S NP -~ propn NP -+ dn and the utterance "Kim told Robin that the child likes Kim". If we ignore all the ND incorrect paths, the current structure after the "that" has been processed is [S [NP [propn Kim]] [VP [v told] [NP [propn Robin] ] [CMP that • S] ] ] In order for the next word, "the", to be correctly processed, it must be raised all the way to s, namely we must have [S [NP [d the] • n] VP]] to compose with the current structure. What this means is that for every en- try in the normal bottom-up reachabil- ity table pairing a left corner with a top category, we need a set of dotted struc- tures, corresponding to all the ways the grammar can get from that left corner to that top category. It is these structures which are ND made avail- able in step 4b of the parsing step algo- rithm CR-I above. 89 II.2 Handling Left Recursion Now this in itself is not sufficient to handle left recursive structures, since by definition there could be an arbi- trary number of left-embeddings of a left-recursive structure. The final note in the description of algorithm CR-I above is designed to handle this. Glossing over some subtleties, left-re- cursion is handled by marking some of the structures introduced in step 3b, and ND raising in place if the marked structure is ever completed by reduc- tion in the course of a parse. Consider the sentence ~Robin likes the child's dog." We add the following two rules to the grammar: D -9 art D -9 NP 's thereby transforming D from a pre- terminal to a non-terminal. When we shift "the", we will raise to inter alia [NP [D [art the]] • n] r with the NP marked for potential re- raising. This structure will be com- posed with the then current structure to produce IS [NP [propn Robin]] [VP Iv likes] [NP (as above) ]r] ] After reduction with ~child", we will have [S [NP [propn Robin]] [VP [v likes] [NP [D [art the]] [n child] jr] ] The last reduction will have com- pleted the marked N P introduced above, so we ND left-recursively raise in place, giving [S [NP [propn Robin]] [VP Iv likes] [NP [D [NP the child] • 'S] n]r]] which will then take us through the rest of the sentence. One final detail needs to be cleared up. Although directly left-recursive rules, such as e.g. NP -9 NP PP, are correctly dealt with by the above mechanism, indirectly left-recursive sets of rules, such as the one exempli- fied above, require one additional sub- tlety. Care must be taken not to intro- duce the potential for spurious ambi- guity. We will introduce the full de- tails in the next section. II.3 Nature of the required tables Steps 3 and 4b of CR-I require tables of partial structures: Closures of unit productions up from pre-terminals, for step 3; left-reachable raisings up from (unit production closures of) pre- terminals, for step 4b. In this section we discuss the creation of the neces- sary tables, in particular Raise*, against the background of a simple exemplary grammar, given below as Table 1. We have grouped the rules accord- ing to type--two kinds of unit produc- tions (from pre-terminals or non-ter- minals), two kinds of left recursive rules (direct and indirect) and the re- mainder. vanilla S --) NP VP VP -9 v NP CMP --) cmp S PP -9 prep NP Table 1. 
unitl unit2 ird iri NP -9 propn NP -9 CMP NP -9 NP PP NP -9 D n D -9 art VP -9 VP PP D --) NP 's Exemplary grammar in groups by rule type 90 Cl* LRdir LRindir 2 RS* I: 2: [NP pr°pn]l'2 [D art]4 [NP NP PP] 3: [VP VP PP] [NP [D NP 's] n] [CMP cmp S], [pp prep NP] [VP v NP] 3 [NP D n]l, 2, [D NpI 's]4, [NP CMP] 1,2 4: [D [NP D n] 1 's] [NP [CMP cmp s]]l, 2, [D [NP [CMP cmp S]] 1,2 's], [S [NP [CMP cmp S]]I, 2 VP] [S [NP D n]l, 2 VP] [S NpI'2 VP] Table 2. Partial structures for CR-I Ras* [NP -[NP propn] • pp]l,2, [NP [D -[NP propn] • 's] n] 1,2 [D [NP i ~ ° n] 1 's] 4 [CMP cmp • S], [NP [CMP cmp • S]]I, 2, [D [NP [CMP cmp • S]]I, 2 's], [S [NP [CMP cmp ° S]]I, 2 VP] [pp prep • NP] [VP v • NP] 3 [NP [ D ~ " rill'2 • [S [NF J-D art] " n]l'2 VP] [D [Np pr°pn]l " 's]4, [S [NP P r°pn]l'2 " VP] Table 3. Projecting non-terminal left daughters As a first step towards computing the table which step 4b above would use, we can pre-compute the partial structures given above in Table 2. c l* contains all backbone frag- ments constructable from the unit productions, and is already essentially what we require for step 3 of the algo- rithm. LRdir contains all directly left- recursive structures. LRindir2 con- tains all indirectly left-recursive struc- tures involving exactly two rules, and there might be LRindir3, 4,... as well. R s* contains all non-recursive tree fragments constructable from left- embedding of binary or greater rules and non-terminal unit productions. The superscripts denote loci where left-recursion may be appropriate, and identify the relevant structures. In order to get the full Raise* table needed for step 4b, first we need to pro- ject the non-terminal left daughters of rules such as [ s NpI' 2 VP ] down to terminal left daughters. We achieve this by substituting terminal entries from Cl* wherever we can in LRdir, LRindir2 and Rs* to give us Table 3 from Table 2 (new embeddings are underlined). Left recursion has one remaining problem for us. Algorithm CR-I only checks for annotations and ND raises in place after a reduction completes a constituent. But in the last line of Ras* above there are unit constituents 91 [NP [NP propn] • [D [NP [D art] • [CMP cmp • S], pp]l,2, [NP [D [NP propn] • 's] n] 1 ,s] 4 [NP [CMP cmp • S]]1,2, [D [NP [CMP cmp ° S]]I, 2 's], [S [NP [CMP cmp • S]]I, 2 VP] [pp prep • NP] [VP v • NP] 3 [NP [D art] • n]l, 2, [S [NP [D art] ° n]l, 2 VP] [D [NP propn] ° 's]4, [D [NP [NP propn] ° pp]l ,s]4 [S [NP propn] ° VP], [S [NP [NP propn] ° pp]l,2 VP], [S [NP [D [NP propn] • 's] n] 1,2 VP] Table 4. Final form of the structure table Ra i S e * n]l, 2 with annotations. Being already com- plete, they will not ever be completed, and consequently the annotations will never be checked. So we pre-compute the desired result, augmenting the above list with expansions of those units via the indicated left recursions. This gives us the final version of Raise *, now shown with dots in- cluded, in Table 4. This table is now suited to its role in the algorithm. Every entry has a lexical left daughter, all annotated constituents are incomplete, and all unit productions are factored in. It is interesting to note that with these tree fragments, taken together with the terminal entries in Cl*, as the initial trees and LRdir, LRindir2 , etc. as the auxiliary trees we have a Tree Adjoining Grammar (Joshi 1985) which is strongly equivalent to the CF- PSG we started with. We might call it the left-lexical TAG for that CF-PSG, after Schabes et al. (1988). 
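As a concrete illustration of the left-embedded raising that Raise* pre-computes, the following Python sketch enumerates raising chains over the non-unary part of the grammar of Table 1. RULES and raise_star are illustrative names; the unit-production closure Cl* is folded in by hand (we start from D rather than art), and the left-recursive rules (NP -> NP PP, VP -> VP PP, D -> NP 's) are deliberately left out so that the enumeration terminates, since coping with them is precisely what the markings of Tables 2-4 provide.

# non-unary, non-left-recursive rules from Table 1
RULES = [('S',   ['NP', 'VP']),
         ('VP',  ['v', 'NP']),
         ('CMP', ['cmp', 'S']),
         ('PP',  ['prep', 'NP']),
         ('NP',  ['D', 'n'])]

def raise_star(left_corner):
    """All chains of left-embedded raisings whose leftmost leaf is left_corner."""
    chains = []
    for mother, rhs in RULES:
        if rhs[0] == left_corner:
            step = (mother, rhs)
            chains.append([step])
            for chain in raise_star(mother):      # keep raising the new mother
                chains.append([step] + chain)
    return chains

for chain in raise_star('D'):
    print('  embedded in  '.join('[%s %s . %s]' % (m, r[0], ' '.join(r[1:]))
                                 for m, r in chain))
# [NP D . n]
# [NP D . n]  embedded in  [S NP . VP]

The two chains printed correspond roughly to the art-rooted entries of Table 4 with the annotations stripped.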
Note fur- ther that if a TAG parser respected the annotations as restricting adjunction, no spuriously ambiguous parses would be produced. Indeed it was via this relationship with TAGs that the details were worked out of how the annotations are distributed, not presented here to con- serve space. II.4 Implementation and Efficiency Only a serial pseudo-parallel im- plementation has been written. Because of the high degree of pre- computation of structure, this version even though serialised runs quite effi- ciently. There is very little computa- tion at each step, as it is straight-for- ward to double index the mai s e* table so that only structures which will compose with the current structure are retrieved. The price one pays for this effi- ciency, whether in serial or parallel versions, is that only left-common structure is shared. Right-common structure, as for instance in P P at- tachment ambiguity, is not shared be- tween analysis paths. This causes no difficulties for the parallel approach in one sense, in that it does not compro- mise the real-time performance of the parser. Indeed, it is precisely because no recombination is attempted that the basic parsing step is constant time. But it does mean that if the CF-PSG be- ing parsed is the first half of a two step process, in which additional con- 92 straints are solved in the second pass, then the duplication of structure will give rise to duplication of effort. Any parallel parser which adopts the strategy of forking at non-determinis- tic choice points will suffer from this weakness, including CR-II below. III. THE SECOND COMPOSE-R~nUCE PARSER CR-II Our second approach to compose- reduce parsing differs from the first in retaining a stack, having a more com- plex basic parsing step, while requir- ing far less pre-processing of the grammar. In particular, no special treatment is required for left-recursive rules. Nevertheless, the basic step is still constant time, and despite the stack there is no potential processing 'balloon' at the end of the input. III. 1 The Basic Parsing Algorithm Algorithm CR-II 1 Shift the next word; 2 ND look it up in the lexicon; 3 ND close the resulting cate- gory wrt the unit produc- tions; 4 N D reduce the resulting cat- egory with the top of the stack--if results are com- plete and there is input re- maining, pop the stack; 5a N D raise the results of (2), (3) and, where complete, (4) and 5b N D either push the result onto the stack or 5c N D compose the result with the top of the stack, replac- ing it. This is not an easy algorithm to understand. In the next section we present a number of different ways of motivating it, together with an illus- trative example. III.2 CR-II Explained Let us first consider how CR-II will operate on purely left-branching and purely right-branching structures. In each case we will consider the se- quence of algorithm steps along the non-deterministically correct path, ignoring the others. We will also re- strict ourselves to considering binary branching rules, as pre-terminal unit productions are handled entirely by step 3 of the algorithm, and non-ter- minal unit productions must be fac- tored into the grammar. On the other hand, interior daughters of non-bi- nary nodes are all handled by step 4 without changing the depth of the stack. 
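Before walking through the branching cases, here is a rough Python sketch of one way the CR-II step can be realised for recognition only. Each stack entry is abbreviated to its root category plus the categories still expected along its right frontier, which is enough to decide between reduce, push and compose but not to build trees; unit and empty productions, packing and the sharing of analyses are ignored, and every name is an illustrative assumption rather than the authors' implementation.

def cr2_step(stack, cat, grammar, last):
    """All ND successor stacks after shifting one word of category cat."""
    def raisings(c):
        # single-rule raising: wrap c in each non-unary rule with c as left corner
        return [(lhs, list(rhs[1:])) for lhs, rhs in grammar
                if len(rhs) > 1 and rhs[0] == c]
    out = []
    to_raise = [(stack, cat)]                    # 5a applies to the lexical category
    if stack and stack[-1][1][:1] == [cat]:      # step 4: reduce with the top
        root, expects = stack[-1]
        if len(expects) > 1:                     # interior daughter filled:
            out.append(stack[:-1] + [(root, expects[1:])])   # depth unchanged
        elif last:                               # complete and no input left: keep it
            out.append(stack[:-1] + [(root, [])])
        else:                                    # complete, input remains: pop, then
            to_raise.append((stack[:-1], root))  # raise the completed constituent
    for base, c in to_raise:
        for root, expects in raisings(c):
            out.append(base + [(root, expects)])                 # 5b: push
            if base and base[-1][1][:1] == [root]:               # 5c: compose
                t_root, t_expects = base[-1]
                out.append(base[:-1] + [(t_root, expects + t_expects[1:])])
    return out

def recognise(words, lexicon, grammar, start='S'):
    stacks = [[]]
    for pos, word in enumerate(words):
        last = pos == len(words) - 1
        stacks = [s2 for s in stacks for c in lexicon[word]
                  for s2 in cr2_step(s, c, grammar, last)]
    return any(s == [(start, [])] for s in stacks)

GRAMMAR = [('S', ['NP', 'VP']), ('NP', ['d', 'n']), ('VP', ['v', 'NP'])]
LEXICON = {'the': ['d'], 'dog': ['n'], 'cat': ['n'], 'likes': ['v']}
print(recognise('the dog likes the cat'.split(), LEXICON, GRAMMAR))   # True
print(recognise('dog the likes cat'.split(), LEXICON, GRAMMAR))       # False

Note that filling an interior daughter (the first branch of the reduce case) leaves the stack depth unchanged, as observed above. The walkthrough below traces these alternatives on purely left- and right-branching inputs.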
III.2.1 Left-branching analysis For a purely left-branching struc- ture, the first word will be processed by steps 1, 2, 5a and 5b, producing a stack with one entry which we can schematise as in Figure 1, where filled circles are processed nodes and unfilled ones are waiting. Figure 1. All subsequent words except the last will be processed by steps 4, 5a and 5b (here and subsequently we will not mention steps 1 and 2, which occur for all words), effectively replacing the previous sole entry in the stack with the one given in Figure 2. 93 Figure 2. It should be evident that the cycle of steps 4, 5a and 5b constructs a left- branching structure of increasing depth as the sole stack entry, with one right daughter, of the top node, wait- ing to be filled. The last input word of course is simply processed by step 4 and, as there is no further input, left on the stack as the final result. The complete sequence of steps for any left- branching analysis is thus raiseJre- duce&raise*--reduce. An ordinary shift-reduce or left-corner parser would go through the same sequence of steps. III.2.2 Right-branching analysis The first word of a purely right- branching structure is analysed ex- actly as for a left-branching one, that is, with 5a and 5b, with results as in Figure 1 (repeated here as Figure 3): z% Figure 3. Subsequent words, except the last, are processed via steps 5a and 5c, with the result remaining as the sole stack entry, as in Figure 4. Figure 4. Again it should be evident that cy- cling steps 5a and 5c will construct a right-branching structure of increas- ing depth as the sole stack entry, with one right daughter, of the most em- bedded node, waiting to be filled. Again, the last input word will be pro- cessed by step 4. The complete se- quence of steps for any right-branch- ing analysis is thus raisem raise&compose*--reduce. A catego- rial grammar parser with a compose- first strategy would go through an isomorphic sequence of steps. III.2.3 Mixed Left- and Right-branch- ing Analysis All the steps in algorithm CR-II have now been illustrated, but we have yet to see the stack grow beyond one entry. This will occur in where an in- dividual word, as opposed to a com- pleted complex constituent, is pro- cessed by steps 5a and 5b, that is, where steps 5a and 5b apply other than to the results of step 4. Consider for instance the sentence "the child believes that the dog likes biscuits. ~ With a grammar which I trust will be obvious, we would arrive at the structure shown in Figure 5 after processing "the child believes that ~, having done raise--reduce& raiseJraise&compose-- raise&compose, that is, a bit of left- branching analysis, followed by a bit of right-branching analysis. 94 S S VP VP S' thai Flr~hle~ir~ili~[::~: be dorieS the child believes t~ v~p with "the" which will allow immediate integration with this. The ND correct path applies steps 5a and 5b, raise&push, giving a stack as shown in Figure 6: S NP the N VP S the child believes that Figure 6. We can then apply steps 4, 5a and 5c, reduce&raise&compose, to "dog", with the result shown in Figure 7. This puts uss back on the standard right-branching path for the rest of the sentence. the dog Figure 7. III.3 An Alternative View of CR-II Returning to a question raised ear- lier, we can now see how a chart parser could be modified in order to run in real-time given enough proces- sors to empty the agenda as fast as it is filled. 
We can reproduce the process- ing of CR-II within the active chart parsing framework by two modifica- tions to the fundamental rule (see e.g. Gazdar and Mellish 1989 or Thompson and Ritchie 1984 for a tutorial intro- duction to active chart parsing). First we restrict its normal operation, in which an active and an inactive edge are combined, to apply only in the case of pre-terminal inactive edges. This corresponds to the fact that in CR-II step 4, the reduction step, applies only to pre-terminal categories (continuing to ignore unit productions). Secondly we allow the fundamental rule to combine two active edges, provided the category to be produced by one is what is required by the other. This effects composition. If we now run our chart parser left-to-right, left-corner and breadth-first, it will duplicate CR-II. 95 The maximum number of edges along a given analysis path which can be in- troduced by the processing of a single word is now at most four, correspond- ing to steps 2, 4, 5a and 5c of CR-IIDthe pre-terminal itself, a constituent com- pleted by it, an active edge containing that constituent as left daughter, cre- ated by left-corner rule invocation, and a further active edge combining that one with one to its left. This in turn means that there is a fixed limit to the amount of processing required for each word. III.4 Implementation and Efficiency Although clearly not benefiting from as much pre-computation of structure as CR-I, CR-II is also quite ef- ficient. Two modifications can be added to improve efficiencyDa reach- ability filter on step 5b, and a shaper test (Kuno 1965), also on 5b. For the latter, we need simply keep a count of the number of open nodes on the stack (equal to the number of stack entries if all rules are binary), and ensure that this number never exceeds the num- ber of words remaining in the input, as each entry will require a number of words equal to the number of its open nodes to pop it off the stack. This test actually cuts down the number of non- deterministic paths quite dramati- cally, as the ND optionality of step 5b means that quite deep stacks would otherwise be pursued along some search paths. Again this reduction in search space is of limited significance in a true parallel implementation, but in the serial simulation it makes a big difference. Note also that no attention has been paid to unit productions, which we pre-compute as in CR-I. Furthermore, neither CR-I nor CR-II address empty productions, whose effect would also need to be pre-computed. IV. CONCLUSIONS Aside from the intrinsic interest in the abstract of real-time parsablility, is there any practical significance to these results. Two drawbacks, one al- ready referred to, certainly restrict their significance. One is that the re- striction to atomic category CF-PSGs is crucial the fact that the comparison between a rule element and a node la- bel is atomic and constant time is fun- damental. Any move to features or other annotations would put an end to real-time processing. This fact gives added weight to the problem men- tioned above in section II,4, that only left-common analysis results are shared between alternatives. Thus if one finesses the atomic category prob- lem by using a parser such as those described here only as the first pass of a two pass system, one is only putting off the payment of the complexity price to the second pass, in the absence to date of any linear time solution to the constraint satisfaction problem. 
On this basis, one would clearly prefer a parallel CKY/Earley algorithm, which does share all common substructure, to the parsers presented here. Nevertheless, there is one class of applications where the left-to-right real-time behaviour of these algo- rithms may be of practical benefit, namely in speech recognition. Present day systems require on-line availability of syntactic and domain- semantic constraint to limit the search space at lower levels of the sys- tem. Hitherto this has meant these constraints must be brought to bear during recognition as some form of regular grammar, either explicitly 96 constructed as such or compiled into. The parsers presented here offer the alternative of parallel application of genuinely context-free grammars di- rectly, with the potential added benefit that, with sufficient processor width, quite high degrees of local ambiguity can be tolerated, such as would arise if (a finite subset of) a feature-based grammar were expanded out into atomic category form. ACKNOWLEDGEMENTS The work reported here was car- ried out while the first author was a visitor to the Embedded Computation and Natural Language Theory and Technology groups of the Systems Science Laboratory at the Xerox Palo Alto Research Center. These groups provided both the intellectual and ma- terial resources required to support our work, for which our thanks. REFERENCES Dixon, Mike and de Kleer, Johan. 1988. "Massively Parallel Assumption-based Truth Maintenance". In Proceedings of the AAAI-88 National Conference on Artificial Intelligence, also reprinted in Proceedings of the Second International Workshop on Non-Monotonic Reasoning. Gazdar, Gerald and Mellish, Chris. 1989. Natural Language Processing in LISP. Addison- Wesley, Wokingham, England (sic). Joshi, Aravind K. 1985. "How Much Context-Sensitivity is Necessary for Characterizing Structural Descriptions--Tree Adjoining Grammars". In Dowty, D., Karttunen, L., and Zwicky, A. eds, Natural Language Processing-- Theoretical Computational and Psychological Perspectives. Cambridge University Press, New York. Kuno, Susumo. 1965. "The predictive analyzer and a path elimination technique", Communications of the ACM, 8, 687-698. Nijholt, Anton. 1989. "Parallel parsing strategies in natural language processing ~. In Tomita, M. ed, Proceedings of the International Workshop on Parsing Technologies, 240-253, Carnegie-Mellon University, Pittsburgh. Nijholt, Anton. 1990. The CYK- Approach to Serial and Parallel Parsing. Memoranda Informatica 90-13, faculteit der informatica, Universiteit Twente, Netherlands. Shabes, Yves, Abeill6, Anne and Joshi, Aravind K. 1988. "Parsing Strategies with 'Lexicalized' Grammars: Application to Tree Adjoining Grammars". In Proceedings of the 12th International Conference on Computational Linguistics, 82-93. Thompson, Henry S. 1991. "Parallel Parsers for Context-Free Grammars--Two Actual Implementations Compared". To appear in Adriaens, G. and Hahn, U. eds, Parallel Models of Natural Language Computation, Ablex, Norword NJ. Thompson, Henry S. and Ritchie, Graeme D. 1984. "Techniques for Parsing Natural Language: Two Examples". In Eisenstadt, M., and O'Shea, T., editors, Artificial Intelligence: Tools, Techniques, and Applications. Harper and Row, London. Also DAI Research Paper 183, Dept. of Artificial Intelligence, Univ. of Edinburgh. 97
LR RECURSIVE TRANSITION NETWORKS FOR EARLEY AND TOMITA PARSING Mark Perlin School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Internet: [email protected] ABSTRACT* Efficient syntactic and semantic parsing for ambiguous context-free languages are generally characterized as complex, specialized, highly formal algorithms. In fact, they are readily constructed from straightforward recursive Iransition networks (RTNs). In this paper, we introduce LR-RTNs, and then computationally motivate a uniform progression from basic LR parsing, to Earley's (chart) parsing, concluding with Tomita's parser. These apparently disparate algorithms are unified into a single implementation, which was used to automatically generate all the figures in this paper. 1. INTRODUCTION Ambiguous context-free grammars (CFGs) are currently used in the syntactic and semantic processing of natural language. For efficient parsing, two major computational methods are used. The first is Earley's algorithm (Earley, 1970), which merges parse trees to reduce the computational dependence on input sentence length from exponential to cubic cost. Numerous variations on Earley's dynamic programming method have developed into a family of chart parsing (Winograd, 1983) algorithms. The second is Tomita's algorithm (Tomita, 1986), which generalizes Knuth's (Knuth, 1965) and DeRemer's (DeRemer, 1971) computer language LR parsing techniques. Tomita's algorithm augments the LR parsing "set of items" construction with Earley's ideas. What is not currently appreciated is the continuity between these apparently distinct computational methods. • Tomita has proposed (Tomita, 1985) constructing his algorithm from Earley's parser, instead of DeRemer's LR parser. In fact, as we shall show, Earley's algorithm may be viewed as one form of LR parsing. • Incremental constructions of Tomita's algorithm (Heering, Klint, and Rekers, 1990) may similarly be viewed as just one point along a continuum of methods. * This work was supported in part by grant R29 LM 04707 from the National Library of Medicine, and by the Pittsburgh NMR Institute. The apparent distinctions between these related methods follows from the distinct complex formal and mathematical apparati (Lang, 1974; Lang, 1991) currently employed to construct these CF parsing algorithms. To effect a uniform synthesis of these methods, in this paper we introduce LR Recursive Transition Networks (LR-RTNs) as a simpler framework on which to build CF parsing algorithms. While RTNs (Woods, 1970) have been widely used in Artificial Intelligence (AI) for natural language parsing, their representational advantages have not been fully exploited for efficiency. The LR-RTNs, however, are efficient, and shall be used to construct" (1) a nondeterministic parser, (2) a basic LR(0) parser, (3) Earley's algorithm (and the chart parsers), and (4) incremental and compiled versions of Tomita's algorithm. Our uniform construction has advantages over the current highly formal, non-RTN-based, nonuniform approaches to CF parsing: • Clarity of algorithm construction, permitting LR, Earley, and Tomita parsers to be understood as a family of related parsing algorithm. • Computational motivation and justification for each algorithm in this family. • Uniform extensibility of these syntactic methods to semantic parsing. • Shared graphical representations, useful in building interactive programming environments for computational linguists. • Parallelization of these parsing algorithms. 
• All of the known advantages of RTNs, together with efficiencies of LR parsing. All of these improvements will be discussed in the paper. 2. LR RECURSIVE TRANSITION NETWORKS A transition network is a directed graph, used as a finite state machine (Hopcroft and Ullman, 1979). The network's nodes or edges are labelled; in this paper, we shall label the nodes. When an input sentence is read, state moves from node to node. A sentence is accepted if reading the entire sentence directs the network traversal so as to arrive at an 98 W ( () (c) ) S ( Start symbol S (A) (B) (O) (E) ~ Nonterminal symbol S ) Rule #1 md VP Figure 1. Expanding Rule#l: S---~NP VP. A. Expanding the nonterminal symbol S. B. Expanding the rule node for Rule #1. C. Expanding the symbol node VP. D. Expanding the symbol node NP. E, Expanding the start node S. accepting node. To increase computational power from regular languages to context-free languages, recursive transition networks (RTNs) are introduced. instantiation of the Rule's chain indicates the partial progress in sequencing the Rule's right-hand-side symbols. An RTN is a forest of disconnected transition networks, each identified by a nonterminal label. All other labels are terminal labels. When, in traversing a transition network, a nonterminal label is encountered, control recursively passes to the beginning of the correspondingly labelled transition network. Should this labelled network be successfully traversed, on exit, control returns back to the labelled calling node. The linear text of a context-free grammar can be cast into an RTN structure (Perlin, 1989). This is done by expanding each grammar rule into a linear chain. The top-down expansion amounts to a partial evaluation (Futamura, 1971) of the rule into a computational expectation: an eventual bottom-up data-directed instantiation that will complete the expansion. Figure 1, for example, shows the expansion of the grammar rule #1 S---~NP VP. First, the nonterminal S, which labels this connected component, is expanded as a nonterminal node. One method for realizing this nonterminal node, is via Rule#l; its rule node is therefore expanded. Rule#1 sets up the expectation for the VP symbol node, which in turn sets up the expectation for the NP symbol node. NP, the first symbol node in the chain, creates the start node S. In subsequent processing, posting an instance of this start symbol would indicate an expectation to instantiate the entire chain of Rule#l, thereby detecting a nonterminal symbol S. Partial The expansion in Figure 1 constructs an LR-RTN. That is, it sets up a Left-to-fight parse of a Rightmost derivation. Such derivations are developed in the next Section. As used in AI natural language parsing, RTNs have more typically been LL-RTNs, for effecting parses of leftmost derivations (Woods, 1970), as shown in Figure 2A. (Other, more efficient, control structures have also been used (Kaplan, 1973).) Our shift from LL to LR, shown in Figure 2B, uses the chain expansion to set up a subsequent data-driven completion, thereby permitting greater parsing efficiency. In Figure 3, we show the RTN expansion of the simple grammar used in our first set of examples: S -+ NP VP NP--)N i DN VP --) V NP . Chains that share identical prefixes are merged (Perlin, 1989) into a directed acyclic graph (DAG) (Aho, Hopcroft, and Ullman, 1983). This makes our RTN a forest of DAGs, rather than trees. For example, the shared NP start node initiates the chains for Rules #2 and #3 in the NP component. 
In augmented recursive transition networks (ATNs) (Woods, 1970), semantic constraints may be expressed. These constraints can employ case grammars, functional grammars, unification, and so on (Winograd, 1983). In our RTN formulation, semantic testing occurs when instantiating rule nodes: failing a constraint removes a parse from further ( ) (A) () (B) Figure 2. A. An LL-RTN for S~NP VP. This expansion does not set up an expectation for a data-driven leftward parse. B. The corresponding LR-RTN. The rightmost expansion sets up subsequent data-driven leftward parses. 99 processing. This approach applies to every parsing algorithm in this paper, and will not be discussed further. Figure 3. The RTN of an entire grammar. The three connected components correspond to the three nonterminals in the grammar. Each symbol node in the RTN denotes a subsequence originating from its lefimost start symbol. 3. NONDETERMINISTIC DERIVATIONS A grammar's RTN can be used as a template for parsing. A sentence (the data) directs the instantiation of individual rule chains into a parse tree. The RTN instances exactly correspond to parse. tree nodes. This is most easily seen with nondeterministic rightmost derivations. Given an input sentence of n words, we may derive a sentence in the language with the nondeterministic algorithm (Perlin, 1990): Put an instance of nonterminal node S into the last column. From right to left, for every column : From top to bottom, within the column : (i) Recursively expand the column top-down by nondeterministic selection of rule instances. (2) Install the next (leftward) symbol instance. In substep (1), following selection, a rule node and its immediately downward symbol node are instantiated. The instantiation process creates a new object that inherits from the template RTN node, adding information about column position and local link connections. For example, to derive "I Saw A Man" we would nondeterministically select and instantiate the correct rule choices #1, #4, #2, and #3, as in Figure 4. Following the algorithm, the derivation is (two dimensionally) top-down: top-to-bottom and right-to- left. To actually use this nondeterministic derivation algorithm to obtain all parses, one might enumerate and test all possible sequences of rules. This, however, has exponential cost in n, the input size. A more efficient approach is to reverse the top-down derivation, and recursively generate the parse(s) bottom-up from the input data. () [ cJ Figure 4. The completed top-down derivation (parse- tree) of "I Saw A Man". Each parse-tree symbol node denotes a subsequence of a recognized RTN chain. Rule #0 connects a word to its terminal symbol(s). 4. BASIC LR(0) PARSING To construct a parser, we reverse the above top- down nondeterministic derivation teChnique into a bottom-up deterministic algorithm. We first build an inefficient LR-parser, illustrating the reversal. For efficiency, we then introduce the Follow-Set, and modify our parser accordingly. 4.1 AN INEFFICIENT BLR(0) PARSER A simple, inefficient parsing algorithm for computing all possible parse-trees is: Put an instance of start node S into the 0 column. From left to right, for every column : From bottom to top, within the column : (i) Initialize the column with the input word. (2) Recursively complete the column bottom-up using the INSERT method. This reverses the derivation algorithm into bottom-up generation: bottom-to-top, and left-to-right. 
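For concreteness, the chain expansion and prefix merging of Section 2, which the column-by-column generation just described instantiates, can be sketched roughly as follows. build_rtn and the trie encoding are illustrative choices, not the CACHE-based implementation reported later in the paper.

def build_rtn(rules):
    """rules: (lhs, rhs) pairs, numbered in order.  Returns, per nonterminal,
    a prefix trie whose root is the start node and whose leaves are rule nodes."""
    rtn = {}
    for number, (lhs, rhs) in enumerate(rules, 1):
        node = rtn.setdefault(lhs, {})          # shared start node for this label
        for symbol in rhs:                      # one symbol node per RHS symbol
            node = node.setdefault(symbol, {})
        node['#rule'] = (number, lhs, rhs)      # rule node completing the chain
    return rtn

rtn = build_rtn([('S', ['NP', 'VP']),
                 ('NP', ['N']),
                 ('NP', ['D', 'N']),
                 ('VP', ['V', 'NP'])])
# the NP component has a single start node with two chains hanging off it
print(sorted(k for k in rtn['NP'] if k != '#rule'))    # ['D', 'N']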
In the inner loop, the Step (1) initialization is straightforward; we elaborate Step (2). 100 Step (2) uses the following method (Perlin, 1991) to insert instances of RTN nodes: INSERT ( instance ) ( ASK instance (I) Link up with predecessor instances. (2) Install self. (3) ENQUEUE successor instances for insertion. } In (1), links are constructed between the instance and its predecessor instances. In (2), the instance becomes available for cartesian product formation. In (3), the computationally nontrivial step, the instance enqueues any successor instances within its own column. Most of the INSERT action is done by instances of symbol and rule RTN nodes. Using our INSERT method, a new symbol instance in the parse-tree links with predecessor instances, and installs itself. If the symbol's RTN node leads upwards to a rule node, one new rule instance successor is enqueued; otherwise, not. Rule instances enqueue their successors in a more complicated way, and may require cartesian product formation. A rule instance must instantiate and enqueue all RTN symbol nodes from which they could possibly be derived. At most, this is the set SAME-LABEL(rule) = { N • RTN I N is a symbol node, and the label of N is identical to the label of the rule's nonterminal successor node }. For every symbol node in SAME-LABEL(rule), instances may be enqueued. If X • SAME- LABEL(rule) immediately follows a start node, i.e., it begins a chain, then a single instance of it is enqueued. If Y e SAME-LABEL(rule) does not immediately follow a start node, then more effort is required. Let X be the unique RTN node to the left of Y. Every instantiated node in the parse tree is the root of some subtree that spans an interval of the input sentence. Let the left border j be the position just to left of this interval, and k be the rightmost position, i.e., the current column. Then, as shown in Figure 5, for every instance x of X currently in position j, an instance y (of Y) is a valid extension of subsequence x that has support from the input sentence data. The cartesian product { x I x an instance of X in column j } x { rule instance} forms the set of all valid predecessor pairs for new instances of Y. Each such new instance y of Y is enqueued, with some x and the rule instance as its two predecessors. Each y is a parse-tree node representing further progress in parsing a subsequence. RTN chain - X - Y - x" y'~ x'. y' x, y position ~ a i i' i" j k Figure 5. The symbol node Y has a left neighbor symbol node X in the RTN. The instance y of Y is the root ofa parse-subtree that spans (j+l ak). Therefore, the rule instance r enqueues (at leasO all instances of y, indexed by the predecessor product: { x in column j } × {r }. 4.2. USING THE FOLLOW-SET Although a rule parse-node is restricted to enqueue successor instances of RTN nodes in SAME- LABEL(rule), it can be constrained further. Specifically, if the sentence data gives no evidence for a parse-subtree, the associated symbol node instance need never be generated. This restriction can be determined column-by-column as the parsing progresses. We therefore extend our bottom-up parsing algorithm to: Put an instance of start node S into the 0 column. From left to right, for every column: From bottom to top, within the column : (I) Initialize the column with the input word. (2) Recursively complete the column bottom-up using the INSERT method. (3) Compute the column's (rightward) Follow-Set. With the addition of Step (3), this defines our Basic LR(O), or BLR(O), parser. 
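The INSERT method used in Step (2) is essentially an agenda-driven fixpoint over one column. A minimal sketch of that control structure is given below; predecessor links and the Follow-Set restriction of Section 4.2 are left out, and the successor relation shown is purely illustrative rather than the RTN-driven one described in the text.

from collections import deque

def complete_column(seeds, successors_of, chart):
    """Exhaust one column bottom-up: install each instance once and
    enqueue whatever it licenses."""
    agenda = deque(seeds)
    while agenda:
        instance = agenda.popleft()
        if instance in chart:
            continue
        chart.add(instance)                            # (2) install self
        agenda.extend(successors_of(instance, chart))  # (3) enqueue successors
    return chart

def successors_of(instance, chart):
    # purely illustrative; in the parser a symbol instance enqueues a rule
    # instance, and a rule instance enqueues the cartesian product of
    # column-j instances with itself, as described above
    return [instance + 1] if instance < 3 else []

print(sorted(complete_column([0], successors_of, set())))   # [0, 1, 2, 3]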
We now describe the Follow-Set. Once an RTN node X has been instantiated in some column, it sets up an expectation for • The RTN node(s) Yg that immediately follow it; • For each immediate follower Yg, all those RTN symbol nodes Wg,h that initiate chains that could recursively lead up to Yg. This is the Follow-Set (Aho, Sethi, and Ullman, 1986). The Follow-Set(X) is computed directly from the RTN by the recursion: 101 Follow-Set(X) LET Result c- For every unvisited RTN node Y following X: Result e- ( Y } to IF Y's label is a terminal symbol, THEN O; ELSE Follow-Set of the start symbol of Y's label Return Result As is clear from the re, cursive definition, Follow-Set (tog {Xg}) = tog Follow-Set (Xg). Therefore, the Follow-Set of a column's symbol nodes can be deferred to Step (3) of the BLR(0) parsing algorithm, after the determination of all the nodes has completed. By only recursing on unvisited nodes, this traversal of the grammar RTN has time cost O(IGI) (Aho, Sethi, and UUman, 1986), where IGI >_ IRTNI is the size of the grammar (or its RTN graph). A Follow-Set computation is illustrated in Figure 6. Figure 6. The Follow-Set (highlighted in the display) of RTN node V consists of the immediately following nonterminal node NP, and the two nodes immediately following the start NP node, D and N. Since D and N are terminal symbols, the traversal halts. The set of symbol RTN nodes that a rule instance r spanning (j+l,k) can enqueue is therefore not SAME-LABEL(rule), but the possibly smaller set of RTN nodes SAME-LABEL(rule) n Follow-Set(j). To enqueue r's successors in INSERT, LET Nodes = SAME-LABEL(rule) rh Follow-Set (j) . For every RTN node Y in Nodes, create and enqueue all instances y inY: Let X be the leftward RTN symbol node neighbor of Y. Let PROD = {x I x an instance of X in column j) x (r), if X exists; {r}, otherwise. Enqueue all members of PROD as instances of y. The cartesian product PROD is nonempty, since an instantiated rule anticipates those elements of PROD mandated by Follow-Sets of preceding columns. The pruning of Nodes by the Follow-Set eliminates all bottom-up parsing that cannot lead to a parse-subtree at column k. In the example in Figure 7, Rule instance r is in position 4, with j=3 and k=4. We have: SAME-LABEL(r) = {N 2, N 3 }, i.e, the two symbol nodes labelled N in the sequences of Rules #2 and #3, shown in the LR-RTN of Figure 6. Follow-Set(3) = Follow-Set(I D 2 }) = {N21. Therefore, SAME-LABEL(r)c~Follow-Set(3) = {N2}. ¢ ® [ Figure 7. Th~ ) ) r ] s rule instance r can only instantiate the single successor instance N 2. r uses the RTN to find the left RTN neighbor D of N 2. r then computes the cartesian product of instance d with r as {d}x{r}, generating the successor instance of N 2 shown. 5. EARLEY'S PARSING ALGORITHM Natural languages such as English are ambiguous. A single sentence may have multiple syntactic structures. For example, extending our simple grammar with rules accounting for Prepositions and Prepositional-Phrases (Tomita, 1986) S -9 S PP NP -9 NP PP PP -9 P NP, the sentence "I saw a man on the hill with a telescope through the window" has 14 valid derivations, In parsing, separate reconstructions of these different parses can lead to exponential cost. For parsing efficiency, partially constructed instance-trees can be merged (Earley, 1970). As before, parse-node x denotes a point along a parse- sequence, say, v-w-x. The left-border i of this parse- sequence is the left-border of the leftmost parse-node in the sequence. 
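The same traversal can be written down directly; the sketch below runs it on the VP -> V NP, NP -> N | D N fragment used in Figure 6. The node naming ('V@VP' for the V symbol node in the VP chain, 'NP!' for the NP start node, and so on) and the table encodings are illustrative assumptions, not the actual RTN data structure.

FOLLOWERS = {
    'VP!': ['V@VP'], 'V@VP': ['NP@VP'], 'NP@VP': [],
    'NP!': ['N@NP2', 'D@NP3'], 'D@NP3': ['N@NP3'], 'N@NP2': [], 'N@NP3': [],
}
LABEL = {'V@VP': 'V', 'NP@VP': 'NP', 'N@NP2': 'N', 'D@NP3': 'D', 'N@NP3': 'N'}
NONTERMINALS = {'VP', 'NP'}

def follow_set(node, visited=None):
    visited = set() if visited is None else visited
    result = set()
    for y in FOLLOWERS[node]:
        if y in visited:
            continue
        visited.add(y)
        result.add(y)
        if LABEL[y] in NONTERMINALS:               # recurse into that component
            result |= follow_set(LABEL[y] + '!', visited)
    return result

print(sorted(follow_set('V@VP')))    # ['D@NP3', 'N@NP2', 'NP@VP']

The output is the {NP, D, N} set of the figure: the NP node following V, plus the two nodes that immediately follow the NP start node.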
All parse-sequences of RTN symbol node X that cover columns i+l through k may be collected into a single equivalence class X(i,k). For 102 the purposes of (1) continuing with the parse and (2) disambiguating parse-trees, members of X(i,k) are indistinguishable. Over an input sentence of length n, there are therefore no more than O(n 2) equivalence classes of X. Suppose X precedes Y in the RTN. When an instance y of Y is added m position k, k.<_n, and the cartesian product is formed, there are only O(k 2) possible equivalence classes of X for y to combine with. Summing over all n positions, there are no more than O(n 3) possible product formations with Y in parsing an entire sentence. Merging is effected by adding a MERGE step to INSERT: INSERT ( instance ) ( instance ~- MERGE (instance) ASK instance (1) Link up with predecessor instances. (2) Install self. (3) ENQUEUE successor instances for insertion. } The parsing merge predicate considers two instantiated sequences equivalent when: (1) Their RTN symbol nodes X are the same. (2) They are in the same column k. (3) They have identical left borders i. The total number of links formed by INSERT during an entire parse, accounting for every grammar RTN node, is O(n3)xO(IGI). The chart parsers are a family of algorithms that couple efficient parse-tree merging with various control organizations (Winograd, 1983). 6. TOMITA'S PARSING ALGORITHM In our BLR(0) parsing algorithm, even with merging, the Follow-Set is computed at every column. While this computation is just O(IGI), it can become a bottleneck with the very large grammars used in machine translation. By caching the requisite Follow-Set computations into a graph, subsequent Follow-Set computation is reduced. This incremental construction is similar to (Heering, Klint, and Rekers, 1990)'s, asymptotically constructing Tomita's all- paths LR parsing algorithm (Tomita, 1986). The Follow-Set cache (or LR-table) can be dynamically constructed by Call-Graph Caching (Perlin, 1989) during the parsing. Every time a Follow-Set computation is required, it is looked up in the cache. When not present, the Follow-Set is computed and cached as a graph. Following DeRemer (DeRemer, 1971), each cached Follow-Set node is finely partitioned, as needed, into disjoint subsets indexed by the RTN label name, as shown in the graphs of Figure 8. The partitioning reduces the cache size: instead of allowing all possible subsets of the RTN, the cache graph nodes contain smaller subsets of identically labelled symbol nodes. When a Follow-Set node has the same subset of (A) • 5 I~-N ~V~3-'~D~4~N (!)) II [ii----I P--2 P P--5 ( P (~ ! L(~) ( 1 V--3 --4--N ® [ ] I (o Figure 8. (A) A parse of "I Saw A Man" using the grammar in Oromita, 1986). (B) The Follow-Set cache dynamically constructed during parsing. Each cache node represents a subset of RTN symbol nodes. The numbers indicate order of appearance; the lettered nodes partition their preceding node by symbol name. Since the cache was created on an as-needed basis, its shape parallels the shape of the parse-tree. (C) Compressing the shape of (B). 103 P P P P ~ ~ P'-23 ~-- ~1 F~--5 1 1P-'I 3~-'D--'I 5-N 1P'--I 9~.D--21--N (A) 1 ~-~N ~V ~3/--~, 4 ~N (B) Figure 9. The LR table cache graph when parsing "I Saw A Man On The Hill With A Telescope Through The Window" (A) without cache node merging, and (B) with merging. grammar symbol nodes as an already existing Follow-Set node, it is merged into the older node's equivalence class. 
This avoids redundant expansions, without which the cache would be an infinite tree of parse paths, rather than a graph. A comparison is shown in Figure 9. If the entire LR-table cache is needed, an ambiguous sentence containing all possible lexical categories at each position can be presented; convergence follows from the finiteness of the subset construction. 7. IMPLEMENTATION AND CURRENT WORK We have developed an interactive graphical programming environment for constructing LR- parsers. It uses the color MAC/II computer in the Object LISP extension of Common LISP. The system is built on CACHE TM (Perlin, © 1990), a general Call-Graph Caching system for animating AI algorithms. The RTNs are built from grammars. A variety of LR-RTN-based parsers, including BLR(0), with or without merging, and with or without Follow-Set caching have been constructed. Every algorithm described in this paper is implemented. Visualization is heavily exploited. For example, selecting an LR- table cache node will select all its members in the RTN display. The graphical animation component automatically drew all the RTNs and parse-trees in the Figures, and has generated color slides useful in teaching. Fine-grained parallel implementations of BLR(0) on the Connection Machine are underway to reduce the costly cartesian product step to constant time. We are also adding semantic constraints. 8. CONCLUSION We have introduced BLR(0), a simple bottom-up LR RTN-based CF parsing algorithm. We explicitly expand grammars to RTNs, and only then construct our parsing algorithm. This intermediate step eliminates the complex algebra usually associated with parsing, and renders more transparent the close relations between different parsers. Earley's algorithm is seen to be fundamentally an LR parser. Earley's propose expansion step is a recursion analogous to our Follow-Set traversal of the RTN. By explicating the LR-RTN graph in the computation, no other complex data structures are required. The efficient merging is accomplished by using an option available to BLR(0): merging parse nodes into equivalence classes. Tomita's algorithm uses the cached LR Follow-Set option, in addition to merging. Again, by using the RTN as a concrete data structure, the technical feats associated with Tomita's parser disappear. His shared packed forest follows immediately from our merge option. His graph stack and his parse forest are, for us, the same entity: the shared parse tree. Even the LR table is seen to derive from this parsing activity, particularly with incremental construction from the RTN. 104 Bringing the RTN into parsing as an explicit realization of the original grammar appears to be a conceptual and implementational improvement over less uniform treatments. ACKNOWLEDGMENTS Numerous conversations with Jaime Carbonell were helpful in developing these ideas. I thank the students at CMU and in the Tools for Al tutorial whose many questions helped clarify this approach. REFERENCES Aho, A.V., Hopcrofl, J.E., and Ullman, J.D. 1983. Data Structures and Algorithms. Reading, MA: Addison-Wesley. Aho, A.V., Sethi, R., and Ullman, J.D. 1986. Compilers: Principles, Techniques and Tools. Reading, MA: Addison-Wesley. DeRemer, F. 1971. Simple LR(k) grammars. Communications of the ACM, 14(7): 453-460. Earley, J. 1970. An Efficient Context-Free Parsing Algorithm. Communications of the ACM, 13(2): 94-102, Futamura, Y. 1971. Partial evaluation of computation process - an approach to a compiler- compiler. Comp. Sys. 
Cont., 2(5): 45-50, Heering, J., Klint, P., and Rekers, J. 1990. Incremental Generation of Parsers. IEEE Trans. Software Engineering, 16(12): 1344-1351. Hopcroft, J.E., and Ullman, J.D. 1979. Introduction to Automata Theory, Languages, and Computation. Reading, Mass.: Addison-Wesley. Kaplan, R.M. 1973. A General Syntactic Processor. In Natural Language Processing, Rustin, R., ed., 193-241. New York, NY: Algorithmics Press. Knuth, D.E. 1965. On the Translation of Languages from Left to Right. Information and Control, 8(6): 607-639. Lang, B. 1974. Deterministic techniques for efficient non-deterministic parsers. In Proc, Second Colloquium Automata, Languages and Programming, 255-269. l_xxx:kx, J., ed., (Lecture Notes in Computer Science, vol. 14), New York: Springer-Verlag. Lang, B. 1991. Towards a Uniform Formal Framework for Parsing. In Current Issues in Parsing Technology, Tomita, M., ed., 153-172. Boston: Kluwer Academic Publishers. Perlin, M.W. 1989. Call-Graph Caching: Transforming Programs into Networks. In Proc. of the Eleventh Int. Joint Conf. on Artificial Intelligence, 122-128. Detroit, Michigan, Morgan Kaufmann. Perlin, M.W. 1990. Progress in Call-Graph Caching, Tech Report, CMU-CS-90-132, Carnegie- Mellon University. Perlin, M.W. 1991. RETE and Chart Parsing from Bottom-Up Call-Graph Caching, submitted to conference, Carnegie Mellon University. Perlin, M.W. © 1990. CACHETS: a Color Animated Call-grapH Environment, ver. 1.3, Common LISP MACINTOSH Program, Pittsburgh, PA. Tomita, M. 1985. An Efficient Context-Free Parsing Algorithm for Natural Languages. In Proceedings of the Ninth IJCAI, 756-764. Los Angeles, CA,. Tomita, M. 1986. Efficient Parsing for Natural Language. Kluwar Publishing. Winograd, T. 1983. Language as a Cognitive Process, Volume I: Syntax. Reading, MA: Addison- Wesley. Woods, W.A. 1970. Transition network grammars for natural language analysis. Comm ACM, 13(10): 591-606. 105
Polynomial Time and Space Shift-Reduce Parsing of Arbitrary Context-free Grammars.* Yves Schabes Dept. of Computer & Information Science University of Pennsylvania Philadelphia, PA 19104-6389, USA e-mail: schabes~linc.cis.upenn.edu Abstract We introduce an algorithm for designing a predictive left to right shift-reduce non-deterministic push-down machine corresponding to an arbitrary unrestricted context-free grammar and an algorithm for efficiently driving this machine in pseudo-parallel. The perfor- mance of the resulting parser is formally proven to be superior to Earley's parser (1970). The technique employed consists in constructing before run-time a parsing table that encodes a non- deterministic machine in the which the predictive be- havior has been compiled out. At run time, the ma- chine is driven in pseudo-parallel with the help of a chart. The recognizer behaves in the worst case in O(IGI2n3)-time and O(IGIn2)-space. However in practice it is always superior to Earley's parser since the prediction steps have been compiled before run- time. Finally, we explain how other more efficient vari- ants of the basic parser can be obtained by deter- minizing portionsof the basic non-deterministic push- down machine while still using the same pseudo- parallel driver. 1 Introduction Predictive bottom-up parsers (Earley, 1968; Earley, 1970; Graham et al., 1980) are often used for natural language processing because of their superior average performance compared to purely bottom-up parsers *We are extremely indebted to Fernando Pereira and Stuart Shleber for providing valuable technical comments during dis- cussions about earlier versio/m of this algorithm. We are also grateful to Aravind Joehi for his support of this research. We also thank Robert Frank. All remaining errors are the author's responsibility alone. This research wa~ partially funded by ARO grant DAAL03-89-C0031PRI and DARPA grant N00014- 90-J-1863. such as CKY-style parsers (Kasami, 1965; Younger, 1967). Their practical superiority is mainly obtained because of the top-down filtering accomplished by the predictive component of the parser. Compiling out as much as possible this predictive component before run-time will result in a more efficient parser so long as the worst case behavior is not deteriorated. Approaches in this direction have been investigated (Earley, 1968; Lang, 1974; Tomita, 1985; Tomita, 1987), however none of them is satisfying, either be- cause the worst case complexity is deteriorated (worse than Earley's parser) or because the technique is not general. Furthermore, none of these approaches have been formally proven to have a behavior superior to well known parsers such as Earley's parser. Earley himself ([1968] pages 69-89) proposed to pre- compile the state sets generated by his algorithm to make it as efficient as LR(k) parsers (Knuth, 1965) when used on LR(k) grammars by precomputing all possible states sets that the parser could create. How- ever, some context-free grammars, including most likely most natural language grammars, cannot be compiled using his technique and the problem of knowing if a grammar can be compiled with this tech- nique is undecidable (Earley [1968], page 99). Lang (1974) proposed a technique for evaluating in pseudo-parallel non-deterministic push down au- tomata. Although this technique achieves a worst case complexity of O(n3)-time with respect to the length of input, it requires that at most two symbols are popped from the stack in a single move. 
When the technique is used for shift-reduce parsing, this con- straint requires that the context-free grammar is in Chomsky normal form (CNF). As far as the grammar size is concerned, an exponential worst case behavior is reached when used with the characteristic LR(0) 106 machine. 1 Tomita (1985; 1987) proposed to extend LR(0) parsers to non-deterministic context-free grammars by explicitly using a graph structured stack which represents the pseudo-parallel evaluation of the moves of a non-deterministic LR(0) push-down automaton. Tomita's encoding of the non-deterministic push- down automaton suffers from an exponential time and space worst case complexity with respect to the input length and also with respect to the grammar size (Johnson [1989] and also page 72 in Tomita [1985]). Although Tomita reports experimental data that seem to show that the parser behaves in practice better than Earley's parser (which is proven to take in the worst case O([G[2n3)-time), the duplication of the same experiments shows no conclusive outcome. Modifications to Tomita's algorithm have been pro- posed in order to alleviate the exponential complex- ity with respect to the input length (Kipps, 1989) but, according to Kipps, the modified algorithm does not lead to a practical parser. Furthermore, the algorithm is doomed to behave in the worst case in exponential time with respect to the grammar size for some am- biguous grammars and inputs (Johnson, 1989). 2 So far, there is no formal proof showing that the Tomita's parser can be superior for some grammars and in- puts to Earley's parser, and its worst case complexity seems to contradict the experimental data. As explained, the previous attempts to compile the predictive component are not general and achieve a worst case complexity (with respect to the gram- mar size and the input length) worse than standard parsers. The methodology we follow in order to compile the predictive component of Earley's parser is to define a predictive bottom-up pushdown machine equiva- lent to the given grammar which we drive in pseudo- parallel. Following Johnson's (1989) argument, any parsing algorithm based on the LR(0) characteris- tic machine is doomed to behave in exponential time with respect to the grammar size for some ambigu- ous grammars and inputs. This is a result of the fact that the number of states of an LR(0) characteristic machine can be exponential and that there are some grammars and inputs for which an exponential num- ber of states must be reached (See Johnson [1989] for examples of such grammars and inputs). One must therefore design a different pushdown machine which 1 The same arguraent for the exponential graramar size com- plexity of Tomita's parser (Johnson, 1989) holds for Lang's technique. 2 This problem is particularly acute for natural language pro- cessing since in this context the input length is typically small (10-20 words) and the granunar size very large (hundreds or thousands of rules and symbols). can be driven efficiently in pseudo-parallel. We construct a non-deterministic predictive push- down machine given an arbitrary context-free gram- mar whose number of states is proportional to the size of the grammar. Then at run time, we efficiently drive this machine in pseudo-parallel. Even if all the states of the machine are reached for some grammars and inputs, a polynomial complexity will still be obtained since the number of states is bounded by the gram- mar size. 
We therefore introduce a shift-reduce driver for this machine in which all of the predictive compo- nent has been compiled in the finite state control of the machine. The technique makes no requirement on the form of the context-free grammar and it behaves in the worst case as well as Earley's parser (Earley, 1970). The push-down machine is built before run- time and it is encoded as parsing tables in the which the predictive behavior has been compiled out. In the worst case, the recognizer behaves in the same O([Gl2nS)-time and O([G[n2)-space as Earley's parser. However in practice it is always superior to Earley's parser since the prediction steps have been eliminated before run-time. We show that the items produced in the chart correspond to equiva- lence classes on the items produced for the same input by Earley's parser. This mapping formally shows its practical superior behavior. 3 Finally, we explain how other more efficient vari- ants of the basic parser can be obtained by deter- minizing portions of the basic non-deterministic push- down machine while still using the same pseudo- parallel driver. 2 The Parser The parser we propose handles any context-free gram- mar; the grammar can be ambiguous and need not be in any normal form. The parser is a predictive shift- reduce bottom-up parser that uses compiled top down prediction information in the form of tables. Before run-time, a non-deterministic push down automa- ton (NPDA) is constructed from a given context-free grammar. The parsing tables encode the finite state control and the moves of the NPDA. At run-time, the NPDA is then driven in pseudo-parallel with the help of a chart. We show the construction of a basic machine which will be driven non-deterministically. In the following, the input string is w -- al...an and the context-free grammar being considered is G = (~, NT, P, S), where ~ is the set of terminal 3The characteristic LR(0) machine is the result of deter- minizing the n~acldne we introduce. Since this procedure in- troduce exponentially more states, the LR(0) machine can be exponentially large. 107 symbols, NT the set of non-terminal symbols, P a set of production rules, S the start symbol. We will need to refer to the subsequence of the input string w = az...aN from position i to j, w]i,j], which we define as follows: f ai+l ... aj , if i < j w]i,~] I, ¢ ,ifi>_j We explain the data-structures used by the parser, the moves of the parser, and how the parsing tables are constructed for the basic NPDA. Then, we study the formal characteristics of the parser. The parser uses two moves: shift and reduce. As in standard shift-reduce parsers, shift moves recognize new terminal symbols and reduce moves perform the recognition of an entire context-free rule. However in the parser we propose, shift and reduce moves behave differently on rules whose recognition has just started (i.e. rules that have been predicted) than on rules of which some portion has been recognized. This be- havior enables the parser to efficiently perform reduce moves when ambiguity arises. 2.1 Data-Structures and the Moves of the Parser The parser collects items into a set called the chart, C. Each item encodes a well formed substring of the input. The parser proceeds until no more items can be added to the chart C. An item is defined as a triple (s,i,jl, where s is a state in the control of the NPDA, i and j are indices referring to positions in the input string (i, j E [0, n]). 
In an item (s,i,j), j corresponds to the current position in the input string and i is a position in the input which will facilitate the reduce move. A dotted rule of a context-free grammar G is defined as a production of G associated with a dot at some position of the right hand side: A ~ a •/~ with A --~ afl E P. We distinguish two kinds of dotted rules. Kernel dotted rules, which are of the form A ~ a • fl with a non empty, and non-kernel dotted rules, which have the dot at the left most position in the right hand side (A --~ •1~). As we will see, non-kernel dotted rules correspond to the predictive component of the parser. We will later see each state s of the NPDA corre- sponds to a set of dotted rules for the grammar G. The set of all possible states in the control of the NPDA is written S. Section 2.2 explains how the states are constructed. The algorithm maintains the following property (which guarantees its soundness)4: if an item (s, i,j) is in the chart C then for all dotted rules A ~ aofl E s the following is satisfied: (i) if a E (E U NT) +, then B7 E (NT U ~)* such that S~w]o,i]A 7 and a=:=~w]~d]; (ii) if a is the empty string, then B 7 E (NT O ~)* such that S=~w]0./]A 7. The parser uses three tables to determine which move(s) to perform: an action table, ACTION, and two goto tables, the kernel goto table, GOTOk, and the non-kernel goto table, GOTOnk. The goto tables are accessed by a state and a non- terminal symbol. They each contain a set of states: GOTO~(s,X) = {r},GOTOnk(s,X) = {r'} with r, rt,s E S,X E NT. The use of these tables is ex- plained below. The action table is accessed by a state and a ter- minal symbol. It contains a set of actions. Given an item, (s, i,j), the possible actions are determined by the content of ACTION(s, aj+x) where aj+l is the j + 1 th input token. The possible actions contained in ACTION(s, aj+l) are the following: • KERNEL SHIFT s t, (ksh(s t) for short), for s t E S. A new token is recognized in a kernel dotted rule A --* a • aft and a push move is performed. The item (s I, i,j + 1) is added to the chart, since aa spans in this case w]i,j+l]. • NON-KERNEL SHIFT s t, (nksh(s I) for short), for s t E S. A new token is recognized in a non- kernel dotted rule of the form A --* •aft. The item (s',j,j + 1) is is added to the chart, since a spans in this case wljj+x ] • REDUCE X ---. fl, (red(X ---* fl) for short), for X --* ~ E P. The context-free rule X --*/~ has been totally recognized. The rule spans the sub- string ai+z ...aj. For all items in the chart of the form (s ~, k, i), perform the following two steps: - for all rl E GOTOk(s',X), it adds the item (ra, k,j) to the chart. In this case, a dotted rule of the form A ~ a • Xfl is combined with X --* fl• to form A ---* aX •/~; since a spans w]k,i] and X spans wli,j], aX spans w]k,j]. - for all r2 E GOTOnk(s t, X), it adds the item (r2,i,j) to the chart. In this case, a dot- ted rule of the form A ~ • Xf~ is combined with X --~ fl• to form A ~ X •/~; in this case X spans w]idl- 4This property holds for all machines derived from the basic NPDA. 108 The recognizer follows: begin (* recognizer *) Input: al * • • an ACTION GOTO~ GOTOnk start E ,9 .~ C ,q (* input string *) (* action table *) (* kernel goto table *) (* non-kernel goto table *) (* start state *) (* set of final states *) Output:acceptance or rejection of the input string. 
Initialization: C := {(start, O, 0)} Perform the following three operations until no more items can be added to the chart C: (1) KERNEL SHIFT: if (s,i,j) 6 C and if ksh(s') 6 ACTION(s, aj+I), then (s', i, j + 1) is added to C. (2) NON-KERNEL SHIFT: if (s,i,j) e C and if nksh(s') E ACTION(s, aj+I), then (s',j,j+ 1) is added to C. (3) REDUCE: if (s, i, j) E C, then for all X --~ j3 s.t. red(X ~ ~) 6 ACTION(s, aj+t) and for all (s', k, i) E C, perform the follow- ing: • for all rl 6 GOTO~(s',X), (rl,k,j) is added to C; • for all r2 E GOTOnk(s',X), (r~,i,j) is added to C. If {(s, O, n) I (s, O, n) 6 C and s e .r} .# # then return acceptance otherwise return rejection. end (* recognizer *) In the above algorithm, non-determinism arises from multiple entries in ACTION(s, a) and also from the fact that GOTOk(s,X)and GOTOnk(s,X)con- tain a set of states. 2.2 Construction of the Parsing Tables We shall give an LR(0)-like method for constructing the parsing tables corresponding to the basic NPDA. Several other methods (such as LR(k)-like, SLR(k)- like) can also be used for constructing the parsing tables and are described in (Schabes, 1991). To construct the LR(0)-like finite state control for the basic non-deterministic push-down automaton that the parser simulates, we define three functions, closure, gotok and gotonk. If s is a state, then closure(s) is the state con- structed from s by the two rules: (i) Initially, every dotted rule in s is added to closure(s); (ii) If A --* a • B/~ is in closure(s) and B --* 7 is a production, then add the dotted rule B --* e7 to closure(s) (if it is not already there). This rule is applied until no more new dotted rules can be added to closure(s). If s is a state and if X is a non-terminal or terminal symbol, gotok(s,X) and gotonk(s,X) are the set of states defined as follows: gotok(s, X) = {closure({A • A -* • XZ e s and a E (Z3 U NT) + } gotonk ( s, X ) = {closure({A X .,8))1 A • s} The goto functions we define differ from the one de- fined for the LR(0) construction in two ways: first we have distinguished transitions on symbols from ker- nel items and non-kernel items; second, each state in goto~(s,X) and gOtOn~(S,X) contains exactly one kernel item whereas for the LR(0) construction they may contain more than one. We are now ready to compute the set of states ,9 defining the finite state control of the parser. The SET OF STATES CONSTRUCTION is con- structed as follows: procedure states(G) begin S := {closure({S --, .~ I S-* a e P})} repeat for each state s in 8 for each X E r~ u NT terminal for each r E gotok(s,X) U goton~(s, X) add r to S until no more states can be added to 8 end PARSING TABLES. Now we construct the LR(0) parsing tables ACTION, GOTOk and GOTOnk from the finite state control constructed above. Given a context-free grammar G, we construct ~q, the set of states for G with the procedure given above. We con- struct the action table ACTION and the goto tables using the following algorithm. begin (CONSTRUCTION OF THE PARSING TABLES) Input: A context-free grammar G = (Y,, NT, P, S). Output: The parsing tables ACTION, GOTOk and GOTOnk for G, the start state start and the set of final states ~'. 109 Step 1. Construct 8 = {so,..., sin}, the set of states for G. Step 2. 
The parsing actions for state si are deter- mined for all terminal symbols a E ~ as follows: (i) for all r e gotok(si,a), add ksh(r) to ACTION(si, a); (ii) for all r E goto, k(si,a), add nksh(r) to to ACTION(si, a); (iii) if A --* a* is in si, then add red(A--* a) to ACTION(si, a) for all terminal symbol a and for the end marker $. Step 4. The kernel and non-kernel goto tables for state si are determined for all non-terminal sym- bols X as follows: (i) VX E NT, GOTO~(si,X) := gotok(si,X) (ii) VX E NT, GOTOnk(si, X) :-- gotonk(si, X) Step 3. The start state of the parser is start := ciosure({S --* .a I S --~ a ~_ P}) Step 4. The set of final states of the parser is Y := {s e SI3 S--* a 6 P s.t. S--. a. E s} end (CONSTRUCTION OF THE PARSING TABLES) Appendix A gives an example of a parsing table. 3 Complexity The recognizer requires in the worst case O([GIn2)- space and O([G[2na)-time; n is the length of the input string, ]GI is the size of the grammar computed as the sum of the lengths of the right hand side of each productions: [GI = E [a I , where la] is the length of a. A-*a EP One of the objectives for the design of the non- deterministic machine was to make sure that it was not possible to reach an exponential number of states, a property without which the machine is doomed to have exponential complexity (Johnson, 1989). First we observe that the number of states of the finite state control of the non-deterministic machine that we constructed in Section 2.2 is proportional to the size of the grammar, IG[. By construction, each state (except for the start state) contains exactly one ker- nel dotted rule. Therefore, the number of states is bounded by the maximum number of kernel rules of the form A --* ao/~ (with a non empty), and is O(IGI). We conclude that the algorithm requires in the worst case O(IGIn~)-space since the maximum number of items (8, i, j) in the chart is proportional to IGIn 2. A close look at the moves of the parser reveals that the reduce move is the most complex one since it in- volves a pair of states (s, i,j) and (s', k,j/. This move can be instantiated at most O(IGI2nS)-time since i,j,k E [0, n] and there are in the worst case O(IGI ~) pairs of states involved in this move. 5 The parser therefore behaves in the worst case in O(IGI2nS)-time. One should however note that in order to bound the worst case complexity as stated above, arrays similar to the one needed for Earley's parser must be used to implement efficiently the shift and reduce moves. 6 As for Earley's parser, it can also be shown that the algorithm requires in the worst case O(IGI2n2)-time for unambiguous context-free grammars and behaves in linear time on a large class of grammars. 4 Retrieving a Parse The algorithm that we described in Section 2 is a rec- ognizer. However, if we include pointers from an item to the other items (to a pair of items for the reduce moves or to an item for the shift moves) which caused it to be placed in the chart, the recognizer can be modified to record all parse trees of the input string. The representation is similar to a shared forest. The worst case time complexity of the parser is the same as for the recognizer (O([GI2n3)-time) but, as for Earley's parser, the worst case space complexity increases to O([G[2n 3) because of the additional book- keeping. 5 Correctness and Comparison with Earley's Parser We derive the correctness of the parser by showing how it can be mapped to Earley's parser. 
In the pro- cess, we will also be able to show why this parser can be more efficient than Earley's parser. The detailed proofs are given in (Schabes, 1991). We are also interested in formally characterizing the differences in performance between the parser we propose and Earley's parser. We show that the parser behaves in the worst scenario as well as Ear- ley's parser by mapping it into Earley's parser. The parser behaves better than Earley's parser because it has eliminated the prediction step which takes in the worst case O(]GIn)-time for Earley's parser. There- fore, in the most favorable scenario, the parser we SKerael shift and non-kernel shift moves require both at most O(IGIn 2 )-time. 6Due to the lack of space, the details of the implementation are not given in this paper but they are given in (Schabes, 1991). 110 propose will require O(IGln) less time than Earley's parser. For a given context-free grammar G and an input string al .-.an, let C be the set of items produced by the parser and CearZey be the set of items produced by Earley's parser. Earley's parser (Earley, 1970) produces items of the form (A ---* a * ~, i, j) where A --* a • ~ is a single dotted rule and not a set of dotted rules. The following lemma shows how one can map the items that the parser produces to the items that Ear- ley's parser produces for the same grammar and in- put: Lemma 1 If Cs, i,j) E C then we have: (i) for all kernel dotted rules A ~ a • ~ E s, we have C A ~ ct • ~, i, j) E CearIey (ii) and for all non-kernel dotted rules A ---, *j3 E s, we have C A ~ •~, j, j) E Cearaev The proof of the above lemma is by induction on the number of items added to the chart C. This shows that an item is mapped into a set of items produced by Earley's parser. By construction, in a given state s E S, non-kernel dotted rules have been introduced before run-time by the closure of kernel dotted rules. It follows that Ear- ley's parser can require O(IGln) more space since all Earley's items of the form C A ~ •a, i, i) (i E [0, n]) are not stored separately from the kernel dotted rule which introduced them. Conversely, each kernel item in the chart created by Earley's parser can be put into correspondence with an item created by the parser we propose. Lemma 2 If CA --* a • fl, i,j) E CearZev and if (~ # e, then C s, i,j) e C where s = closure({A ~ a • fl}). The proof of the above lemma is by induction on the number of kernel items added to the chart created by Earley's parser. The correctness of the parser follows from Lemma 1 and its completeness from Lemma 2 since it is well known that the items created by Earley's parser are characterized as follows (see, for example, page 323 in Aho and Ullman [1973] for a proof of this invariant): Lemma 3 The item C A --. a • fl, i, j) E Ceartey if and only if, ST E (VNT U VT)* such that S"~W]o,i]XT and X==c, FA=~w]ij]A. The parser we propose is therefore more efficient than Earley's parser since it has compiled out predic- tion before run time. How much more efficient it is, depends on how prolific the prediction is and therefore on the nature of the grammar and the input string. 6 Optimizations The parser can be easily extended to incorporate stan- dard optimization techniques proposed for predictive parsers. The closure operation which defines how a state is constructed already optimizes the parser on chain derivations in a manner very similar to the tech- niques originally proposed by Graham eta]. (1980) and later also used by Leiss (1990). 
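Since the optimizations discussed here all hinge on how closure is defined, it may help to restate the closure and goto operations of Section 2.2 as code. The sketch below is only an illustration, assuming dotted rules are represented as (lhs, rhs, dot) triples and a grammar as a list of (lhs, rhs) pairs; none of these names or representation choices come from the paper.

```python
# Minimal sketch of closure, goto_k and goto_nk from Section 2.2.
# A dotted rule A -> alpha . beta is the triple (A, alpha+beta, len(alpha)).
def closure(state, grammar):
    result = set(state)                          # rule (i)
    changed = True
    while changed:                               # rule (ii), applied to a fixed point
        changed = False
        for (lhs, rhs, dot) in list(result):
            if dot < len(rhs):                   # the dot stands in front of some symbol B
                B = rhs[dot]
                for (l, r) in grammar:
                    if l == B and (l, r, 0) not in result:
                        result.add((l, r, 0))    # add the non-kernel rule B -> . r
                        changed = True
    return frozenset(result)

def goto_k(state, X, grammar):
    # one new state per kernel dotted rule A -> alpha . X beta (alpha non-empty)
    return {closure({(l, r, d + 1)}, grammar)
            for (l, r, d) in state if 0 < d < len(r) and r[d] == X}

def goto_nk(state, X, grammar):
    # one new state per non-kernel dotted rule A -> . X beta
    return {closure({(l, r, 1)}, grammar)
            for (l, r, d) in state if d == 0 and r and r[0] == X}

# The example grammar of the appendix: S -> S b S | S | a
G = [('S', ('S', 'b', 'S')), ('S', ('S',)), ('S', ('a',))]
start = closure({(l, r, 0) for (l, r) in G}, G)
print(len(goto_nk(start, 'S', G)))   # 2: each resulting state holds a single kernel item
```

In this formulation every state returned by either goto function contains exactly one kernel dotted rule, which is what keeps the number of states of the finite state control proportional to |G|.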
In addition, the closure operation can be designed to optimize the processing of non-terminal symbols that derive the empty string in manner very simi- lar to the one proposed by Graham et al. (1980) and Leiss (1990). The idea is to perform the reduction of symbols that derive the empty string at compila- tion time, i.e. include this type of reduction in the definition of closure by adding (iii): If s is a state, then closure(s) is now the state con- structed from s by the three rules: (i) Initially, every dotted rule in s is added to closure(s); (ii) ifA~ a.Bflisinclosure(s) andB ~ 7is a production, then add the dotted rule B ~ • 7 to closure(s) (if it is not already there); (iii) ifA ~ a.B~ is in closure(s) and ifB=~ e, then add the dotted rule A ~ aB • ~ to closure(s) (if it is not already there). Rules (ii) and (iii) are applied until no more new dotted rules can be added to closure(s). The rest of the parser remains as before. 7 Variants on the basic ma- chine In the previous section we have constructed a ma- chine whose number of states is in the worst case proportional to the size of the grammar. This re- quirement is essential to guarantee that the complex- ity of the resulting parser with respect to the gram- mar size is not exponential or worse than O(IGI2)- time as other well known parsers. However, we may use some non-determinism in the machine to guaran- tee this property. The non-determinism of the ma- chine is not a problem since we have shown how the non-deterministic machine can be efficiently driven in pseudo-parallel (in O([G[2n3)-time). We can now ask the question of whether it is pos- sible to determinize the finite state control of the ma- chine while still being able to bound the complexity of the parser to O([Gl2n3)-time. Johnson (1989) ex- hibits grammars for which the full determinization 111 of the finite state control (the LR(0) construction) leads to a parser with exponential complexity, because the finite state control has an exponential number of states and also because there are some input string for which an exponential number of states will be reached. However, there are also cases where the full determin~ation either will not increase the number of states or will not lead to a parser with exponential complexity because there are no input that require to reach an exponential number of states. We are cur- rently studying the classes of grammars for which this is the case. One can also try to determinize portions of the fi- nite state automaton from which the control is derived while making sure that the number of states does not become larger than O(IGI). All these variants of the basic parser obtained by determinizing portions of the basic non-deterministic push-down machine can be driven in pseudo-parallel by the same pseudo-parallel driver that we previously defined. These variants lead to a set of more efficient machines since the non-determinism is decreased. 8 Conclusion We have introduced a shift-reduce parser for unre- stricted context-free grammars based on the construc- tion of a non-deterministic machine and we have for- mally proven its superior performance compared to Earley's parser. The technique which we employed consists of con- structing before run-time a parsing table that encodes a non-deterministic machine in the which the predic- tive behavior has been compiled out. At run time, the machine is driven in pseudo-parallel with the help a chart. 
By defining two kinds of shift moves (on kernel dot- ted rules and on non-kernel dotted rules) and two kinds of reduce moves (on kernel and non-kernel dot- ted rules), we have been able to efficiently evaluate in pseudo-parallel the non-deterministic push down ma- chine constructed for the given context-free grammar. The same worst case complexity as Earley's rec- ognizer is achieved: O(IGl2na)-time and O(IG]n2) - space. However, in practice, it is superior to Earley's parser since all the prediction steps and some of the completion steps have been compiled before run-time. The parser can be modified to simulate other types of machines (such LR(k)-like or SLR-like automata). It can also be extended to handle unification based grammars using a similar method as that employed by Shieber (1985) for extending Earley's algorithm. Furthermore, the algorithm can be tuned to a par- ticular grammar and therefore be made more effi- cient by carefully determinizing portions of the non- deterministic machine while making sure that the number of states in not increased. These variants lead to more efficient parsers than the one based on the basic non-deterministic push-down machine. Fur- thermore, the same pseudo-parallel driver can be used for all these machines. We have adapted the technique presented in this paper to other grammatical formalism such as tree- adjoining grammars (Schabes, 1991). Bibliography A. V. Aho and J. D. Ullman. 1973. Theory of Pars- ing, Translation and Compiling. Vol I: Parsing. Prentice-Hall, Englewood Cliffs, NJ. Jay C. Earley. 1968. An Efficient Context-Free Pars- ing Algorithm. Ph.D. thesis, Carnegie-Mellon Uni- versity, Pittsburgh, PA. Jay C. Earley. 1970. An efficient context-free parsing algorithm. Commun. ACM, 13(2):94-102. S.L. Graham, M.A. Harrison, and W.L. Ruzzo. 1980. An improved context-free recognizer. ACM Trans- actions on Programming Languages and Systems, 2(3):415-462, July. Mark Johnson. 1989. The computational complex- ity of Tomlta's algorithm. In Proceedings of the International Workshop on Parsing Technologies, Pittsburgh, August. T. Kasami. 1965. An efficient recognition and syn- tax algorithm for context-free languages. Technical Report AF-CRL-65-758, Air Force Cambridge Re- search Laboratory, Bedford, MA. James R. Kipps. 1989. Analysis of Tomita's al- gorithm for general context-free parsing. In Pro- ceedings of the International Workshop on Parsing Technologies, Pittsburgh, August. D. E. Knuth. 1965. On the translation of languages from left to right. Information and Control, 8:607- 639. Bernard Lang. 1974. Deterministic tech- niques for efficient non-deterministic parsers. In Jacques Loeckx, editor, Automata, Languages and Programming, 2nd Colloquium, University of Saarbr~cken. Lecture Notes in Computer Science, Springer Verlag. 112 Hans Leiss. 1990. On Kilbury's modification of Ear- ley's algorithm. ACM Transactions on Program- ming Languages and Systems, 12(4):610-640, Oc- tober. Yves Schabes. 1991. Polynomial time and space shift-reduce parsing of context-free grammars and of tree-adjoining grammars. In preparation. t t e O Stuart M. Shieber. 1985. Using restriction to ex- 1 tend parsing algorithms for complex-feature-based 2 formalisms. In 23 rd Meeting of the Association 3 4 for Computational Linguistics (ACL '85), Chicago, s July. Masaru Tomita. 1985. Efficient Parsing for Natural Language, A Fast Algorithm for Practical Systems. Kluwer Academic Publishers. Masaru Tomita. 1987. An efficient augmented- context-free parsing algorithm. 
Computational Linguistics, 13:31-46.

D. H. Younger. 1967. Recognition and parsing of context-free languages in time n³. Information and Control, 10(2):189-208.

A An Example

We give an example that illustrates how the recognizer works. The grammar used for the example generates the language L = {a(ba)^n | n ≥ 0} and is infinitely ambiguous:

S → S b S
S → S
S → a

The set of states and the goto function are shown in Figure 1. In Figure 1, the set of states is {0, 1, 2, 3, 4, 5}. We have marked with a sharp sign (#) transitions on a non-kernel dotted rule. If an arc from s1 to s2 is labeled by a non-sharped symbol X, then s2 is in gotok(s1, X). If an arc from s1 to s2 is labeled by a sharped symbol X#, then s2 is in gotonk(s1, X).

Figure 1: Example of set of states and goto function.

The parsing table corresponding to this grammar is given in Figure 2.

state | a           | b           | $           | GOTOk(S) | GOTOnk(S)
  0   | nksh(3)     |             |             |          | {1,2}
  1   |             | ksh(4)      |             |          |
  2   | red(S→S)    | red(S→S)    | red(S→S)    |          |
  3   | red(S→a)    | red(S→a)    | red(S→a)    |          |
  4   | nksh(3)     |             |             | {5}      | {1,2}
  5   | red(S→SbS)  | red(S→SbS)  | red(S→SbS)  |          |

Figure 2: An LR(0) parsing table for L = {a(ba)^n | n ≥ 0}. The start state is 0, the set of final states is {2, 3, 5}. $ stands for the end marker of the input string.

The input string given to the recognizer is: ababa$ ($ is the end marker). The chart is shown in Figure 3. In Figure 3, an arc labeled by s from position i to position j denotes the item (s, i, j). The input is accepted since the final states 2 and 5 span the entire string ((2,0,5) ∈ C and (5,0,5) ∈ C). Notice that there are multiple arcs subsuming the same substring.

prefix read | items in the chart
(none)      | (0,0,0)
a           | (3,0,1) (2,0,1) (1,0,1)
ab          | (4,0,2)
aba         | (3,2,3) (2,0,3) (2,2,3) (1,0,3) (1,2,3) (5,0,3)
abab        | (4,0,4) (4,2,4)
ababa       | (3,4,5) (2,0,5) (2,2,5) (2,4,5) (1,0,5) (1,2,5) (1,4,5) (5,0,5) (5,2,5)

Figure 3: Chart created for the input a1 b2 a3 b4 a5 $.

113
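For readers who want to trace the appendix example mechanically, the following is a small, self-contained sketch of the pseudo-parallel driver of Section 2 run on this grammar. The tables transcribe the parsing table of Figure 2; everything else (function names, data layout) is an assumption of this sketch, not the authors' implementation.

```python
# Sketch of the chart-based driver for the grammar S -> S b S | S | a.
ACTION = {                      # (state, next terminal) -> set of moves
    (0, 'a'): {('nksh', 3)},
    (1, 'b'): {('ksh', 4)},
    (4, 'a'): {('nksh', 3)},
}
for t in ('a', 'b', '$'):       # reductions apply on every lookahead, including $
    ACTION[(2, t)] = {('red', 'S', 'S')}        # S -> S .
    ACTION[(3, t)] = {('red', 'S', 'a')}        # S -> a .
    ACTION[(5, t)] = {('red', 'S', 'SbS')}      # S -> S b S .
GOTO_K = {(4, 'S'): {5}}                        # kernel goto table
GOTO_NK = {(0, 'S'): {1, 2}, (4, 'S'): {1, 2}}  # non-kernel goto table
START, FINAL = 0, {2, 3, 5}

def recognize(tokens):
    n = len(tokens)
    word = list(tokens) + ['$']                 # end marker
    chart = {(START, 0, 0)}
    added = True
    while added:                                # apply the three moves to a fixed point
        added = False
        for (s, i, j) in list(chart):
            for move in ACTION.get((s, word[j]), ()):
                if move[0] == 'ksh':            # kernel shift: keep i
                    new = {(move[1], i, j + 1)}
                elif move[0] == 'nksh':         # non-kernel shift: start a new span at j
                    new = {(move[1], j, j + 1)}
                else:                           # reduce X -> beta, which spans (i, j]
                    X = move[1]
                    new = set()
                    for (s2, k, i2) in list(chart):
                        if i2 == i:
                            new |= {(r, k, j) for r in GOTO_K.get((s2, X), ())}
                            new |= {(r, i, j) for r in GOTO_NK.get((s2, X), ())}
                if not new <= chart:
                    chart |= new
                    added = True
    return any((s, 0, n) in chart for s in FINAL)

print(recognize('ababa'))   # True: the chart contains (2,0,5) and (5,0,5)
print(recognize('abab'))    # False
```

Run on ababa$, the chart built this way reproduces the items listed in Figure 3.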
1991
14
Head Corner Parsing for Discontinuous Constituency Gertjan van Noord Lehrstuhl ffir Computerlinguistik Universit~t des Saarlandes Im Stadtwald 15 D-6600 Saarbrficken 11, FRG [email protected] Abstract I describe a head-driven parser for a class of gram- mars that handle discontinuous constituency by a richer notion of string combination than ordinary concatenation. The parser is a generalization of the left-corner parser (Matsumoto et al., 1983) and can be used for grammars written in power- ful formalisms such as non-concatenative versions of HPSG (Pollard, 1984; Reape, 1989). 1 Introduction Although most formalisms in computational lin- guistics assume that phrases are built by string concatenation (eg. as in PATR II, GPSG, LFG and most versions of Categorial Grammar), this assumption is challenged in non-concatenative grammatical formalisms. In Pollard's dissertation several versions of 'qlead wrapping" are defined (Pollard, 1984). In the analysis of the Australian free word-order language Guugu Yimidhirr, Mark Johnson uses a 'combine' predicate in a DCG-like grammar that corresponds to the union of words (Johnson, 1985). Mike Reape uses an operation called 'sequence union' to analyse Germanic semi-free word or- der constructions (l~ape, 1989; Reape, 1990a). Other examples include Tree Adjoining Gram- mars (Joshi et al., 1975; Vijay-Shankar and Joshi, 1988), and versions of Categorial Gram- mar (Dowry, 1990) and references cited there. Motivation. There are several motivations for non-concatenative grammars. First, specialized string combination operations allow elegant lin- guistic accounts of phenomena that are otherwise notoriously hard. Examples are the analyses of Dutch cross serial dependencies by head wrap- ping or sequence union (Reape, 1990a). Furthermore, in non-concatenative grammars it is possible to relate (parts of) constituents that belong together semantically, but which are not adjacent. Hence such grammars facilitate a sim- ple compositional semantics. In CF-based gram- mars such phenomena usually are treated by com- plex 'threading' mechanisms. Non-concatenative grammatical formalisms may also be attractive from a computational point of view. It is easier to define generation algorithms if the semantics is built in a systemat- ically constrained way (van Noord, 1990b). The semantic-head-driven generation strategy (van Noord, 1989; Calder ef al., 1989; Shieber et al., 1989; van Noord, 1990a; Shieber et al., 1990) faces problems in case semantic heads are 'dis- placed', and this displacement is analyzed us- ing threading. However, in this paper I sketch a simple analysis of verb-second (an example of a displacement of semantic heads) by an oper- ation similar to head wrapping which a head- driven generator processes without any problems (or extensions) at all. Clearly, there are also some computational problems, because most 'standard' parsing strategies assume context-free concatena- tion of strings. These problems are the subject of this paper. The task. I will restrict the attention to a class of constraint-based formalisms, in which operations on strings are defined that are more powerful than concatenation, but which opera- tions are restricted to be nonerasing, and linear. The resulting class of systems can be character- ized as Linear Context-Free Rewriting Systems 114 (LCFRS), augmented with feature-structures (F- LCFRS). For a discussion of the properties of LCFRS without feature-structures, see (Vijay- Shanker et al., 1987). 
Note though that these properties do not carry over to the current sys- tem, because of the augmention with feature structures. As in LCFRS, the operations on strings in F- LCFRS can be characterized as follows. First, derived structures will be mapped onto a set of occurances of words; i.e. each derived structure 'knows' which words it 'dominates'. For example, each derived feature structure may contain an at- tribute 'phon' whose value is a list of atoms repre- senting the string it dominates. I will write w(F) for the set of occurances of words that the derived structure F dominates. Rules combine structures D1 ... Dn into a new structure M. Nonerasure re- quires that the union of w applied to each daugh- ter is a subset of w(M): }I U w(Di) C_ w(M) i=l Linearity requires that the difference of the car- dinalities of these sets is a constant factor; i.e. a rule may only introduce a fixed number of words syncategorematically: Iw(M)l- I U w(Oi)) = c,c a constant i=1 CF-based formalisms clearly fulfill this require- ment, as do Head Grammars, grammars using sequence union, and TAG's. I assume in the re- mainder of this paper that I.Jin=l w(Di) = w(M), for all rules other than lexical entries (i.e. all words are introduced on a terminal). Note though that a simple generalization of the algorithm pre- sented below handles the general case (along the lines of Shieber et al. (1989; 1990)by treating rules that introduce extra lexical material as non- chain-rules). Furthermore, I will assume that each rule has a designated daughter, called the head. Although I will not impose any restrictions on the head, it will turn out that the parsing strategy to be pro- posed will be very sensitive to the choice of heads, with the effect that F-LCFRS's in which the no- tion 'head' is defined in a systematic way (Pol- lard's Head Grammars, Reape's version of HPSG, Dowty's version of Categorial Grammar), may be much more efficiently parsed than other gram- mars. The notion seed of a parse tree is defined recursively in terms of the head. The seed of a tree will be the seed of its head. The seed of a terminal will be that terminal itself. Other approaches. In (Proudian and Pollard, 1985) a head-driven algorithm based on active chart parsing is described. The details of the al- gorithm are unclear from the paper which makes a comparison with our approach hard; it is not clear whether the parser indeed allows for ex- ample the head-wrapping operations of Pollard (1984). Reape presented two algorithms (Reape, 1990b) which are generalizations of a shift-reduce parser, and the CKY algorithm, for the same class of grammars. I present a head-driven bottom-up algorithm for F-LCFR grammars. The algorithm resembles the head-driven parser by Martin Kay (Kay, 1989), but is generalized in order to be used for this larger class of grammars. The disadvan- tages Kay noted for his parser do not carry over to this generalized version, as redundant search paths for CF-based grammars turn out to be gen- uine parts of the search space for F-LCFR gram- mars. The advantage of my algorithm is that it both employs bottom-up and top-down filtering in a straightforward way. The algorithm is closely re- lated to head-driven generators (van Noord, 1989; Calder et al., 1989; Shieber et al., 1989; van No- ord, 1990a; Shieber et ai., 1990). The algorithm proceeds in a bottom-up, head-driven fashion. 
In modern linguistic theories very much information is defined in lexical entries, whereas rules are re- duced to very general (and very uninformative) schemata. More information usually implies less search space, hence it is sensible to parse bottom- up in order to obtain useful information as soon as possible. Furthermore, in many linguistic the- ories a special daughter called the head deter- mines what kind of other daughters there may be. Therefore, it is also sensible to start with the head in order to know for what else you have to look for. As the parser proceeds from head to head it is furthermore possible to use powerful top-down predictions based on the usual head feature per- colations. Finally note that proceding bottom-up solves some non-termination problems, because in lexicalized theories it is often the case that infor- mation in lexical entries limit the recursive appli- cation of rules (eg. the size of the subcat list of 115 an entry determines the depth of the derivation tree of which this entry can be the seed). Before I present the parser in section 3, I will first present an example of a F-LCFR grammar, to obtain a flavor of the type of problems the parser handles reasonably well. 2 A sample grammar In this section I present a simple F-LCFR gram- mar for a (tiny) fragment of Dutch. As a caveat I want to stress that the purpose of the current section is to provide an example of possible input for the parser to be defined in the next section, rather than to provide an account of phenomena that is completely satisfactory from a linguistic point of view. Grammar rules are written as (pure) Prolog clauses. 1 Heads select arguments using a sub- cat list. Argument structures are specified lexi- cally and are percolated from head to head. Syn- tactic features are shared between heads (hence I make the simplifying assumption that head - functor, which may have to be revised in order to treat modification). In this grammar I use revised versions of Pollard's head wrapping op- erations to analyse cross serial dependency and verb second constructions. For a linguistic back- ground of these constructions and analyses, cf. Evers (1975), Koster (1975) and many others. Rules are defined as rule(Head,Mother,Other) or ~s rule(Mother) (for lexical entries), where Head represents the designated head daughter, Mother the mother category and Other a list of the other daughters. Each category is a term x(Syn,Subcat,Phon,Sem,Rule) where Syn describes the part of speech, Subcat 1 It should be stressed though that other unification grammar formalisms can be extended quite easily to en- code the same grammar. I implemented the algorithm for several grammars written in a version of PATR II without built-in string concate~aation. is a list of categories a category subcategorizes for, Phon describes the string that is dominated by this category, and Sere is the argument struc- ture associated with this category. Rule indicates which rule (i.e. version of the combine predicate eb to be defined below) should be applied; it gen- eralizes the 'Order' feature of UCG. The value of Phon is a term p(Left,Head,R£ght) where the fields in this term are difference lists of words. The first argument represents the string left of the head, the second argument represents the head and the third argument represents the string right of the head. Hence, the string associated with such a term is the concatenation of the three ar- guments from left to right. 
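As an aside for readers who do not read Prolog difference lists: the same three-field phonology value can be pictured with ordinary lists, as in the short Python sketch below. This is only an approximation of the encoding used in the grammar (plain lists instead of difference lists), and the names are ours, not the paper's.

```python
from typing import List, NamedTuple

class Phon(NamedTuple):
    left: List[str]    # words to the left of the head
    head: List[str]    # the head itself
    right: List[str]   # words to the right of the head

def string_of(p: Phon) -> List[str]:
    # the string dominated by a category: left ++ head ++ right
    return p.left + p.head + p.right

print(string_of(Phon(['jan'], ['slaapt'], [])))   # ['jan', 'slaapt']
```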
There is only one pa- rameterized, binary branching, rule in the gram- mar: rule(x(Syn,[x(C,L,P2,S,R)[L],PI,Sem,_), x(Syn,L,P,Sem,_), [x(C,L,P2,S,R)]) :- cb(R, PI, P2, P). In this rule the first element of the subcategoriza- tion list of the head is selected as the (only) other daughter of the mother of the rule. The syntac- tic and semantic features of the mother and the head are shared. Furthermore, the strings associ- ated with the two daughters of the rule are to be combined by the cb predicate. For simple (left or right) concatenation this predicate is defined as follows: cb(left, p(L4-L.H,R), p(L1-L2,L2-L3,L3-L4), p(L1-L,H,R)). cb(right, p(L,H,RI-R2), p(R2-R3,R3-R4,R4-R), p(L,H,RI-R)). Although this looks horrible for people not famil- iar with Prolog, the idea is really very simple. In the first case the string associated with the argument is appended to the left of the string left of the head; in the second case this string is appended to the right of the string right of the head. In a friendlier notation the examples may look like: 116 p(A1.A2.A3-L,H, R) /\ p(L,H,R) p(A1,A2,A3) p(L, H. R-A1.A2.A3) /\ p(L,H,R) p(A1,A2,A3) Lexical entries for the intransitive verb 'slaapt' (sleeps) and the transitive verb 'kust' (kisses) are defined as follows: rule( x(v,[x(n, [] ,_,A,left)], p(P-P,[slaaptlT]-T,R-R), sleep(A),_)). rule( x(v, [x(n, [] ,_,B,left), x(n, [] ,_,A,left)], p (P-P, [kust I T]-T, R-R), kiss(A,B),_)). Proper nouns are defined as: rule( x(n, [] ,p(P-P, [pier [T]-T,R-R), pete,_)). and a top category is defined as follows (comple- mentizers that have selected all arguments, i.e. sentences): top(x(comp,[] ...... )). Such a complementizer, eg. 'dat' (that) is defined as: rule( x(comp, Ix(v, [] ,_,A,right)], p(P-P, [dat I T]-T, R-R), that (A), _) ). The choice of datastructure for the value of Phon allows a simple definition of the verb raising (vr) version of the combine predicate that may be used for Dutch cross serial dependencies: cb(vr, p(L1-L2,H,R3-R), p(L2-L,R1-R2,R2-R3), p(L1-L,H,R1-R)). Here the head and right string of the argument are appended to the right, whereas the left string of the argument is appended to the left. Again, an illustration might help: p(L-AI , II, A2.A3.R) /\ p(L,li,X) p(A1,A2,A3) A raising verb, eg. 'ziet' (sees) is defined as: rule(x(v,[x(n, [] ,_,InfSubj,left), x(inf,[x( ...... InfSubj,_) ],_,B,vr), x(n, [] ,_,A,left)], p(P-P,[ziotIT]-T,R-R), see(A,B),_)). In this entry 'ziet' selects -- apart from its np- subject -- two objects, a np and a VP (with cat- egory inf). The inf still has an element in its subcat list; this element is controlled by the np (this is performed by the sharing of InfSubj). To derive the subordinate phrase 'dat jan piet marie ziet kussen' (that john sees pete kiss mary), the main verb 'ziet' first selects its rip-object 'piet' resulting in the string 'piet ziet'. Then it selects the infinitival 'marie kussen'. These two strings are combined into 'piet marie ziet kussen' (using the vr version of the cb predicate). The subject is selected resulting in the string 'jan pier marie ziet kussen'. This string is selected by the com- plementizer, resulting in 'dat jan piet marie ziet kussen'. The argument structure will be instan- tiated as that (sees (j elm, kiss (pete, mary))). In Dutch main clauses, there usually is no overt complementizer; instead the finite verb occupies the first position (in yes-no questions), or the second position (right after the topic; ordinary declarative sentences). 
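Before turning to that analysis, the three combination modes defined so far (left, right and vr) can be summarized in the same list-based picture used above. Again this is an approximation with plain lists rather than difference lists, so it only illustrates the word-order effect of each cb clause, not its actual Prolog encoding.

```python
# fun is the head's (left, head, right) triple, arg the argument's.
def combine(mode, fun, arg):
    fl, fh, fr = fun
    al, ah, ar = arg
    whole_arg = al + ah + ar
    if mode == 'left':      # whole argument string ends up left of the head's left part
        return (whole_arg + fl, fh, fr)
    if mode == 'right':     # whole argument string ends up right of the head's right part
        return (fl, fh, fr + whole_arg)
    if mode == 'vr':        # verb raising: the argument is wrapped around the head
        return (fl + al, fh, ah + ar + fr)
    raise ValueError(mode)

# 'piet ziet' (head ziet) combined with 'marie kussen' (head kussen):
print(combine('vr', (['piet'], ['ziet'], []), (['marie'], ['kussen'], [])))
# (['piet', 'marie'], ['ziet'], ['kussen']), i.e. 'piet marie ziet kussen'
```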
In the following analysis an empty complementizer selects an ordinary (fi- nite) v; the resulting string is formed by the fol- lowing definition of ¢b: cb(v2, p(A-A,B-B,C-C), p(R1-R2,H,R2-R), p(A-A,H,RI-R)). which may be illustrated with: 117 p( [], A2, A1.A3) /\ p([l, [], []) p(A1,A2,A3) The finite complementizer is defined as: xatle(xCcomp, [xCv, FI ,_,A,v2)], p(B-B,C-C,D-D), that (A),_)). Note that this analysis captures the special rela- tionship between complementizers and (fronted) finite verbs in Dutch. The sentence 'ziet jan piet marie kussen' is derived as follows (where the head of a string is represented in capitals): inversion: ZIET jan piet marie kussen / \ e left: jan piet marie ZIET kussen / \ raising: piet marie ZIET kussen JAN / \ left: piet ZIET left: marie KUSSEN /\ /\ ZIET PIET KUSSEN MARIE 3 The head corner parser This section describes the head-driven parsing algorithm for the type of grammars described above. The parser is a generalization of a left- corner parser. Such a parser, which may be called a 'head-corner' parser, ~ proceeds in a bottom-up way. Because the parser proceeds from head to head it is easy to use powerful top-down pre- dictions based on the usual head feature perco- lations, and subcategorization requirements that heads require from their arguments. In left-corner parsers (Matsumoto et aL, 1983) the first step of the algorithm is to select the left- 2This name is due to Pete White.lock. most word of a phrase. The parser then proceeds by proving that this word indeed can be the left- corner of the phrase. It does so by selecting a rule whose leftmost daughter unifies with the category of the word. It then parses other daughters of the rule recursively and then continues by connecting the mother category of that rule upwards, recur- sively. The left-corner algorithm can be general- ized to the class of grammars under consideration if we start with the seed of a phrase, instead of its leftmost word. Furthermore the connect predi- cate then connects smaller categories upwards by unifying them with the head of a rule. The first step of the algorithm consists of the prediction step: which lexical entry is the seed of the phrase? The first thing to note is that the words intro- duced by this lexical entry should be part of the input string, because of the nonerasure require- ment (we use the string as a 'guide' (Dymetman ef al., 1990) as in a left-corner parser, but we change the way in which lexical entries 'consume the guide'). Furthermore in most linguistic theo- ries it is assumed that certain features are shared between the mother and the head. I assume that the predicate head/2 defines these feature perco- lations; for the grammar of the foregoing section this predicate may be defined as: head(x(Syn ..... Sent,_), x(ST- ..... Sn,_)). As we will proceed from head to head these fea- tures will also be shared between the seed and the top-goal; hence we can use this definition to restrict lexical lookup by top-down prediction. 3 The first step in the algorithm is defined as: parse(Cat,PO,P) "- predict_lex(Cat,SmallCat,PO,P1), connect(SmallCat,Cat,P1,P). predict_lex(Cat,SmallCat,P0,P) :- head(Cat,Sma11Cat), rule(SmallCat), string(SmallCat,Words), subset(Words,PO,P). Instead of taking the first word from the current input string, the parser may select a lexical en- 3In the general case we need to compute the transitive closure of (restrictions of) pcesible mother-head relation- ships. 
The predicate 'head may also be used to compile rules into the format adopted here (i.e. using the defini- tion the compiler will identify the head of a rule). 118 try dominating a subset of the words occuring in the input string, provided this lexical entry can be the seed of the current goal. The predicate subset(L1,L2,L3) is true in case L1 is a subset of L2 with complement L3. 4 The second step of the algorithm, the connect part, is identical to the connect part of the left- corner parser, but instead of selecting the left- most daughter of a rule the head-corner parser selects the head of a rule: connect(X,X,P,P). connect(Small,Big,PO,P) :- rule(Small, Mid, Others), parse_rest(Others,PO,Pl), connect(Mid,Big,PI,P). parse_rest( [] ,P,P). parse_rest([HlT],PO,P) :- parse(H,PO,P1), parse_rest(T,P1,P). The predicate 'start_parse' starts the parse pro- cess, and requires furthermore that the string as- sociated with the category that has been found spans the input string in the right order. start_parse (String, Cat) : - top(Cat), parse (Cat, String, [] ), string(Cat, String). The definition of the predicate 'string' depends on the way strings are encoded in the grammar. The predicate relates linguistic objects and the string they dominate (as a list of words). I assume that each grammar provides a definition of this predicate. In the current grammar string/2 is defined as follows: 4In Prolog this predicate may he defined as follows: subset([],P,P). subset([HIT],P0,P):- selectchk(H, P0,Pl), subset(T, PI,P). select.chk (El, [El IP] ,P) :- !. select_chk (El, [HIP0], [HIP] ) :- select.chk (El, P0, P) . The cut in select.chkls necessary in case the same word occurs twice in the input string; without it the parser would not be 'minima/'; this could be changed by index- ins words w.r.t, their position, hut I will not assume this complication here, string(x( .... Phon .... ),Str):- copy_term(Phon,Phon2), str(Phon2,Str). str(p(P-P1,P1-P2,P2-[]),P). This predicate is complicated using the predi- cate copy_term/2 to prevent any side-effects to happen in the category. The parser thus needs two grammar specific predicates: head/2 and string/2. Example. To parse the sentence 'dat jan slaapt', the head corner parser will proceed as follows. The first call to 'parse' will look like: parse (x(colap, [] ...... ), [dat, j an, slaapt], [] ) The prediction step selects the lexical entry 'dat'. The next goal is to show that this lexical entry is the seed of the top goal; furthermore the string that still has to be covered is now [jan,slaapt]. Leaving details out the connect clause looks as : connect ( x(comp, Ix(v,.. ,right)],.. ), x(comp, 17,.. ), [jan, slaapt], [] ) The category of dat has to be matched with the head of a rule. Notice that dat subcatego- rises for a v with rule feature right. Hence the right version of the cb predicate applies, and the next goal is to parse the v for which this comple- mentizer subcategorizes, with input 'jan, slaapt'. Lexical lookup selects the word slaapt from this string. The word slaapt has to be shown to be the head of this v node, by the connect predi- cate. This time the left combination rule applies and the next goal consists in parsing a np (for which slaapt subcategorizes) with input string jan. This goal succeeds with an empty output string. Hence the argument of the rule has been found successfully and hence we need to connect the mother of the rule up to the v node. This suc- ceeds trivially, and therefore we now have found the v for which dat subcategorizes. 
Hence the next goal is to connect the complementizer with an empty subcat list up to the topgoal; again this succeeds trivially. Hence we obtain the instanti- ated version of the parse call: 119 parse(x(comp, [] ,p(P-P, [dat IT]-T, [jan,slaapt [q]-q), that (sleeps (j ohn) ),_), [dat, j an, slaapt], O ) and the predicate start_parse will succeed, yielding: Cat = x(comp, [] ,p(P-P, [dat [T]-T, [jan, slaapt IQ]-q), that (sleeps (john) ), _) 4 Discussion and Extensions Sound and Complete. The algorithm as it is defined is sound (assuming the Prolog interpreter is sound), and complete in the usual Prolog sense. Clearly the parser may enter an infinite loop (in case non branching rules are defined that may feed themselves or in case a grammar makes a heavy use of empty categories). However, in case the parser does terminate one can be sure that it has found all solutions. Furthermore the parser is minimal in the sense that it will return one solu- tion for each possible derivation (of course if sev- eral derivations yield identical results the parser will return this result as often as there are deriva- tions for it). Efficiency. The parser turns out to be quite ef- ficient in practice. There is one parameter that influences efficiency quite dramatically. If the no- tion 'syntactic head' implies that much syntac- tic information is shared between the head of a phrase and its mother, then the prediction step in the algorithm will be much better at 'predict- ing' the head of the phrase. If on the other hand the notion 'head' does not imply such feature per- colations, then the parser must predict the head randomly from the input string as no top-down information is available. Improvements. The efficiency of the parser can be improved by common Prolog and parsing techniques. Firstly, it is possible to compile the grammar rules, lexical entries and parser a bit fur- ther by (un)folding (eg. the string predicate can be applied to each lexical entry in a compilation stage). Secondly it is possible to integrate well- formed and non-well-formed subgoal tables in the parser, following the technique described by Mat- sumoto et al. (1983). The usefulness of this tech- nique strongly depends on the actual grammars that are being used. Finally, the current indexing of lexical entries is very bad indeed and can easily be improved drastically. In some grammars the string operations that are defined are not only monotonic with respect to the words they dominate, but also with respect to the order constraints that are defined between these words ('order-monotonic'). For example in Reape's sequence union operation the linear precedence constraints that are defined between elements of a daughter are by definition part of the linear precedence constraints of the mother. Note though that the analysis of verb second in the foregoing section uses a string operation that does not satisfy this restriction. For grammars that do satisfy this restriction it is possible to ex- tend the top-down prediction possibilities by the incorporation of an extra clause in the 'connect' predicate which will check that the phrase that has been analysed up to that point can become a substring of the top string. Acknowledgements This research was partly supported by SFB 314, Project N3 BiLD; and by the NBBI via the Eu- rotra project. I am grateful to Mike Reape for useful com- ments, and an anonymous reviewer of ACL, for pointing out the relevance of LCFRS. Bibliography Jonathan Calder, Mike Reape, and Henk Zeevat. 
An algorithm for generation in unification cat- egorial grammar. In Fourth Conference of the European Chapter of the Association for Com- putational Linguistics, pages 233-240, Manch- ester, 1989. David Dowty. Towards a minimalist theory of syntactic structure. In Proceedings of the Sym- posium on Discontinuous Constituency, ITK Tilburg, 1990. Marc Dymetman, Pierre Isabelle, and Francois Perrault. A symmetrical approach to parsing 120 and generation. In Proceedings of the 13th In- ternational Conference on Computational Lin- guistics (COLING), Helsinki, 1990. Arnold Evers. The Transformational Cycle in Dutch and German. PhD thesis, Rijksuniver- siteit Utrecht, 1975. Mark Johnson. Parsing with discontinuous constituents. In 23th Annual Meeting of the Association for Computational Linguistics, Chicago, 1985. A.K. Joshi, L.S. Levy, and M. Takahashi. Tree adjunct grammars. Journal Computer Systems Science, 10(1), 1975. Martin Kay. ttead driven parsing. In Proceedings of Workshop on Parsing Technologies, Pitts- burgh, 1989. Jan Koster. Dutch as an SOV language. Linguis- tic Analysis, 1, 1975. Y. Matsumoto, H. Tanaka, It. Itirakawa, It. Miyoshi, and H. Yasukawa. BUP: a bottom up parser embedded in Prolog. New Genera- tion Computing, 1(2), 1983. Carl Pollard. Generalized Context-Free Gram- mars, Head Grammars, and Natural Language. PhD thesis, Stanford, 1984. C. Proudian and C. Pollard. Parsing head-driven phrase structure grammar. In P3th Annual Meeting of the Association for Computational Linguistics, Chicago, 1985. Mike Reape. A logical treatment .of semi-free word order and bounded discontinuous con- stituency. In Fourth Conference of the Euro- pean Chapter of the Association for Computa- tional Linguistics, UMIST Manchester, 1989. Mike Reape. Getting things in order. In Proceed- ings of the Symposium on Discontinuous Con- stituency, ITK Tilburg, 1990. Mike Reape. Parsing bounded discontinous con- stituents: Generalisations of the shift-reduce and CKY algorithms, 1990. Paper presented at the first CLIN meeting, October 26, OTS Utrecht. Stuart M. Shieber, Gertjan van Noord, Robert C. Moore, and Fernando C.N. Pereira. A semantic-head-driven generation algorithm for unification based formalisms. In 27th Annual Meeting of the Association for Computational Linguistics, Vancouver, 1989. Stuart M. Shieber, Gertjan van Noord, Robert C. Moore, and Fernando C.N. Pereira. Semantic- head-driven generation. Computational Lin- guistics, 16(1), 1990. Gertjan van Noord. BUG: A directed bottom- up generator for unification based formalisms. Working Papers in Natural Language Process- ing, Katholieke Universiteit Leuven, Stichting Taaltechnologie Utrecht, 4, 1989. Gertjan van Noord. An overview of head- driven bottom-up generation. In Robert Dale, Chris Mellish, and Michael Zoek, editors, Cur- rent Research in Natural Language Generation. Academic Press, 1990. Gertjan van Noord. Reversible unifieation-based machine translation. In Proceedings of the 18th International Conference on Computa- tional Linguistics (COLING), Helsinki, 1990. K. Vijay-Shankar and A. Joshi. Feature struc- ture based tree adjoining grammar. In Pro- ceedings of the 12th International Conference on Computational Linguistics (COLING), Bu- dapest, 1988. K. Vijay-Shanker, David J. Weir, and Aravind K. Joshi. Characterizing structural descriptions produced by various grammatical formalisms. In 25th Annual Meeting of the Association for Computational Linguistics, Stanford, 1987. 121
1991
15
The Acquisition and Application of Context Sensitive Grammar for English Robert F. Simmons and Yeong-Ho Yu @cs.texas.edu Abstract Department of Computer Sciences, AI Lab University of Texas, Austin Tx 78712 A system is described for acquiring a context- sensitive, phrase structure grammar which is applied by a best-path, bottom-up, deterministic parser. The gram- mar was based on English news stories and a high degree of success in parsing is reported. Overall, this research concludes that CSG is a computationally and concep- tually tractable approach to the construction of phrase structure grammar for news story text. 1 1 Introduction Although many papers report natural language process- ing systems based in part on syntactic analysis, their au- thors typically do not emphasize the complexity of the parsing and grammar acquisition processes that were in- volved. The casual reader might suppose that parsing is a well understood, minor aspect in such research. In fact, parsers for natural language are generally very compli- cated programs with complexity at best of O(n 3) where n is the number of words in a sentence. The gram- mars they usually use are technically, "augmented con- text free" where the simplicity of the context-free form is augmented by feature tests, transformations, and occa- sionally arbitrary programs. The combination of even an efficient parser with such intricate grammars may greatly increase the computational complexity of the sys- tem [Tomita 1985]. It is extremely difficult to write such grammars and they must frequently be revised to maintain internal consistency when applied to new texts. In this paper we present an alternative approach using context-sensitive grammar to enable preference parsing and rapid acquisition of CSG from example parsings of newspaper stories. Chomsky[1957] defined a hierarchy of grammars in- cluding context-free and context-sensitive ones. For nat- ural language a grammar distinguishes terminal, single element constituents such as parts of speech from non- terminals which are phrase-names such as NP, VP, AD- VPH, or SNT 2 signifying multiple constituents. 1 This work was partially supported by the Army Research Office under contract DAAG29-84-K-0060. ~NounPhrase, VerbPhrase, AdverbialPhrase, Sentence A context-free grammar production is characterized as a rewrite rule where a non-terminal element as a left- side is rewritten as multiple symbols on the right. Snt -* NP + VP Such rules may be augmented by constraints to limit their application to relevant contexts. Snt --* NP + VP / anim(np), agree(nbr(np),nbr(vp)) To the right of the slash mark, the constraints are applied by an interpretive program and even arbitrary code may be included; in this case the interpreter would recognize that the NP must be animate and there must be agree- ment in number between the NP and the VP. Since this is such a flexible and expressive approach, its many vari- ations have found much use in application to natural lan- guage applications and there is a broad literature on Aug- mented Phrase Structure Grammar [Gazdar et. al. 1985], Unification Grammars of various types [Shieber 1986], and Augmented Transition Networks [Allen, J. 1987, Sim- moils 1984]. In context-sensitive grammars, the productions are restricted to rewrite rules of the form, uXv ---* uYv where u and v are context strings of terminals or nonter- minals, and X is a non-terminal and Y is a non-empty string . That is, the symbol X may be rewritten as as the string Y in the context u-..v. 
More generally, the right-hand side of a context-sensitive rule must contain at least as many symbols as the left-hand side. Excepting Joshi's Tree Adjoining Grammars which are shown to be "mildly context-sensitive," [Joshi 1987] context-sensitive grammars found little or no use among natural language processing (NLP) researchers until the reoccurrance of interest in Neural Network computa- tion. One of the first suggestions of their potential utility came from Sejnowski and Rosenberg's NETtalk [1988], where seven-character contexts were largely suf- ficient to map each character of a printed word into its corresponding phoneme -- where each character ac- tually maps in various contexts into several different phonemes. For accomplishing linguistic case analyses McClelland and Kawamoto [1986] and Miikulainen and 122 Dyer [1989] used the entire context of phrases and sen- tences to map string contexts into case structures. Robert Allen [1987] mapped nine-word sentences of English into Spanish translations, and Yu and Simmons [1990] ac- complished context sensitive translations between English and German. It was apparent that the contexts in which a word occurred provided information to a neural net- work that was sufficient to select correct word sense and syntactic structure for otherwise ambiguous usages of lan- guage. An explicit use of context-sensitive grammar was de- veloped by Simmons and Yu [1990] to solve the prob- lem of accepting indefinitely long, recursively embedded strings of language for training a neural network. How- ever although the resulting neural network was trained as a satisfactory grammar, there was a problem of scale- up. Training the network for even 2000 rules took several days, and it was foreseen that the cost of training for 10-20 thousand rules would be prohibitive. This led us to investigate the hypothesis that storing a context-sensitive grammar in a hash-table and accessing it using a scoring function to select the rule that best matched a sentence context would be a superior approach. In this paper we describe a series of experiments in acquiring context-sensitive grammars (CSG) from news- paper stories, and a deterministic parsing system that uses a scoring function to select the best matching con- text sensitive rules from a hash-table. We have accumu- lated 4000 rules from 92 sentences and found the resulting CSG to be remarkably accurate in computing exactly the parse structures that were preferred by the linguist who based the grammar on his understanding of the text. We show that the resulting grammar generalizes well to new text and compresses to a fraction of the example training rules. 2 Context-Sensitive Parsing The simplest form of parser applies two operations shift or reduce to an input string and a stack. A sequence of elements on the stack may be reduced -- rewritten as a single symbol, or a new element may be shifted from the input to the stack. Whenever a reduce occurs, a subtree of the parse is constructed, dominated by the new symbol and placed on the stack. The input and the stack may both be arbitrarily long, but the parser need only consult the top elements of the stack and of the input. The parse is complete when the input string is empty and the stack contains only the root symbol of the parse tree. Such a simple approach to parsing has been used frequently to introduce methods of CFG parsing in texts on computer analysis of natural language [J. Allen 1987], but it works equally well with CSG. 
In our application to phrase struc- ture analysis, we further constrain the reduce operation to refer to only the top two elements of the stack 2.1 Phrase Structure Analysis with CFG For shift/reduce parsing, a phrase structure anMysis takes the form of a sequence of states, each comprising a condi- tion of the stack and the input string. The final state in the parse is an empty input string and a stack containing only the root symbol, SNT. In an unambiguous analy- sis, each state is followed by exactly one other; thus each state can be viewed as the left-half of a CSG production whose right-half is the succeeding state. stacksinpu~ ~ ::¢, s~ack,+ l inpu~,+ l News story sentences, however, may be very long, sometimes exceeding fifty words and the resulting parse states would make cumbersome rules of varying lengths. To obtain manageable rules we limit the stack and input parts of the state to five symbols each, forming a ten sym- bol pattern for each state of the parse. In the example of Figure 1 we separate the stack and input parts with the symbol "*", as we illustrate the basic idea on the sentence "The late launch from Alaska delayed interception." The symbol b stands for blank, ax-1; for article, adj for adjec- tive, p for preposition, n for noun, and v for verb. The syntactic classes are assigned by dictionary lookup. The analysis terminates successfully with an empty input string and the single symbol "snt" on the stack. Note that the first four operations can be described as shifts followed by the two reductions, adj n --* np and art np --, up. Subsequently the p and n were shifted onto the stack and then reduced to a pp; then the np and pp on the stack were reduced to an np, followed by the shifting of v and n, their reduction to vp, and a final reduction of np vp ~ snt. The grammar could now be recorded as pairs of suc- cessive states as below: b b b np p* nvn bb--*b bnpp n* vn b bb b b np p n* v nb b b--~ b b b np pp* v n bbb but some economy can be achieved by summarizing the right-half of a rule as the operations, shift or reduce, that produce it from the left-half. So for the example imme- diately above, we record: hbbnpp*nvnbb--~(S) bbnp p n* vn b b b--* (Rpp) where S shifts and (R pp) replaces the top two elements of the stack with pp to form the next state of the parse, Thus we create a windowed confexf of 10 symbols as the left half of a rule and an operation as the right half. Note that if the stack were limited to the top two elements, and the input to a single element, the rule system would reduce to a CFG; thus this CSG embeds a CFG. 123 The late launch from Alaska art ads n p n delayed interception. V n b b b b b * ~t ads n p n b b b b ~t * adS n p n v b b b ~t ads * n p n v n b b ~t ads n * p n v n b b b b ~t up* p n v n b bbbbnp*pnvnb bbbnpp*nvnbb bbnppn*vnbbb b b b np pp * v n b b b bbbbnp*vnbbb bbbnpv*nbbbb bbnpvn*bbbbb bbbnp~*bbbbb b b b b snt * b b b b b Figure 1: Successive Stack/Input States in a Parse 2.2 Algorithm for Shift/Reduce Parser The algorithm used by the Shift/Reduce parser is de- scribed in Figure 2. Essentially, the algorithm shifts el- ements from the input onto the stack under the control of the CSG productions. It can be observed that unlike most grammars which include only rules for reductions, this one has rules for recognizing shifts as well. 
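To make the windowed representation concrete, the sketch below shows one way the ten-symbol left half and its associated operation could be recorded. It is an illustration only (Python, with invented names), not the implementation described in the paper.

```python
BLANK = 'b'

def window(stack, inp):
    """Ten-symbol context: top five stack symbols, then first five input symbols."""
    left = ([BLANK] * 5 + stack)[-5:]     # the stack top is the rightmost symbol
    right = (inp + [BLANK] * 5)[:5]
    return tuple(left + right)

rules = {}
# the two example rules recorded in the text:
rules[window(['np', 'p'], ['n', 'v', 'n'])] = ('S',)        # shift
rules[window(['np', 'p', 'n'], ['v', 'n'])] = ('R', 'pp')   # reduce p n -> pp

print(window(['np', 'p'], ['n', 'v', 'n']))
# ('b', 'b', 'b', 'np', 'p', 'n', 'v', 'n', 'b', 'b')
```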
The reductions always apply to the top two elements of the stack, and it is often the case that in one context a pair of stack elements leads to a shift, but in another context the same pair can be reduced. An essential aspect of this algorithm is to consult the CSG to find the left-half of a rule that matches the sentence context. The most important part of the rule is the top two stack elements, but for any such pair there may be multiple contexts leading to shifts or various reductions, so it is the other eight context elements that decide which rule is most applicable to the current state of the parse. Since many thousands of contexts can exist, an exact match cannot always be expected and therefore a scoring function must be used to discover the best matching rule.

CS-SR-Parser(Input, Csg)
  ; Input is a string of syntactic classes.
  ; Csg is the given CSG production rules.
  Stack := empty
  do until (Input = empty and Stack = (SNT))
    Windowed-context := Append(Top-five(Stack), First-five(Input))
    Operation := Consult-CSG(Windowed-context, Csg)
    if First(Operation) = SHIFT
      then Stack := Push(First(Input), Stack)
           Input := Rest(Input)
      else Stack := Push(Second(Operation), Pop(Pop(Stack)))
  end do

The functions Top-five and First-five return the lists of the top (or first) five elements of the Stack and the Input respectively. If there are not enough elements, these procedures pad with blanks. The function Append concatenates two lists into one. Consult-CSG consults the given CSG rules to find the next operation to take; the details of this function are the subject of the next section. Push and Pop add or delete one element to/from a stack, while First and Second return the first or second elements of a list, respectively. Rest returns the given list minus the first element.

Figure 2: Context Sensitive Shift Reduce Parser

One of the exciting aspects of neural network research is the ability of a trained NN system to discover closest matches from a set of patterns to a given one. We studied Sejnowski and Rosenberg's [1988] analyses of the weight matrices resulting from training NETtalk. They reported that the weight matrix had maximum weights relating the character in the central window to the output phoneme, with weights for the surrounding context characters falling off with distance from the central window. We designed a similar function with maximum weights being assigned to the top two stack elements and weights decreasing in both directions with distance from those positions. The scoring function is developed as follows.

Let $\mathcal{R}$ be the set of vectors $\{R_1, R_2, \ldots, R_n\}$, where $R_i$ is the vector $[r_1, r_2, \ldots, r_{10}]$. Let $C$ be the vector $[c_1, c_2, \ldots, c_{10}]$. Let $\mu(c_i, r_i)$ be a matching function whose value is 1 if $c_i = r_i$, and 0 otherwise. $\mathcal{R}$ is the entire set of rules, $R_i$ is (the left-half of) a particular rule, and $C$ is the parse context. Then $\mathcal{R}'$ is the subset of $\mathcal{R}$ where, if $R_i \in \mathcal{R}'$, then $\mu(c_4, r_4) \cdot \mu(c_5, r_5) = 1$. The statement above is achieved by accessing the hash table with the top two elements of the stack, $c_4, c_5$, to produce the set $\mathcal{R}'$. We can now define the scoring function for each $R_i \in \mathcal{R}'$:

$$\mathrm{Score} = \sum_{i=1}^{3} \mu(c_i, r_i) \cdot i \; + \; \sum_{i=6}^{10} \mu(c_i, r_i) \cdot (11 - i)$$

The first summation scores the matches between the stack elements of the rule and the current context while the second summation scores the matches between the elements in the input string. If two items of the rule and context match, the total score is increased by the weight assigned to that position.
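Under the assumption (ours) that the rules are stored in a Python dictionary keyed by the top two stack symbols, the retrieval step and the scoring function described above can be sketched as follows. The weights 1..3 for the deeper stack positions and 5..1 for the input positions mirror the two summations, so that a perfect match scores 21; the function, rule format, and demonstration data are illustrative only.

from collections import defaultdict

def add_rule(table, context, operation):
    # Hash on positions 4 and 5 (1-based), i.e. the top two stack symbols.
    table[(context[3], context[4])].append((context, operation))

def score(context, rule_context):
    # Positions 4 and 5 are guaranteed to match by the hashing, so only the
    # remaining eight positions contribute to the score.
    s = sum(i for i in range(1, 4) if context[i-1] == rule_context[i-1])
    s += sum(11 - i for i in range(6, 11) if context[i-1] == rule_context[i-1])
    return s            # maximum 3+2+1 + 5+4+3+2+1 = 21

def consult_csg(context, table):
    candidates = table.get((context[3], context[4]), [])
    if not candidates:
        return None
    return max(candidates, key=lambda rc: score(context, rc[0]))[1]

# Tiny demonstration with the two rules recorded earlier.
table = defaultdict(list)
add_rule(table, ('b','b','b','np','p','n','v','n','b','b'), ('S',))
add_rule(table, ('b','b','np','p','n','v','n','b','b','b'), ('R', 'pp'))
print(consult_csg(('b','b','b','np','p','n','v','n','b','b'), table))   # ('S',)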
The maximum score for a perfect match is 21 according to the above formula. From several experiments, varying the length of vec- tor and the weights, particularly those assigned to blanks, it has been determined that this formula gave the best performance among those tested. More importantly, it has worked well in the current phrase structure and case analysis experiments. 3 Experiments with CSG To support the claim that CSG systems are an improve- ment over Augmented CFG, a number of questions need be answered. • Can they be acquired easily? • Do they reduce ambiguity in phrase structure anal- ysis? • How well do CSG rules generalize to new texts? • How large is the CSG that encompasses most of the syntactic structures in news stories? 3.1 Acquisition of CSG It has been shown that our CSG productions are essen- tially a recording of the states from parsing sentences. Thus it was easy to construct a grammar acquisition sys- tem to present the successive states of a sentence to a lin- guist user, accepting and recording the linguist's judge- ments of shift or reduce. This system has evolved to a sophisticated grammar acquisition/editing program that prompts the user on the basis of the rules best fitting the current sentence context. It's lexicon also suggests the choice of syntactic class for words in context. Generally it reduces the linguistic task of constructing a grammar to the much simpler task of deciding for a given context whether to shift input or to rewrite the top elements of the stack as a new constituent. It reduces a vastly complex task of grammar writing to relatively simple, concrete judgements that can be made easily and reliably. Using the acquisition system, it has been possible for linguist users to provide example parses at the rate of two or three sentences per hour. The system collects the resulting states in the form of CSG productions, allows the user to edit them, and to use them for examining the resulting phrase structure tree for a sentence. To obtain the 4000+ rules examined below required only about four man-weeks of effort (much of which was initial training time.) 3.2 Reduced Ambiguity in Parsing • Over the course of this study six texts were accumulated. The first two were brief disease descriptions from a youth encyclopedia; the remaining four were newspaper texts. Figure 1 characterizes each article by the number of CSG rules or states, number of sentences, the range of sentence lengths, and the average number of words per sentence. Text St~teJ I Seateaces 'Wdl/Snt Mn-Wdl/Sat Hep&tlt/l 236 12 4-19 10.3 Measles 316 I0 4-25 16.3 News-Stor}~ 470 I0 9-51 23.6 APWire-Robots i005 21 11-53 26.0 APW~re-Rocket 1437 25 6-47 29.2 APWire-Shuttle 598 14 12-32 21.9 Total 4062 I 93 4-53 22.8 Table 1: Characteristics of the Text Corpus It can be seen that the news stories were fairly com- plex texts with average sentence lengths ranging from 22 to 29 words per sentence. A total of 92 sentences in over 2000 words of text resulted in 4062 CSG productions. It was noted earlier that in each CFG production there is an embedded context-free rule and that the pri- mary function of the other eight symbols for parsing is to select the rule that best applies to the current sentence state. When the linguist makes the judgement of shift or reduce, he or she is considering the entire meaning of the sentence to do so, and is therefore specifying a semanti- cally preferred parse. 
The parsing system has access only to limited syntactic information, five syntactic symbols on the stack, and five input word classes and the parsing algorithm follows only a single path. How well does it work? The CSG was used to parse the entire 92 sentences with the algorithm described in Figure 2 augmented with instrumentation to compare the constituents the parser found with those the linguist prescribed. 88 of the 92 sentences exactly matched the linguist's parse. The other four cases resulted in perfectly reasonable complete parse trees that differed in minor ways from the linguist's pre- 125 scription. As to whether any of the 92 parses are truly "correct", that is a question that linguists could only de- cide after considerable study and discussion. Our claim is only that the grammars we write provide our own pre- ferred interpretations -- useful and meaningful segmen- tation of sentences into trees of syntactic constituents. Figure 3 displays the tree of a sentence as analyzed by the parser using CSG. It is a very pleasant surprise to discover that using context sensitive productions, an ele- mentary, deterministic, parsing algorithm is adequate to provide (almost) perfectly correct, unambiguous analyses for the entire text studied. Another mission soon scheduled that also would have pri- ority over the shuttle is the first firing of a trident two intercontinental range missile from a submerged subma- rine. h --vlN ~, --p Figure 3: Sentence Parse 3.3 Generalization of CSG One of the first questions considered was what percent of new constituents would be recognized by various accumu- lations of CSG. We used a system called union-grammar that would only add a rule to the grammar if the gram- mar did not already predict its operation. The black line of Figure 4 shows successive accumulations of 400-rule segments of the grammar after randomizing the ordering of the rules. Of the first 400 CS rules 50% were new; and for an accumulation of 800, only 35% were new. When 2000 rules had been experienced the curve is flattening to an average of 20% new rules. This curve tells us that if the acquisition system uses the current grammar to sug- gest operations to the linguist, it will be correct about 4 out of 5 times and so reduce the linguist's efforts accord- ingly. The curve also suggests that our collection of rule examples has about 80% redundancy in that earlier rules can predict newcomers at that level of accuracy. On the down-side, though, it shows that only 80% of the con- stituents of a new sentence will be recognized, and thus the probability of a correct parse for a sentence never seen before is very small. We experimented with a grammar of 3000 rules to attempt to parse the new shuttle text, but found that only 2 of 14 new sentences were parsed correctly. J oo 7o !° ! ,o I- ra Io o I ....... t .......... ...... i ...... ~mnlb~ d W ~ Figure 4: Generalization of CSG Rules If two parsing grammars equally well account for the same sentences, the one with fewer rules is less redundant, more general, and the one to be preferred. We used union- grammar to construct the "minimal grammar" with suc- cessive passes through 3430 rules, as shown in Figure2. The first pass found 856 rules would account for the rest. A second pass of the 3430 rules against the 856 extracted by the first pass resulted in the addition of 26 more rules, adding rules that although recognized by earlier rules found interference as a result of later ones. 
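The union-grammar filter used for these passes can be sketched as below. This is our own reconstruction, and it reuses the consult_csg helper, the rule format, and the hash-table layout from the earlier sketch. Each pass re-checks every example rule against the rules kept so far and retains only those whose operation is not already predicted; repeating passes until nothing new is added yields the compressed "minimal grammar".

from collections import defaultdict

def predicts(kept_table, context, operation):
    return consult_csg(context, kept_table) == operation

def union_pass(all_rules, kept_table):
    """Add only the rules whose operation is not already predicted by the
    rules kept so far; return how many rules were added in this pass."""
    added = 0
    for context, operation in all_rules:
        bucket = kept_table[(context[3], context[4])]
        if (context, operation) not in bucket and \
           not predicts(kept_table, context, operation):
            bucket.append((context, operation))
            added += 1
    return added

def minimal_grammar(all_rules):
    kept = defaultdict(list)
    while union_pass(all_rules, kept) > 0:   # repeat passes until stable
        pass
    return kept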
The remaining 8 rules discovered in the next pass are apparently identical patterns resulting in differing operations -- contradicto- ries that need to be studied and resolved. The resulting minimal grammar totaling 895 rules succeeds in parsing the texts with only occasional minor differences from the linguist's prescriptions. We must emphasize that the un, retained rules are not identical but only similar to those in the minimal grammar. 126 I Pass I Unretained 2574 3404 3422 3425 Retained Total Rules 856 26 8 5 3430 3430 3430 3430 Table 2: Four Passes with Minimal Grammar 3.4 Estimated Size of Completed CSG A question, central to the whole argument for the utility of CSG, is how many rules will be required to account for the range of structures found in news story text? Refer again to Figure 4 to try to estimate when the black line, CS, will intersect the abscissa. It is apparent that more data is needed to make a reliable prediction. Let us consider the gray line, labeled CF that shows how many new context-free rules are accumulated for 400 CSG rule increments. This line rapidly decreases to about 5% new CFG rules at the accumulation of 4000 CSG pro- ductions. We must recall that it is the embedded context- free binary rule that is carrying the most weight in deter- mining a constituent, so let us notice some of the CFG properties. We allow 64 symbols in our phrase structure analy- sis. That means, there are 642 possible combinations for the top two elements of the stack. For each combination, there are 65 possible operations3: a shift or a reduction to another symbol. Among 4000 CSG rules, we studied how many different CFG rules can be derived by eliminating the context. We found 551 different CFG rules that used 421 different left-side pairs of symbols. This shows that a given context free pair of symbols averages 1.3 different operations. Then, as we did with CSG rules, we measured how many new CFG rules were added in an accumulative fash- ion. The shaded line of Figure 4 shows the result. No- tice that the line has descended to about 5% errors at 4000 rules. To make an extrapolation easier, a log-log graph shows the same data in Figure 5. From this graph, it can be predicted that, after about 25000 CSG rules are accumulated, the grammar will encompass an Mmost complete CFG component. Beyond this point, additional CSG rules will add no new CFG rules, but only fine-tune the grammar so that it can resolve ambiguities more ef- fectively. Also, it is our belief that, after the CSG reaches that point, a multi-path, beam-search parser would be 3 Actually, there are many fewer than 65 possible operations since the stack elements can be reduced only to non-terminal symbols. I 1 ! ;,o ! 1 ... J IGO 1,000 4,0o0 10,000 2s.ooo 100,000 Nbr of Aaaumuktted Ruloe Exlrq~lalon, lie gray Ine, predc~ Ilat 99% of ~ COnlmxt Iree pldrs vdll be achlemcl ~ ~ ac~mlUlalon d 2~.000 c~nte~ sensiUve rules. Figure 5: Log-Log Plot of New CFG Rules able to parse most newswire stories very reliably. This belief is based on our observation that most failures in parsing new sentences with a single-path parser result from a dead-end sequence; i.e., by making a wrong choice in the middle, the parsing ends up with a state where no rule is applicable. The beam-search parser should be able to recover from this failure and produce a reasonable parse. 4 Discussion and Conclusions NeurM network research showed us the power of con- textuM elements for selecting preferred word-sense and parse-structure in context. 
But since NN training is still a laborious, computation-intensive process that does not scale well into tens of thousands of patterns, we chose to study context-sensitive grammar in the ordinary context of sequential parsing with a hash-table representation of the grammar, and a scoring function to select the rule most applicable to a current sentence context. We find that context-sensitive, binary phrase structure rules with a context comprising the three preceding stack symbols and the oncoming five input symbols, stack1-3 binary-rule inputl_5 --~ operation provide unexpected advantages for acquisition, the com- putation of preferred parsings, and generalization. 127 A linguist constructs a CSG with the acquisition sys- tem by demonstrating successive states in parsing sen- tences. The acquisition system presents the state result- ing from each shift/reduce operation that the linguist pre- scribes, and it uses the total grammar so far accumulated to find the best matching rule and so prompt the linguist for the next decision. As a result CSG acquisition is a rapid process that requires only that a linguist decide for a given state to reduce the top two elements of the stack, or to shift a new input element onto the stack. Since the current grammar is about 80% accurate in its predictions, the linguist's task is reduced by the prompts to an alert observation and occasional correction of the acquisition system's choices. The parser is a bottom-up, determinis- tic, shift/reduce program that finds a best sequence of parse states for a sentence according to the CSG. When we instrument the parser to compare the constituents it finds with those originally prescribed by a linguist, we discover almost perfect correspondence. We observe that the linguist used judgements based on understanding the meaning of the sentence and that the parser using the contextual elements of the state and matching rules can successfully reconstruct the linguist's parse, thus provid- ing a purely syntactic approach to preference parsing. The generalization capabilities of the CSG are strong. With the accumulation 2-3 thousand example rules, the system is able to predict correctly 80% of sub- sequent parse states. When the grammar is compressed by storing only rules that the accumulation does not al- ready correctly predict, we observe a compression from 3430 to 895 rules, a ratio of 3.8 to 1. We extrapolate from the accumulation of our present 4000 rules to predict that about 25 thousand rule examples should approach com- pletion of the CF grammar for the syntactic structures usually found in news stories. For additional fine tun- ing of the context selection we might suppose we create a total of 40 thousand example rules. Then if the 3.8/1 compression ratio holds for this large a set of rules, we could expect our final grammar to be reduced from 40 to about 10 thousand context sensitive rules. In view of the large combinatoric space provided by the ten symbol parse states -- it could be as large as 641° -- our prediction of 25-40 thousand examples as mainly sufficient for news stories seems contra~intuitive. But our present grammar seems to have accumulated 95% of the binary context free rules -- 551 of about 4096 possible binaries or 13% of the possibility space. If 551 is in fact 95% then the total number of binary rules is about 580 or only 14% of the combinatoric space for binary rules. 
In the compressed grammar, there are only 421 different left-side patterns for the 551 rules, and we can notice that each context-free pair of symbols averages only 1.3 differ- ent operations. We interpret this to mean that we need only enough context patterns to distinguish the different operations associated with binary combinations of the top two stack elements; since there are fewer than an average of two, it appears reasonable to expect that the context- sensitive portion of the grammar will not be excessively large. We conclude, • Context sensitive grammar is a conceptually and computationally tractable approach to unambigu- ous parsing of news stories. • The context of the CSG rules in conjunction with a scoring formula that selects the rule best matching the current sentence context allow a deterministic parser to provide preferred parses reflecting a lin- guist's meaning-based judgements. • The CSG acquisition system simplifies a linguist's judgements and allows rapid accumulation of large grammars. • CSG grammar generalizes in a satisfactory fashion and our studies predict that a nearly-complete ac- counting for syntactic phrase structures of news sto- ries can be accomplished with about 25 thousand example rules. REFERENCES Alien, Robert, "Several Studies on Natural Language and Back Propagation", Proc. Int. Conf. on Neural Networks, San Diego, Calif., 1987. Allen, James, Natural Language Understanding, Ben- jamin Cummings, Menlo Park, Calif., 1987.. Chomsky, Noam, Syntactic Structures, Mouton, The Hague, 1957. Gazdar, Gerald, Klein E., Pullum G., and Sag I., Gen- eralized Phrase Structure Grammar, Harvard Univ. Press, Boston, 1985. Joshi, Aravind K., "An Introduction to Tree Adjoining Grammars." In Manaster-Ramer Ed.),Mathematics of Language, John Benjamins, msterdam, Netherlands, 1985. McClelland, J.L., and Kawamoto, A.H., "Mechanisms of Sentence Processing: Assigning Roles to Con- stituents," In McClelland J. L. and Rumelhart, D. E., Parallel Distributed Processing, Vol. 2. 1986. 128 Miikkulainen, Risto, and Dyer, M., "A Modular Neural Network Architecture for Sequential Paraphrasing of Script-Based Stories", Artif. Intell. Lab., Dept. Comp. Sci., UCLA, 1989. Shieber, Stuart M., An Introduction to Unification Based Approaches to Grammar, Chicago Univ. Press, Chicago, 1986. Sejnowski, Terrence J., and Rosenberg, C., "NETtalk: A Parallel Network that Learns to Read Aloud", in Anderson and Rosenfeld (Eds.) Nearocomputing, MIT Press., Cambridge Mass., 1988. Simmons, Robert F. Computations from the English, Prentice-Hall, Engelwood Cliffs, New Jersey, 1984. Simmons, Robert F. and Yu, Yeong-Ho, "Training a Neural Network to be a Context Sensitive Gram- mar," Proc. 5th Rocky Mountain AI Conf. Las Cruces, N.M., 1990. Tomita, M. Efficient Parsing for Natural Language, Kluwer Academic Publishers, Boston, Ma., 1985. Yu, Yeong-Ho, and Simmons, R.F. "Descending Epsilon in Back-Propagation: A Technique for Better Gen- eralization," In Press, Proc. Int. Jr. Conf. Neural Networks, San Diego, Calif., 1990. 129
1991
16
Two Languages Are More Informative Than One * Ido Dagan Computer Science Department Technion, Haifa, Israel and IBM Scientific Center Haifa, Israel [email protected] Alon Itai Computer Science Department Technion, Haifa, Israel itai~cs.technion.ac.il Ulrike Schwall IBM Scientific Center Institute for Knowledge Based Systems Heidelberg, Germany schwall@dhdibml Abstract This paper presents a new approach for resolving lex- ical ambiguities in one language using statistical data on lexical relations in another language. This ap- proach exploits the differences between mappings of words to senses in different languages. We concen- trate on the problem of target word selection in ma- chine translation, for which the approach is directly applicable, and employ a statistical model for the se- lection mechanism. The model was evaluated using two sets of Hebrew and German examples and was found to be very useful for disambiguation. 1 Introduction The resolution of hxical ambiguities in non-restricted text is one of the most difficult tasks of natural lan- guage processing. A related task in machine trans- lation is target word selection - the task of deciding which target language word is the most appropriate equivalent of a source language word in context. In addition to the alternatives introduced from the dif- ferent word senses of the source language word, the target language may specify additional alternatives that differ mainly in their usages. Traditionally various linguistic levels were used to deal with this problem: syntactic, semantic and pragmatic. Computationally the syntactic methods are the easiest, but are of no avail in the frequent situation when the different senses of the word show *This research was partially supported by grant number 120-741 of the Iarael Council for Research and Development the same syntactic behavior, having the same part of speech and even the same subcategorization frame. Substantial application of semantic or pragmatic knowledge about the word and its context for broad domains requires compiling huge amounts of knowl- edge, whose usefulness for practical applications has not yet been proven (Lenat et al., 1990; Nirenburg et al., 1988; Chodorow et al., 1985). Moreover, such methods fail to reflect word usages. It is known for many years that the use of a word in the language provides information about its meaning (Wittgenstein, 1953). Also, statistical approaches which were popular few decades ago have recently reawakened and were found useful for computational linguistics. Consequently, a possible (though partial) alternative to using manually constructed knowledge can be found in the use of statistical data on the oc- currence of lexical relations in large corpora. The use of such relations (mainly relations between verbs or nouns and their arguments and modifiers) for var- ious purposes has received growing attention in re- cent research (Church and Hanks, 1990; Zernik and Jacobs, 1990; Hindle, 1990). More specifically, two recent works have suggested to use statistical data on lexical relations for resolving ambiguity cases of PP-attachment (Hindle and Rooth, 1990) and pro- noun references (Dagan and Itai, 1990a; Dagan and Itai, 1990b). Clearly, statistical methods can be useful also for target word selection. Consider, for example, the Hebrew sentence extracted from the foreign news section of the daily Haaretz, September 1990 (tran- scripted to Latin letters). 130 (1) Nose ze maria' mi-shtei ha-mdinot mi-lahtom 'al hoze shalom. 
This sentence would translate into English as: (2) That issue prevented the two countries from signing a peace treaty. The verb 'lab_tom' has four word senses: 'sign', 'seal', 'finish' and 'close'. Whereas the noun 'hose' means both 'contract' and 'treaty'. Here the differ- ence is not in the meaning, but in usage. One possible solution is to consult a Hebrew corpus tagged with word senses, from which we would prob- ably learn that the sense 'sign' of 'lahtom' appears more frequently with 'hoze' as its object than all the other senses. Thus we should prefer that sense. How- ever, the size of corpora required to identify lexical relations in a broad domain is huge (tens of millions of words) and therefore it is usually not feasible to have such corpora manually tagged with word senses. The problem of choosing between 'treaty' and 'con- tract' cannot be solved using only information on He- brew, because Hebrew does not distinguish between them. The solution suggested in this paper is to iden- tify the lexical relationships in corpora of the target language, instead of the source language. Consult- ing English corpora of 150 million words, yields the following statistics on single word frequencies: 'sign' appeared 28674 times, 'seal' 2771 times, 'finish' ap- peared 15595 times, 'close' 38291 times, 'treaty' 7331 times and 'contract' 30757 times. Using a naive ap- proach of choosing the most frequent word yields (3) *That issue prevented the two countries from closing a peace contract. This may be improved upon if we use lexical rela- tions. We consider word combinations and count how often they appeared in the same syntactic relation as in the ambiguous sentence. For the above exam- ple, among the successfully parsed sentences of the corpus, the noun compound 'peace treaty' appeared 49 times, whereas the compound 'peace contract' did not appear at all; 'to sign a treaty' appeared 79 times while none of the other three alternatives appeared more than twice. Thus we first prefer 'treaty' to 'con- tract' because of the noun compound 'peace treaty' and then proceed to prefer 'sign' since it appears most frequently having the object 'treaty' (the or- der of selection is explained in section 3). Thus in this case our method yielded the correct translation. Using this method, we take the point of view that some ambiguity problems are easier to solve at the level of the target language instead of the source language. The source language sentences are con- sidered as a noisy source for target language sen- tences, and our task is to devise a target language model that prefers the most reasonable translation. Machine translation (MT) is thus viewed in part as a recognition problem, and the statistical model we use specifically for target word selection may be compared with other language models in recognition tasks (e.g. Katz (1985) for speech recognition). In contrast to this view, previous approaches in MT typically resolved examples like (1) by stating various constraints in terms of the source language (Niren- burg, 1987). As explained before, such constraints cannot be acquired automatically and therefore are usually limited in their coverage. The experiment conducted to test the statistical model clearly shows that the statistics on lexical re- lations are very useful for disambiguation. Most no- table is the result for the set of examples for Hebrew to English translation, which was picked randomly from foreign news sections in Israeli press. 
For this set, the statistical model was applicable for 70% of the ambiguous words, and its selection was then cor- rect for 92% of the cases. These results for target word selection in machine translation suggest to use a similar mechanism even if we are interested only in word sense disambigua- tion within a single language! In order to select the right sense of a word, in a broad coverage applica- tion, it is useful to identify lexical relations between word senses. However, within corpora of a single lan- guage it is possible to identify automatically only re- lations at the word level, which are of course not use- ful for selecting word senses in that language. This is where other languages can supply the solution, ex- ploiting the fact that the mapping between words and word senses varies significantly among different languages. For instance, the English words 'sign' and 'seal' correspond to a very large extent to two distinct senses of the Hebrew word 'lab_tom' (from example (1)). These senses should be distinguished by most applications of Hebrew understanding programs. To make this distinction, it is possible to do the same process that is performed for target word selection, by producing all the English alternatives for the lex- ical relations involving 'lahtom'. Then the Hebrew sense which corresponds to the most plausible En- glish lexical relations is preferred. This process re- quires a bilingual lexicon which maps each Hebrew sense separately into its possible translations, similar 131 to a Hebrew-Hebrew-English lexicon (like the Oxford English-English-Hebrew dictionary (Hornby et al., 1980)). In some cases, different senses of a Hebrew word map to the same word also in English. In these cases, the lexical relations of each sense cannot be identi- fied in an English corpus, and a third language is required to distinguish among these senses. As a long term vision, one can imagine a multilingual cor- pora based system, which exploits the differences be- tween languages to automatically acquire knowledge about word senses. As explained above, this knowl- edge would be crucial for lexical disambiguation, and will also help to refine other types of knowledge ac- quired from large corpora 1 . 2 The Linguistic Model The ambiguity of a word is determined by the num- ber of distinct, non-equivalent representations into which the word can be mapped (Van Eynde et al., 1982). In the case of machine translation the ambi- guity of a source word is thus given by the number of target representations for that word in the bilin- gual lexicon of the translation system. Given a spe- cific syntactic context the ambiguity can be reduced to the number of alternatives which may appear in that context. For instance, if a certain translation of a verb corresponds to an intransitive occurrence of that verb, then this possibility is eliminated when the verb occurs with a direct object. In this work we are interested only in those ambiguities that are left after applying all the deterministic syntactic con- straints. For example, consider the following Hebrew sen- tence, taken from the daily Haaretz, September 1990: (4) Diplomatim svurim ki hitztarrfuto shell Hon Sun magdila et ha.sikkuyim l-hassagat hitqad- dmut ba-sihot. Here, the ambiguous words in translation to En- glish are 'magdila', 'hitqaddmut' and 'sih_ot'. 
To fa- cilitate the reading, we give the translation of the sentence to English, and in each case of an ambiguous selection all the alternatives are listed within curly brackets, the first alternative being the correct one. 1For inatanoe, Hindie (1990) indicates the need to dis- tlnguhsh among aeaases of polysemic words for his statistical c]~Hic~tlon method. 132 (5) Diplomats believe that the joining of Hon Sun { increases I enlarges I magnifies } the chances for achieving { progress [ advance I advancement } in the { talks I conversations I calls }. We use the term a lezical relation to denote the cooccurrence relation of two (or possibly more) spe- cific words in a sentence, having a certain syntac- tic relationship between them. Typical relations are between verbs and their subjects, objects, comple- ments, adverbs and modifying prepositional phrases. Similarly, nouns are related also with their objects, with their modifying nouns in compounds and with their modifying adjectives and prepositional phrases. The relational representation of a sentence is sim- ply the list of all lexical relations that occur in the sentence. For our purpose, the relational represen- tation contains only those relations that involve at least one ambiguous word. The relational represen- tation for example (4) is given in (6) (for readability we represent the Hebrew word by its English equiv- alent, prefixed by 'H' to denote the fact that it is a Hebrew word): (6) a. (subj-verb: H-joining H-increase) b. (verb-obj: H-increase H-chance) c. (verb-obj: H-achieve H-progress) d. (noun-pp: H-progress H-in H-talks) The relational representation of a source sentence is reflected also in its translation to a target sen- tence. In some cases the relational representation of the target sentence is completely equivalent to that of the source sentence, and can be achieved just by substituting the source words with target words. In other cases, the mapping between source and target relations is more complicated, as is the case for the following German example: (7) Der Tisch gefaellt mir. -- I like the table. Here, the original subject of the source sentence becomes the object in the target sentence. This kind of mapping usually influences the translation process and is therefore encoded in components of the trans- lation program, either explicitly or implicitly, espe- cially in transfer based systems. Our model assumes that such a mapping of source language relations to target language relations is possible, an assumption that is valid for many practical cases. When applying the mapping of relations on one lexicai relation of the source sentence we get several alternatives for a target relation. For instance, ap- plying the mapping to example (6-c) we get three alternatives for the relation in the target sentence: (8) (verb-obj: achieve progress) (verb-obj: achieve advance) (verb-obj: achieve advancement) For example (6-d) we get 9 alternatives, since both 'H-progress' and 'H-talks' have three alterna- tive translations. In order to decide which alternative is the most probable, we count the frequencies of all the alter- native target relations in very large corpora. For ex- ample (8) we got the counts 20, 5 and 1 respectively. Similarly, the target relation 'to increase chance' was counted 20 times, while the other alternatives were not observed at all. These counts are given as input to the statistical model described in the next section, which performs the actual target word selection. 
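The step of expanding one source relation into its alternative target relations and attaching corpus counts to each can be pictured with a few lines of Python. The sketch is illustrative only: the toy bilingual lexicon and the relation-count table are invented stand-ins, except that the three frequencies are the ones quoted above for example (8).

from itertools import product

LEXICON = {                       # toy bilingual lexicon (illustrative)
    'H-achieve':  ['achieve'],
    'H-progress': ['progress', 'advance', 'advancement'],
    'H-talks':    ['talks', 'conversations', 'calls'],
}

CORPUS_COUNTS = {                 # (relation type, word1, word2) -> frequency
    ('verb-obj', 'achieve', 'progress'): 20,
    ('verb-obj', 'achieve', 'advance'): 5,
    ('verb-obj', 'achieve', 'advancement'): 1,
}

def target_alternatives(rel_type, source_w1, source_w2):
    """Map one source-language relation to all candidate target relations,
    paired with their observed corpus frequencies."""
    for w1, w2 in product(LEXICON[source_w1], LEXICON[source_w2]):
        yield (rel_type, w1, w2), CORPUS_COUNTS.get((rel_type, w1, w2), 0)

for alt, n in target_alternatives('verb-obj', 'H-achieve', 'H-progress'):
    print(alt, n)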
3 The Statistical Model

Our selection algorithm is based on the following statistical model. Consider first a single relation. The linguistic model provides us with several alternatives as in example (8). We assume that each alternative has a theoretical probability $p_i$ to be appropriate for this case. We wish to select the alternative for which $p_i$ is maximal, provided that it is significantly larger than the others. We have decided to measure this significance by the odds ratio of the two most probable alternatives, $\rho = p_1/p_2$.

However, we do not know the theoretical probabilities, therefore we get a bound for $\rho$ using the frequencies of the alternatives in the corpus. Let $\hat{p}_i$ be the probabilities as observed in the corpus ($\hat{p}_i = n_i/n$, where $n_i$ is the number of times that alternative $i$ appeared in the corpus and $n$ is the total number of times that all the alternatives for the relation appeared in the corpus).

For mathematical convenience we bound $\ln \rho$ instead of $\rho$. Assuming that samples of the alternative relations are distributed normally, we get the following bound with confidence $1-\alpha$:

$$\ln \frac{p_1}{p_2} \geq \ln \frac{\hat{p}_1}{\hat{p}_2} - Z_{1-\alpha}\, \sigma\!\left(\ln \frac{\hat{p}_1}{\hat{p}_2}\right)$$

where $Z$ is the confidence coefficient. We approximate the variance by the delta method (e.g. Johnson and Wichern (1982)):

$$\sigma^2\!\left(\ln \frac{\hat{p}_1}{\hat{p}_2}\right) \approx \frac{1}{p_1^2}\cdot\frac{p_1(1-p_1)}{n} + \frac{1}{p_2^2}\cdot\frac{p_2(1-p_2)}{n} + \frac{2\,p_1 p_2}{n\,p_1 p_2} = \frac{1}{n p_1} + \frac{1}{n p_2} = \frac{1}{n_1} + \frac{1}{n_2}$$

Therefore we get that with probability at least $1-\alpha$,

$$\ln \frac{p_1}{p_2} \geq \ln \frac{n_1}{n_2} - Z_{1-\alpha}\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}$$

We denote the right hand side (the bound) by $B_\alpha(n_1, n_2)$.

In sentences with several relations, we consider the best two alternatives for each relation, and take the relation for which $B_\alpha$ is largest. If this $B_\alpha$ is less than a specified threshold then we do not choose between the alternatives. Otherwise, we choose the most frequent alternative to this relation and select the target words appearing in this alternative. We then eliminate all the other alternative translations for the selected words, and accordingly eliminate all the alternatives for the remaining relations which involve these translations. In addition we update the observed probabilities for the remaining relations, and consequently the remaining $B_\alpha$'s. This procedure is repeated until all target words have been determined or the maximal $B_\alpha$ is below the threshold. The actual parameters we have used so far were $\alpha = 0.05$ and a threshold of $-0.5$ for $B_\alpha$.

To illustrate the selection algorithm, we give the details for example (6). The highest bound for the odds ratio ($B_\alpha = 1.36$) was received for the relation 'increase-chance', thus selecting the translation 'increase' for 'H-increase'. The second was $B_\alpha = 0.96$, for 'achieve-progress'. This selected the translations 'achieve' and 'progress', while eliminating the other senses of 'H-progress' in the remaining relations. Then, for the relation 'progress-in-talks' we got $B_\alpha = 0.3$, thus selecting the appropriate translation for 'H-talks'.

4 The Experiment

An experiment was conducted to test the performance of the statistical model in translation from Hebrew and German to English. Two sets of paragraphs were extracted randomly from current Hebrew and German press. The Hebrew set contained 10 paragraphs taken from foreign news sections, while the German set contained 12 paragraphs of text not restricted to a specific topic. Within these paragraphs we have (manually) identified the target word selection ambiguities, using a bilingual dictionary.
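The bound $B_\alpha(n_1, n_2)$ and the greedy selection loop of the preceding section can be rendered in a short Python sketch. This is our own illustrative version, not the authors' code: $Z_{1-\alpha}$ for $\alpha = 0.05$ is hard-coded as 1.645 (one-sided), the handling of alternatives whose competitor has zero counts is our choice rather than something spelled out in the paper, and the counts in the small demonstration partly reuse the figures quoted for example (8) and are otherwise invented.

import math

Z_95 = 1.645        # one-sided confidence coefficient for alpha = 0.05
THRESHOLD = -0.5    # below this bound no choice is made, as in the text

def bound(n1, n2):
    """B_alpha(n1, n2): lower confidence bound on ln(p1/p2), n1 >= n2 > 0."""
    return math.log(n1 / n2) - Z_95 * math.sqrt(1.0 / n1 + 1.0 / n2)

def best_two(alts):
    ranked = sorted(alts, key=lambda a: a['count'], reverse=True)
    return ranked[0], (ranked[1] if len(ranked) > 1 else None)

def select(relations):
    """Each relation is a list of alternatives; an alternative is
    {'choice': {source_word: target_word, ...}, 'count': corpus_frequency}.
    Returns the accumulated source -> target decisions."""
    decided, remaining = {}, [list(r) for r in relations]
    while remaining:
        scored = []
        for i, alts in enumerate(remaining):
            top, second = best_two(alts)
            if second is None or second['count'] == 0:
                # Our handling of unopposed alternatives (not from the paper).
                b = float('inf') if top['count'] > 0 else float('-inf')
            else:
                b = bound(top['count'], second['count'])
            scored.append((b, i, top))
        b, i, top = max(scored, key=lambda x: x[0])
        if b < THRESHOLD:
            break
        decided.update(top['choice'])
        remaining.pop(i)
        # Drop alternatives that contradict the decisions made so far, then
        # drop relations left with no alternatives.
        remaining = [[a for a in alts
                      if all(decided.get(s, t) == t for s, t in a['choice'].items())]
                     for alts in remaining]
        remaining = [alts for alts in remaining if alts]
    return decided

relations = [
    [{'choice': {'H-increase': 'increase'}, 'count': 20},      # the count 1 below
     {'choice': {'H-increase': 'enlarge'},  'count': 1}],      # is invented
    [{'choice': {'H-achieve': 'achieve', 'H-progress': 'progress'},    'count': 20},
     {'choice': {'H-achieve': 'achieve', 'H-progress': 'advance'},     'count': 5},
     {'choice': {'H-achieve': 'achieve', 'H-progress': 'advancement'}, 'count': 1}],
]
print(select(relations))
# -> {'H-increase': 'increase', 'H-achieve': 'achieve', 'H-progress': 'progress'}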
Some of the alternative transla- tions in the dictionary were omitted if it was judged that they will not be considered by an actual compo- nent of a machine translation program. These cases included very rare or archaic translations (that would not be contained in an MT lexicon) and alternatives that could be eliminated using syntactic knowledge (as explained in section 2) 2 . For each of the remain- ing alternatives, it was judged if it can serve as an acceptable translation in the given context. This a priori judgment was used later to decide whether the selection of the automatic procedure is correct. As a result of this process, the Hebrew set contained 105 ambiguous words (which had at least one unaccept- able translation) and the German set 54 ambiguous words. Now it was necessary to identify the lexical rela- tions within each of the sentences. As explained be- fore, this should be done using a source language parser, and then mapping the source relations to the target relations. At this stage of the research, we still do not have the necessary resources to per- form the entire process automatically s, therefore we have approximated it by translating the sentences into English and extracting the lexical relations us- ing the English Slot Grammar (ESG) parser (mc- 2Due to some technicalities, we have also restricted the experiment to cases in which all the relevant translations of a word consists exactly one English word, which is the most frequent situaticm. awe are currently integrating this process within GSG (German Slot Gr~nmm') and LMT-GE (the Germs~a to En- glish MT prototype). Cord, 1989) 4. Using this parser we have classified the lexical relations to rather general classes of syn- tactic relations, based on the slot structure of ESG. The important syntactic relations used were between a verb and its arguments and modifiers (counting as one class all objects, indirect objects, complements and nouns in modifying prepositional phrases) and between a noun and its arguments and modifiers (counting as one class all noun objects, modifying nouns in compounds and nouns in modifying prepo- sitional phrases). The success of using this general level of syntactic relations indicates that even a rough mapping of source to target language relations would be useful for the statistical model. The statistics for the alternative English relations in each sentence were extracted from three cor- pora: The Washington Post articles (about 40 mil- lion words), Associated Press news wire (24 million) and the Hansard corpus of the proceedings of the Canadian Parliament (85 million words). The statis- tics were extracted only from sentences of up to 25 words (to facilitate parsing) which contained alto- gether about 55 million words. The lexical relations in the corpora were extracted by ESG, in the same way they were extracted for the English version of the example sentences (see Dagan and Itai (1990a) for a discussion on using an automatic parser for extract- ing lexical relations from a corpus, and for the tech- nique of acquiring the statistics). The parser failed to produce any parse for about 35% of the sentences, which further reduced the actual size of the corpora which was used. 5 Evaluation Two measurements, applicability and precision, are used to evaluate the performance of the statistical model. The applicability denotes the proportion of cases for which the model performed a selection, i.e. those cases for which the bound Bapassed the thresh- old. 
The precision denotes the proportion of cases for which the model performed a correct selection out of all the applicable cases. We compare the precision of the model to that of the "word frequencies" procedure, which always selects the most frequent target word. This naive "straw-man" is less sophisticated than other meth- ods suggested in the literature but it is useful as a common benchmark (e.g. Sadler (1989)) since it can 4The parsing process was controlled manually to make sure that we do not get wrong relational representation of the exo amp]es due to parsing errors. 134 be easily implemented. The success rate of the "word frequencies" procedure can serve as a measure for the degree of lexical ambiguity in a given set of examples, and thus different methods can be partly compared by their degree of success relative to this procedure. Out of the 105 ambiguous Hebrew words, for 32 the bound Badid not pass the threshold (applicabil- ity of 70%). The remaining 73 examples were dis- tributed according to the following table: [ Hebrew-Engiish ]] Word Frequencies correct I incorrect I Relations Statistics [ correct Thus the precision of the statistical model was 92% (67/73) 5 while relying just on word frequencies yields 64% (47/73). Out of the 54 ambiguous German words, for 22 the bound Badid not pass the threshold (applicability of 59%). The remaining 32 examples were distributed according to the following table: Oerm English II Word equeoci I ,, correct ] incorrect Relations Statistics [ correct 8 in°°rre ' [[ I 0 I Thus the precision of the statistical model was 75% (24/32), while relying just on word frequencies yields 53% (18/32). We attribute the lower success rate for the German examples to the fact that they were not restricted to topics that are well represented in the corpus. Statistical analysis for the larger set of Hebrew ex- amples shows that with 95% confidence our method succeeds in at least 86% of the applicable examples (using the parameters of the distribution of propor- tions). With the same confidence, our method im- proves the word frequency method by at least 18% (using confidence interval for the difference of pro- portions in multinomial distribution, where the four cells of the multinomial correspond to the four entries in the result table). In the examples that were treated correctly by our 5An a posteriorl observation showed that in three of the six errors the selection of the model was actually acceptable, and the a priori judgment of the hnman translator was too se- vere. For example, in one of these cases the statistics selected the expression 'to begin talks' while the human translator re- garded this expression as incorrect and selected 'to start talks'. If we consider these cases as correct then there are only three selection errors, getting a 96% precision. method, such as the examples in the previous sec- tions, the statistics succeeded to capture two major types of disambiguating data. In preferring 'sign- treaty' upon 'seal-treaty', the statistics reflect the relevant semantic constraint. In preferring 'peace- treaty' upon 'peace-contract', the statistics reflect the hxical usage of 'treaty' in English which differs from the usage of 'h_oze' in Hebrew. 6 Failures and Possible Im- provements A detailed analysis of the failures of the method is most important, as it both suggests possible improve- ments for the model and indicates its limitations. 
As described above, these failures include either the cases for which the method was not applicable (no selection) or the cases in which it made an incorrect selection. The following paragraphs list the various reasons for both types. 6.1 Inapplicability Insufficient data. This was the reason for nearly all the cases of inapplicability. For instance, none of the alternative relations 'an investigator of corrup- tion' (the correct one) or 'researcher of corruption' (the incorrect one) was observed in the parsed cor- pus. In this case it is possible to perform the correct selection if we used only statistics about the cooc- currences of 'corruption' with either 'investigator' or 'researcher', without looking for any syntactic rela- tion (as in Church and Hanks (1990)). The use of this statistic is a subject for further research, but our initial data suggests that it can substantially increase the applicability of the statistical method with just a little decrease in its precision. Another way to deal with the lack of statistical data for the specific words in question is to use statistics about similar words. This is the basis for Sadler's Analogical Semantics (1989) which has not yet proved effective. His results may be improved if more sophisticated techniques and larger corpora are used to establish similarity between words (such as in (Hindle, 1990)). Conflicting data. In very few cases two alterna- tives were supported equally by the statistical data, thus preventing a selection. In such cases, both alter- natives are valid at the independent level of the lexi- cal relation, but may be inappropriate for the specific context. For instance, the two alternatives of 'to take 135 a job' or 'to take a position' appeared in one of the examples, but since the general context concerned with the position of a prime minister only the latter was appropriate. In order to resolve such examples it may be useful to consider also cooccurrences of the ambiguous word with other words in the broader context. For instance, the word 'minister' seems to cooccur in the same context more frequently with 'position' than with 'job'. In another example both alternatives were appro- priate also for the specific context. This happened with the German verb 'werfen', which may be trans- lated (among other options) as 'throw', 'cast' or 'score'. In our example 'werfen' appeared in the con- text of 'to throw/cast light' and these two correct al- ternatives had equal frequencies in the corpus ('score' was successfully eliminated). In such situations any selection between the alternatives will be appropriate and therefore any algorithm that handles conflicting data will work properly. 6.2 Incorrect Selection Using the inappropriate relation. One of the ex- amples contained the Hebrew word 'matzav', which two of its possible translations are 'state' and 'po- sition'. The phrase which contained this word was: 'to put an end to the {state I position} of war ... '. The ambiguous word is involved in two syntactic rela- tions, being a complement of 'put' and also modified by 'war'. The corresponding frequencies were: (9) verb-comp: put-position 320 verb-comp: put-state 18 noun-nob j: state-war 13 noun-nob j: position-war 2 The bound of the odds ration (Ba) for the first re- lation was higher than for the second, and therefore this relation determined the translation as 'position'. However, the correct translation should be 'state', as determined by the second relation. 
This example suggests that while ordering the in- volved relations (or using any other weighting mech- anism) it may be necessary to give different weights to the different types of syntactic relations. For in- stance, it seems reasonable that the object of a noun should receive greater weight in selecting the noun's sense than the verb for which this noun serves as a complement. Confusing senses. In another example, the Hebrew word 'qatann', which two of its meanings are 'small' and 'young', modified the word 'sikkuy', which means 'prospect' or 'chance'. In this context, the correct sense is necessarily 'small'. However, the relation that was observed in the corpus was 'young prospect', relating to the human sense of 'prospect' which appeared in sport articles (a promising young person). This borrowed sense of 'prospect' is nec- essarily inappropriate, since in Hebrew it is repre- sented by the equivalent of 'hope' ('tiqva'), and not by 'sikkuy'. The reason for this problem is that after producing the possible target alternatives, our model ignores the source language input as it uses only a mono- lingual target corpus. This can be solved if we use an aligned bilingual corpus, as suggested by Sadler (1989) and Brown et al. (1990). In such a cor- pus the occurrences of the relation 'young prospect' will be aligned to the corresponding occurrences of the Hebrew word 'tiqva', and will not be used when the Hebrew word 'sikkuy' is involved. Yet, it should be brought in mind that an aligned corpus is the re- sult of manual translation, which can be viewed as a manual tagging of the words with their equivalent senses in the other language. This resource is much more expensive and less available than the untagged monolingual corpus, while it seems to be necessary only for relatively rare situations. Lack of deep understanding. By their nature, statistical methods rely on large quantities of shallow information. Thus, they are doomed to fail when dis- ambiguation can rely only on deep understanding of the text and no other surface cues are available. This happened in one of the Hebrew examples, where the two alternatives were either 'emigration law' or 'im- migration law' (the Hebrew word 'hagira' is used for both subsenses). While the context indicated that the first alternative is correct, the statistics preferred the second alternative. It seems that such cases are quiet rare, but only further evaluation will show the extent to which deep understanding is really needed. 7 Conclusions The method presented takes advantage of two lin- guistic phenomena: the different usage of words and word senses among different languages and the im- portance of lexical cooccurrences within syntactic re- lations. The experiment shows that these phenom- ena are indeed useful for practical disambiguation. We suggest that the high precision received in the experiment relies on two characteristics of the am- 136 biguity phenomena, namely the sparseness and re- dundancy of the disambiguating data. By sparseness we mean that within the large space of alternative interpretations produced by ambiguous utterances, only a small portion is commonly used. Therefore the chance of an inappropriate interpretation to be observed in the corpus (in other contexts) is low. Redundancy relates to the fact that different infor- mants (such as different lexical relations or deep un- derstanding) tend to support rather than contradict one another, and therefore the chance of picking a "wrong" informant is low. 
The examination of the failures suggests that fu- ture research may improve both the applicability and precision of the model. Our next goal is to handle in- applicable cases by using cooccurrence data regard- less of syntactic relations and similarities between words. We expect that increasing the applicability will lead to some decrease in precision, similar to the tradeoff between recall and precision in information retrieval. Pursuing this tradeoff will improve the per- formance of the method and reveal its limitations. 8 Acknowledgments We would like to thank Mori Rimon, Peter Brown, Ayala Cohen, Ulrike Rackow, Herb Leass and Hans Karlgren for their help and comments. References [1] Brown, P., Cocks, J., Della Pietra, S., Della Pietra, V., Jelinek, F., Mercer, R.L. and Rossin P.S., A statistical approach to language transla- tion, Computational Linguistics, vol. 16(2), 79- 85 (1990). [2] Chodorow, M. S., R. J. Byrd and G. E. Heidron, Extracting Semantic Hierarchies from a Large On-Line Dictionary. Proc. of the 23rd Annual Meeting of the ACL, 299-304 (1985). [3] Church, K. W., and Hanks, P., Word associa- tion norms, mutual information, and Lexicogra- phy, Computational Linguistics, vol. 16(1), 22- 29 (1990). [4] Dagan, I. and A. Itai, Automatic Acquisition of Constraints for the Resolution of Anaphora References and Syntactic Ambiguities, COLING 1990, Helsinki, Finland. [5] Dagan, I. and A. Itai, A Statistical Filter for Resolving Pronoun References, Proc. of the 7th Israeli Sym. on Artificial Intelligence and Com- puter Vision, 1990. [6] Hindle, D. Noun Classification from Predicate- Argument Structures, Proc. of the 28rd Annual Meeting of the ACL, (1990). [7] Hindle D. and M. Rooth, Structural Ambiguity and Lexical Relations, Proc. of the Speech and Natural Language Workshop, (DARPA), June 1990. [8] Hornby, A. S., C. Ruse, J. A. Reif and Y. Levy, Ozford Student's Dictionanary for He- brew Speakers, Kernerman Publishing Ltd, Lon- nie Kahn & Co. Ltd. (1986). [9] Johnson, R. A. and D. W. Wichern, Multivariate Statistical Analysis, Prentice-Hall, 1982. [10] Katz, S., Recursive m-gram language model via a smoothing of Turing's formula, IBM Tech. Dis- closure Bull., 1985. [11] Lenat, D. B., R. V. Guha, K. Pittman, D. Pratt and M. Shepherd, Cyc: toward programs with common sense, Comm. ACM, vol. 33(8), 1990. [12] McCord, M. C., A new version of slot grammar, Research Report RC 1~506, IBM Research Di- vision, Yorktown Heights, NY, 1989. [13] Sirenburg, S., (ed.), Machine Translation, Cam- bridge University Press (1987). [14] Nirenburg, S., I. Monarch, T. Kaufmann, I. Nirenburg and J. Carbonell. Acquisition of Very Large Knowledge Bases: Methodology, Tools and Applications, Center for Machine Translation, Carnegie-Mellon, CMU-CMT-88- 108, (1988). [15] Sadler, V., Working with analogical semantics: disambiguation techniques in DLT, Foris Publi- cations, 1989. [16] Van Eynde, F. et al.: The Task of Transfer vis-a- vis Analysis and Generation. Eurotra Final Re- port ET-10-B/NL, (1982). [17] Wittgenstein, L. Philosophical Investigations, Oxford (1953). [18] Zernik U., and P. Jacobs, Tagging for Learn- ing: Collecting Thematic Relations from Cor- pus. Proc. COLING 1990. 137
1991
17
LEARNING PERCEPTUALLY-GROUNDED SEMANTICS IN THE L0 PROJECT Terry Regier* International Computer Science Institute 1947 Center Street, Berkeley, CA, 94704 (415) 642-4274 x 184 [email protected] U • TR "Above" Figure 1: Learning to Associate Scenes with Spatial Terms ABSTRACT A method is presented for acquiring perceptually- grounded semantics for spatial terms in a simple visual domain, as a part of the L0 miniature language acquisi- tion project. Two central problems in this learning task are (a) ensuring that the terms learned generalize well, so that they can be accurately applied to new scenes, and (b) learning in the absence of explicit negative ev- idence. Solutions to these two problems are presented, and the results discussed. 1 Introduction The L0 language learning project at the International Computer Science Institute [Feldman et al., 1990; We- ber and Stolcke, 1990] seeks to provide an account of lan- guage acquisition in the semantic domain of spatial rela- tions between geometrical objects. Within this domain, the work reported here addresses the subtask of learn- ing to associate scenes, containing several simple objects, with terms to describe the spatial relations among the objects in the scenes. This is illustrated in Figure 1. For each scene, the learning system is supplied with an indication of which object is the reference object (we call this object the landmark, or LM), and which object is the one being located relative to the reference object (this is the trajector, or TR). The system is also supplied with a single spatial term that describes the spatial relation *Supported through the International Computer Science Institute. portrayed in the scene. It is to learn to associate all applicable terms to novel scenes. The TR is restricted to be a single point for the time being; current work is directed at addressing the more general case of an arbitrarily shaped TR. Another aspect of the task is that learning must take place in the absence of explicit negative instances. This condition is imposed so that the conditions under which learning takes place will be similar in this respect to those under which children learn. Given this, there are two central problems in the sub- task as stated: • Ensuring that the learning will generalize to scenes which were not a part of the training set. This means that the region in which a TR will be consid- ered "above" a LM may have to change size, shape, and position when a novel LM is presented. • Learning without explicit negative evidence. This paper presents solutions to both of these prob- lems. It begins with a general discussion of each of the two problems and their solutions. Results of training are then presented. Then, implementation details are discussed. And finally, some conclusions are presented. 2 Generalization and Parameterized Regions 2.1 The Problem The problem of learning whether a particular point lies in a given region of space is a foundational one, with sev- eral widely-known "classic" solutions [Minsky and Pa- pert, 1988; Rumelhart and McClelland, 1986]. The task at hand is very similar to this problem, since learning when "above" is an appropriate description of the spatial relation between a LM and a point TR really amounts to learning what the extent of the region "above" a LM is. However, there is an important difference from the classic problem. We are interested here in learning whether or not a given point (the TR) lies in a region (say "above", "in") which is itself located relative to a LM. 
Thus, the shape, size, and position of the region are dependent on the shape, size, and position of the current LM. For example, the area "above" a small triangle to- ward the top of the visual field will differ in shape, size, 138 and position from the area "above" a large circle in the middle of the visual field. 2.2 Parameterized Regions Part of the solution to this problem lies in the use of pa- rameterized regions. Rather than learn a fixed region of space, the system learns a region which is parameterized by several features of the LM, and is thus dependent on them. The LM features used are the location of the center of mass, and the locations of the four corners of the smallest rectangle enclosing the LM (the LM's "bounding-box"). Learning takes place relative to these five "key points". Consider Figure 2. The figure in (a) shows a region in 2-space learned using the intersection of three half- planes, as might be done using an ordinary perceptron. In (b), we see the same region, but learned relative to the five key points of an LM. This means simply that the lines which define the half-planes have been constrained to pass through the key points of the LM. The method by which this is done is covered in Section 5. Further details can be found in [Re#eL 1990]. The critical point here is that now that this region has been learned relative to the LM key points, it will change position and size when the LM key points change. This is illustrated in (c). Thus, the region is parameterized by the LM key points. 2.3 Combining Representations While the use of parameterized regions solves much of the problem of generalizability across LMs, it is not suf- ficient by itself. Two objects could have identical key points, and yet differ in actual shape. Since part of the definition of "above" is that the TR is not in the inte- rior of the LM, and since the shape of the interior of the LM cannot be derived from the key points alone, the key points are an underspecification of the LM for our purposes. The complete LM specification includes a bitmap of the interior of the LM, the "LM interior map". This is simply a bitmap representation of the LM, with those bits set which fall in the interior of the object. As we shall see in greater detail in Section 5, this representa- tion is used together with parameterized regions in learn- ing the perceptual grounding for spatial term semantics. This bitmap representation helps in the case mentioned above, since although the triangle and square will have identical key points, their LM interior maps will differ. In particular, since part of the learned "definition" of a point being above a LM should be that it may not be in the interior of the LM, that would account for the dif- ference in shape of the regions located above the square and above the triangle. Parameterized regions and the bitmap representation, when used together, provide the system with the ability to generalize across LMs. We shall see examples of this after a presentation of the second major problem to be tackled. (a) omoel~lm~ w w m w ~ w w l nououooooono~n~ \ : /\ (b) " :%./,: .............. (c) Figure 2: Parameterized Regions 139 Figure 3: Learning "Above" Without Negative Instances 3 Learning Without Explicit Negative Evidence 3.1 The Problem Researchers in child language acquisition have often ob- served that the child learns language apparently with- out the benefit of negative evidence [Braine, 1971; Bowerman, 1983; Pinker, 1989]. 
While these researchers have focused on the "no negative evidence" problem as it relates to the acquisition of grammar, the problem is a general one, and appears in several different aspects of language acquisition. In particular, it surfaces in the context of the learning of the semantics of lexemes for spatial relations. The methods used to solve the prob- lem here are of general applicability, however, and are not restricted to this particular domain. The problem is best illustrated by example. Consider Figure 3. Given the landmark (labeled "LM"), the task is to learn the concept "above". We have been given four positive instances, marked as small dotted circles in the figure, and no negative instances. The problem is that we want to generalize so that we can recognize new instances of "above" when they are presented, but since there are no negative instances, it is not clear where the boundaries of the region "above" the LM should be. One possible generalization is the white region containing the four instances. Another possibility is the union of that white region with the dark region surrounding the LM. Yet another is the union of the light and dark regions with the interior of the LM. And yet another is the cor- rect one, which is not closed at the top. In the absence of negative examples, we have no obvious reason to prefer one of these generalizations over the others. One possible approach would be to take the smallest region that encompasses all the positive instances. It should be clear, however, that this will always lead to closed regions, which are incorrect characterizations of such spatial concepts as "above" and "outside". Thus, this cannot be the answer. And yet, humans do learn these concepts, apparently in the absence of negative instances. The following sec- tions indicate how that learning might take place. 3.2 A Possible Solution and its Drawbacks One solution to the "no negative evidence" problem which suggests itself is to take every positive instance for one concept to be an implicit negative instance for all other spatial concepts being learned. There are prob- lems with this approach, as we shall see, but they are surmountable. There are related ideas present in the child lan- guage literature, which support the work presented here. [Markman, 1987] posits a "principle of mutual exclusiv- ity" for object naming, whereby a child assumes that each object may only have one name. This is to be viewed more as a learning strategy than as a hard-and- fast rule: clearly, a given object may have many names (an office chair, a chair, a piece of furniture, etc.). The method being suggested really amounts to a principle of mutual exclusivity for spatial relation terms: since each spatial relation can only have one name, we take a pos- itive instance of one to be an implicit negative instance for all others. In a related vein, [Johnston and Slobin, 1979] note that in a study of children learning locative terms in En- glish, Italian, Serbo-Croatian, and qMrkish, terms were learned more quickly when there was little or no syn- onymy among terms. They point out that children seem to prefer a one-to-one meaning-to-morpheme mapping; this is similar to, although not quite the same as, the mutual exclusivity notion put forth here. 1 In linguistics, the notion that the meaning of a given word is partly defined by the meanings of other words in the language is a central idea of structuralism. 
This has been recently reiterated by [MacWhinney, 1989]: "the semantic range of words is determined by the particular contrasts in which they are involved". This is consonant with the view taken here, in that contrasting words will serve as implicit negative instances to help define the boundaries of applicability of a given spatial term.

There is a problem with mutual exclusivity, however. Using it as a method for generating implicit negative instances can yield many false negatives in the training set, i.e. implicit negatives which really should be positives. Consider the following set of terms, which are the ones learned by the system described here:

• above
• below
• on
• off
• inside
• outside
• to the left of
• to the right of

1 They are not quite the same since a difference in meaning need not correspond to a difference in actual reference. When we call a given object both a "chair" and a "throne", these are different meanings, and this would thus be consistent with a one-to-one meaning-to-morpheme mapping. It would not be consistent with the principle of mutual exclusivity, however.

If we apply mutual exclusivity here, the problem of false negatives arises. For example, not all positive instances of "outside" are accurate negative instances for "above", and indeed all positive instances of "above" should in fact be positive instances of "outside", and are instead taken as negatives, under mutual exclusivity. "Outside" is a term that is particularly badly affected by this problem of false implicit negatives: all of the spatial terms listed above except for "in" (and "outside" itself, of course) will supply false negatives to the training set for "outside".

The severity of this problem is illustrated in Figure 4. In these figures, which represent training data for the spatial concept "outside", we have tall, rectangular landmarks, and training points2 relative to the landmarks. Positive training points (instances) are marked with circles, while negative instances are marked with X's. In (a), the negative instances were placed there by the teacher, showing exactly where the region not outside the landmark is. This gives us a "clean" training set, but the use of teacher-supplied explicit negative instances is precisely what we are trying to get away from. In (b), the negative instances shown were derived from positive instances for the other spatial terms listed above, through the principle of mutual exclusivity. Thus, this is the sort of training data we are going to have to use. Note that in (b) there are many false negative instances among the positives, to say nothing of the positions which have been marked as both positive and negative. This issue of false implicit negatives is the central problem with mutual exclusivity.

3.3 Salvaging Mutual Exclusivity

The basic idea used here, in salvaging the idea of mutual exclusivity, is to treat positive instances and implicit negative instances differently during training: implicit negatives are viewed as supplying only weak negative evidence. The intuition behind this is as follows: since the implicit negatives are arrived at through the application of a fallible heuristic rule (mutual exclusivity), they should count for less than the positive instances, which are all assumed to be correct. Clearly, the implicit negatives should not be seen as supplying excessively weak negative evidence, or we revert to the original problem of learning in the (virtual) absence of negative instances. But equally clearly, the training set noise supplied by false negatives is quite severe, as seen in the figure above. So this approach is to be seen as a compromise, so that we can use implicit negative evidence without being overwhelmed by the noise it introduces in the training sets for the various spatial concepts. The details of this method, and its implementation under back-propagation, are covered in Section 5.

2 I.e. trajectors consisting of a single point each

Figure 4: Ideal and Realistic Training Sets for "Outside"
But equally clearly, the training set noise supplied by false negatives is quite severe, as seen in the figure above. So this approach is to be seen as a compromise, so that we can use implicit negative evidence without being over- whelmed by the noise it introduces in the training sets for the various spatial concepts. The details of this method, and its implementation un- der back-propagation, are covered in Section 5. However, 2I.e. trajectors consisting of a single point each (a) O O O Q X X M . O e o m o o X X O G . . . . . O X X O = , X o X I ~ m m l o Lx • -~O QO O 0 O ® ® X X X x x Q x x X x x x x x ~-. - x-I xx ® X X O - X • • - 0 X • - - - X X X 0 X X . . . . . 0 X X Q - - x - • 0 X X • • • - • X X - • * X X X X Q - X o - * X 0 • o - X . X X X " " " • " 0 X 0 x O ~- x -.-~ ® 0 G 0 X X X X X X (b) Figure 4: Ideal and Realistic Training Sets for "Outside" 141 this is a very general solution to the "no negative evi- dence" problem, and can be understood independently of the actual implementation details. Any learning method which allows for weakening of evidence should be able to make use of it. In addition, it could serve as a means for addressing the "no negative evidence" problem in other domains. For example, a method analogous to the one suggested here could be used for object naming, the do- main for which Markman suggested mutual exclusivity. This would be necessary if the problem of false implicit negatives is as serious in that domain as it is in this one. 4 Results This section presents the results of training. Figure 5 shows the results of learning the spatial term "outside", first without negative instances, then using implicit negatives obtained through mutual exclusivity, but without weakening the evidence given by these, and finally with the negative evidence weakened. The landmark in each of these figures is a triangle. The system was trained using only rectangular land- marks. The size of the black circles indicates the appropri- ateness, as judged by the trained system, of using the term "outside" to refer to a particular position, relative to the LM shown. Clearly, the concept is learned best when implicit negative evidence is weakened, as in (c). When no negatives at all are used, the system overgen- eralizes, and considers even the interior of the LM to be "outside" (as in (a)). When mutual exclusivity is used, but the evidence from implicit negatives is not weakened, the concept is learned very poorly, as the noise from the false implicit negatives hinders the learning of the con- cept (as in (b)). Having all implicit negatives supply only weak negative evidence greatly alleviates the prob- lem of false implicit negatives in the training set, while still enabling us to learn without using explicit, teacher- supplied negative instances. It should be noted that in general, when using mutual exclusivity without weakening the evidence given by im- plicit negatives, the results are not always identical with those shown in Figure 5(b), but are always of approxi- mately the same quality. Regarding the issue of generalizability across LMs, two points of interest are that: • The system had not been trained on an LM in ex- actly this position. • The system had never been trained on a triangle of any sort. Thus, the system generalizes well to new LMs, and learns in the absence of explicit negative instances, as desired. All eight concepts were learned successfully, and exhibited similar generalization to new LMs. 
5 Details The system described in this section learns perceptually- grounded semantics for spatial terms using the (a) O000000000O000@0000@ O000000000000000000e O000000000000O00000@ OOO0000000000000000@ 00O0000@OOO00000000@ O000OO0@O00OOO000O0~ O00O00O@OO00OO000OO@ 00O000OOO0000000000@ 0000000@000000~0000@ 00000O000000~0000@ OOOOOOOOO0~OOO0@ oooooooo~M~OOOOeI oooooo~M~OOOOel ooooo~M~~OoooeI oooo~ll~M~J~ooooel oooooooooooooooooooel 00OOOOO0OOOOOOOO0OO@l 000O0OO0OO0OO000000~I OOO0000000000000000@I (b) "I 6oo0000@000-. ooo ooe0000@000., .oooe ooo0000@0000* .oooe oooOOO000000 • .OOOe • eooO00@OOOO, ooeoe oooe000@0000- oooee • ooo000@@O000 ooooe ooe000@00000, -ooooe ooo00O@00000-~ooooe @oooo00@0000~[~Jooooe ooooo00@00~ooooe ooooo000~~ooooa ooooo0W~m~~ooooe oooooEl~~oeeoe ooool'd~l;~J~JJJJJ~ooooq eooo-.oooooooooooooa eooo-.oooooooooooooa oooo-~gOOOOOO00oooll VOID- QOQgOQDOOOOO|! I ~ o o e o l m ~ M ~ m A ~ d (c) o@ooooo@@oooooooooo@ ooooooooooooooooooo@ @oooooo@ooooooooooo@ @oooooooooooooooooo@ @oooooooooooooooooo@ oooooooooooooooooooe ooooooooooooooooooo@ ooooooo@ooooooooooo@ oooooooo@ooooo~ooooe ooooooooooooEII~]oooo@ oooooooo00EII~!~00oooql ooooooo0130131~Jl~00oOOel ooooo0~131~D~E~l~0oooel oooO~[3[ZII~EJOOOO@l ooooooooooooo0ooooo@l ooooooooooooooooooo|J oooooooooooooooo000@ I 0000000000000o0000011 Figure 5: "Outside" without Negatives, and with Strong and Weak Implicit Negatives 142 quiekprop 3 algorithm [Fahlman, 1988], a variant on back-propagation [Rumelhart and McClelland, 1986]. This presentation begins with an exposition of the rep- resentation used, and then moves on to the specific net- work architecture, and the basic ideas embodied in it. The weakening of evidence from implicit negative in- stances is then discussed. 5.1 Representation of the LM and TR As mentioned above, the representation scheme for the LM comprises the following: • A bitmap in which those pixels corresponding to the interior of the LM are the only ones set. • The z, y coordinates of several "key points" of the LM, where z and y each vary between 0.0 and 1.0, and indicate the location of the point in question as a fraction of the width or height of the image. The key points currently being used are the center of mass (CoM) of the LM, and the four corners of the LM's bounding box (UL: upper left, UR: upper right, LL: lower left, LR: lower right). The (punctate) TR is specified by the z, V coordinates of the point. The activation of an output node of the system, once trained for a particular spatial concept, represents the appropriateness of using the spatial term in describing the TR's location, relative to the LM. 5.2 Architecture Figure 6 presents the architecture of the system. The eight spatial terms mentioned above are learned simul- taneously, and they share hidden-layer representations. 5.2.1 Receptive Fields Consider the right-hand part of the network, which receives input from the LM interior map. Each of the three nodes in the cluster labeled "I" (for interior) has a receptive field of five pixels. When a TR location is specified, the values of the five neighboring locations shown in the LM interior map, centered on the current TR location, are copied up to the five input nodes. The weights on the links between these five nodes and the three nodes labeled "I" in the layer above define the receptive fields learned. When the TR position changes, five new LM interior map pixels will be "viewed" by the receptive fields formed. 
This allows the system to detect the LM interior (or a border between interior and exterior) at a given point and to bring that to bear if that is a relevant semantic feature for the set of spatial terms being learned.

5.2.2 Parameterized Regions

The remainder of the network is dedicated to computing parameterized regions. Recall that a parameterized region is much the same as any other region which might be learned by a perceptron, except that the lines which define the relevant half-planes are constrained to go through specific points. In this case, these are the key points of the LM.

3 Quickprop gets its name from its ability to quickly converge on a solution. In most cases, it exhibits faster convergence than that obtained using conjugate gradient methods [Fahlman, 1990].

A simple two-input perceptron unit defines a line in the x, y plane, and selects a half-plane on one side of it. Let w_x and w_y refer to the weights on the links from the x and y inputs to the perceptron unit. In general, if the unit's function is a simple threshold, the equation for such a line will be

x w_x + y w_y = 0,   (1)

i.e. the net input to the perceptron unit will be

net_in = x w_x + y w_y.   (2)

Note that this line always passes through the origin: (0,0). If we want to force the line to pass through a particular point (x_t, y_t) in the plane, we simply shift the entire coordinate system so that the origin is now at (x_t, y_t). This is trivially done by adjusting the input values such that the net input to the unit is now

net_in = (x - x_t) w_x + (y - y_t) w_y.   (3)

Given this, we can easily force lines to pass through the key points of an LM, as discussed above, by setting (x_t, y_t) appropriately for each key point. Once the system has learned, the regions will be parameterized by the coordinates of the key points, so that the spatial concepts will be independent of the size and position of any particular LM.

Now consider the left-hand part of the network. This accepts as input the x, y coordinates of the TR location and the LM key points, and the layer above the input layer performs the appropriate subtractions, in line with equation 3. Now each of the nodes in the layer above that is viewing the TR in a different coordinate system, shifted by the amount specified by the LM key points. Note that in the BB cluster there is one node for each corner of the LM's bounding-box, while the CoM cluster has three nodes dedicated to the LM's center of mass (and thus three lines passing through the center of mass). This results in the computation, and through weight updates, the learning, of a parameterized region. Of course, the hidden nodes (labeled "I") that receive input from the LM interior map are also in this hidden layer. Thus, receptive fields and parameterized regions are learned together, and both may contribute to the learned semantics of each spatial term. Further details can be found in [Regier, 1990].

5.3 Implementing "Weakened" Mutual Exclusivity

Now that the basic architecture and representations have been covered, we present the means by which the evidence from implicit negative instances is weakened. It is assumed that training sets have been constructed using mutual exclusivity as a guiding principle, such that each negative instance in the training set for a given spatial term results from a positive instance for some other term.
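To make the coordinate-shift trick of equation (3) concrete, here is a minimal sketch in Python. It is an illustration rather than the authors' implementation: the key-point values, the weights, and the sigmoid squashing are assumptions made for the example; only the shifted net-input computation follows equation (3).

```python
import numpy as np

def half_plane_unit(tr_xy, key_point, w):
    """A two-input unit whose decision line is constrained to pass through an LM key point.

    tr_xy     : (x, y) of the point trajector, as fractions of image width/height
    key_point : (x_t, y_t) of one LM key point (center of mass or a bounding-box corner)
    w         : (w_x, w_y) the unit's weights

    Computes net_in = (x - x_t) * w_x + (y - y_t) * w_y, i.e. equation (3):
    shifting the inputs moves the origin to the key point, so the line net_in = 0
    passes through that key point no matter what weights are learned.
    """
    (x, y), (x_t, y_t), (w_x, w_y) = tr_xy, key_point, w
    net_in = (x - x_t) * w_x + (y - y_t) * w_y
    return 1.0 / (1.0 + np.exp(-net_in))   # squashed activation in (0, 1)

# Hypothetical key points for one landmark (center of mass plus bounding-box corners).
key_points = {"CoM": (0.5, 0.4), "UL": (0.3, 0.6), "UR": (0.7, 0.6),
              "LL": (0.3, 0.2), "LR": (0.7, 0.2)}

w_above = (0.0, 4.0)      # illustrative weights, not learned values
tr = (0.55, 0.9)          # a point trajector well above this landmark
print(half_plane_unit(tr, key_points["UR"], w_above))
```

Because the same weights are reapplied relative to whatever key points the current landmark supplies, the learned half-planes move and rescale with the landmark, which is the sense in which the region is parameterized.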
Figure 6: Network Architecture

• Evidence from implicit negative instances is weakened simply by attenuating the error caused by these implicit negatives.
• Thus, an implicit negative instance which yields an error of a given magnitude will contribute less to the weight changes in the network than will a positive instance of the same error magnitude.

This is done as follows: Referring back to Figure 6, note that output nodes have been allocated for each of the spatial terms to be learned. For a network such as this, the usual error term in back-propagation is

E = (1/2) Σ_{j,p} (t_{j,p} - o_{j,p})^2   (4)

where j indexes over output nodes, and p indexes over input patterns. We modify this by dividing the error at each output node by some number β_{j,p}, dependent on both the node and the current input pattern:

E = (1/2) Σ_{j,p} ((t_{j,p} - o_{j,p}) / β_{j,p})^2   (5)

The general idea is that for positive instances of some spatial term, β_{j,p} will be 1.0, so that the error is not attenuated. For an implicit negative instance of a term, however, β_{j,p} will be some value Atten, which corresponds to the amount by which the error signals from implicit negatives are to be attenuated. Assume that we are currently viewing input pattern p, a positive instance of "above". Then the target value for the "above" node will be 1.0, while the target values for all others will be 0.0, as they are implicit negatives. Here, β_{above,p} = 1.0, and β_{i,p} = Atten for all i ≠ above. The value Atten = 32.0 was used successfully in the experiments reported here.

6 Conclusion

The system presented here learns perceptually-grounded semantics for the core senses of eight English prepositions, successfully generalizing to scenes involving landmarks to which the system had not been previously exposed. Moreover, the principle of mutual exclusivity is successfully used to allow learning without explicit negative instances, despite the false negatives in the resulting training sets. Current research is directed at extending this work to the case of arbitrarily shaped trajectors, and to handling polysemy. Work is also being directed toward the learning of non-English spatial systems.

References

[Bowerman, 1983] Melissa Bowerman, "How Do Children Avoid Constructing an Overly General Grammar in the Absence of Feedback about What is Not a Sentence?," In Papers and Reports on Child Language Development. Stanford University, 1983.
[Braine, 1971] M. Braine, "On Two Types of Models of the Internalization of Grammars," In D. Slobin, editor, The Ontogenesis of Grammar. Academic Press, 1971.
[Fahlman, 1988] Scott Fahlman, "Faster-Learning Variations on Back Propagation: An Empirical Study," In Proceedings of the 1988 Connectionist Models Summer School. Morgan Kaufmann, 1988.
[Fahlman, 1990] Scott Fahlman, (personal communication), 1990.
[Feldman et al., 1990] J. Feldman, G. Lakoff, A. Stolcke, and S. Weber, "Miniature Language Acquisition: A Touchstone for Cognitive Science," Technical Report TR-90-009, International Computer Science Institute, Berkeley, CA, 1990; also in the Proceedings of the 12th Annual Conference of the Cognitive Science Society, pp. 686-693.
[Johnston and Slobin, 1979] Judith Johnston and Dan Slobin, "The Development of Locative Expressions in English, Italian, Serbo-Croatian and Turkish," Journal of Child Language, 6:529-545, 1979.
[MacWhinney, 1989] Brian MacWhinney, "Competition and Lexical Categorization," In Linguistic Categorization, number 61 in Current Issues in Linguistic Theory. John Benjamins Publishing Co., Amsterdam and Philadelphia, 1989.
[Markman, 1987] Ellen M. Markman, "How Children Constrain the Possible Meanings of Words," In Concepts and conceptual development: Ecological and intellectual factors in categorization. Cambridge University Press, 1987.
[Minsky and Papert, 1988] Marvin Minsky and Seymour Papert, Perceptrons (Expanded Edition), MIT Press, 1988.
[Pinker, 1989] Steven Pinker, Learnability and Cognition: The Acquisition of Argument Structure, MIT Press, 1989.
[Regier, 1990] Terry Regier, "Learning Spatial Terms Without Explicit Negative Evidence," Technical Report 57, International Computer Science Institute, Berkeley, California, November 1990.
[Rumelhart and McClelland, 1986] David Rumelhart and James McClelland, Parallel Distributed Processing: Explorations in the microstructure of cognition, MIT Press, 1986.
[Weber and Stolcke, 1990] Susan Hollbach Weber and Andreas Stolcke, "L0: A Testbed for Miniature Language Acquisition," Technical Report TR-90-010, International Computer Science Institute, Berkeley, CA, 1990.
1991
18
SUBJECT-DEPENDENT CO-OCCURRENCE AND WORD SENSE DISAMBIGUATION Joe A. Guthrie,* Louise Guthrie, Yorick Wilks, and Homa Aidinejad Computing Research LabomtoD, Box 30001 New Mexico State University Las Cruces, NM 88003-0001 ABSTRACT We describe a method for obtaining subject-dependent word sets relative to some (subjecO domain. Using the subject classifications given in the machine-readable ver- sion of Longman's Dictionary of Contemporary English, we established subject-dependent co- occurrence links between words of the defining vocabulary to construct these "neighborhoods". Here, we describe the application of these neigh- borhoods to information retrieval, and present a method of word sense disambiguation based on these co-occurrences, an extension of previous work. INTRODUCTION Word associations have been studied for some time in the fields of psycholinguis- tics (by testing human subjects on words), linguistics (where meaning is often based on how words co-occur with each other), and more recently, by researchers in natural language processing (Church and Hanks, 1990; Hindle and Rooth, 1990; Dagan, 1990; McDonald et al., 1990; Wilks et al., 1990) using statistical measures to identify sets of associated words for use in various natural language processing tasks. One of the tasks where the statistical data on associated words has been used with some success is lexical disambiguation. However, associated word sets gathered * Present address: Mathematics Department, University of Texas at k-:l Paso, El Paso, Tx 79968 from a general corpus may contain words that are associated with many different senses. For example, vocabulary associated with the word "bank" includes "money", "rob", "river" and "sand". In this paper, we describe a method for obtaining subject- dependent associated word sets, or "neigh- borhoods" of a given word, relative to a par- ticular (subject) domain. Using the subject classifications of Longman's Dictionary of Contemporary English (LDOCE), we have established subject-dependent co-occurrence finks between words of the defining vocabu- lary to construct these neighborhoods. We will describe the application of these neigh- borhoods to information reuieval, and present a method of word sense disambigua- tion based on these co-occurrences, an extension of previous work. CO-OCCURRENCE NEIGHBORHOODS Words which occur frequently with a given word may be thought of as forming a "neighborhood" of that word. If we can determine which words (i.e. spelling forms) co-occur frequently with each word sense, we can use these neighborhoods to disambi- guate the word in a given text. Assume that we know of only two of the classic senses of the word bank: 1) A repository for money, and 2) A pile of earth on the edge of a river. We can expect the "money" sense of bank to co-occur frequently with such words 146 as "money", "loan", and "robber", while the "fiver" sense would be more frequently asso- ciated with "river", "bridge", and "earth". In order to disambiguate "bank" in a text, we would produce neighborhoods for each sense, and intersect them with the text, our assumption being that the neighborhood which shared more words with the text would determine the correct sense. Varia- tions of this idea appear in (l.,esk, 1986; McDonald, et al., 1990; Wilks, 1987; 1990; Veronis and Ide, 1990). 
Previously, McDonald and Plate (McDonald et al., 1990; Schvaneveldt, 1990) used the LDOCE definitions as their text, in order to generate co-occurrence data for the 2,187 words in the LDOCE control (defining) vocabulary. They used various methods to apply this data to the problem of disambiguating control vocabulary words as they appear in the LDOCE example sen- tences. In every case however, the neighbor- hood of a given word was a co-occurrence neighborhood for its spelling form over all the definitions in the dictionary. Distinct neighborhoods corresponding to distinct senses had to be obtained by using the words in the sense definition as a core for the neighborhood, and expanding it by com- bining it with additional words from the co- occurrence neighborhoods of the core words. SUBJECT-DEPENDENT NEIGHBORHOODS The study of word co-occurrence in a text is based on the cliche that "one (a word) is known by the company one keeps". We hold that it also makes a difference where that company is kept: since a word may occur with different sets of words in dif- ferent contexts, we construct word neighbor- hoods which depend on the subject of the text in question. We call these, naturally enough, "subject-dependent neighborhoods". A unique feature of the electronic ver- sion of LDOCE is that many of the word sense definitions are marked with a subject field code which tells us which subject area the sense pertains to. For example, the "money"-related senses of bank are marked EC (Economics), and for each such main subject heading, we consider the subset of LDOCE definitions that consists of those sense definitions which sham that subject code. These definitions are then collected into one file, and co-occurrence data for their defining vocabulary is generated. Word x is said to co-occur with word y if x and y appear in the same sense definition; the total number of times they co-occur is denoted as We then construct a 2,187 x 2,187 matrix in which each row and column corresponds to one word of the defining vocabulary, and the entry in the xth row and yth column represents the number of times the xth word co-occurred with the yth word. (This is a symmetric matrix, and therefore it is only necessary to maintain half of it.) We denote by f, the total number of times word x appeared. While many statistics may be used to measure the relatedness of words x and y, we used the function r (x,y ) = f x~ . in this study. We choose a co-occurrence neighborhood of a word x from a set of closely related words. We may choose the ten words with the highest relatedness statis- tic, for instance. Neighborhoods of the word "metal" in the category "Economics" and "Business" are presented below: Table 1. Economics neighborhood of metal Subject Code EC ffi Economics metal idea coin them silver w, al should pocket gold well him Table 2. Business neighborhood of recta/ Subject Code BU = Business metal bear apparatus mouth spring entrance plate tight sheet inside brags 147 In this example, the ~ghborhoods reflect a fundamental difference between the two subject areas. Economics is a more theoretical subject, and therefore its neigh- borhood contains words like "idea", "gold", "silver", and "real", while in the more practi- cal domain of Business, we find the words "brass", "apparatus", "spring", and "plate". We can expect the contrast between subject neighborhoods to be especially great for words with senses that fall into different subject areas. Consider the actual neighbor- hoods of our original example, bank. 
Table 3. Economics neighborhood of bank bank Subject Code EC = Economies account cheque money by into have keep order out pay at put from draw an busy more supply it safe Table 4. Engineering neighborhood of bank bank Subject Code EG = Engineering river wall flood thick earth prevent opposite chair hurry paste spread overflow walk help we throw clay then wide level Notice that even though we included the twenty most closely related words in each neighborhood, they are still unrelated or disjoint, although many of the words which appear in the lists are indeed sugges- tive of the sense or senses which fall under that subject category. In LDOCE, three of the eleven senses of bank are marked with the code EC for Economics, and these represent the "money" senses of the word. It is a quirk of the classification in LDOCE that the "river" senses of bank are not marked with a subject code. This lack of a subject code for a word sense in LDOCE is not uncommon, how- ever, and as was the case with bank, some word senses may have subject codes, while others do not. We label this lack of a sub- ject code the "null code", and form a neigh- borhood of this type of sense by using all sense definitions without code as text. This "null code neighborhood" can reveal the common, or "generic" sense of the word. The twenty most frequently occurring words with bank in definitions with the null subject code form the following neighbor- hood: Table 5. Null Code neighborhood of bank Subject Code NULL = no code assigned bank rob river account lend overflow flood money criminal lake flow snow cliff police shore heap thief borrow along steep earth It is obvious that approximately half of these words are associated with our two main senses of bank-but a new element has crept in: the appearance of four out of eight words which refer to the money sense ("rob", "criminal", "police", and "thief") reveal a sense of bank which did not appear in the EC neighborhood. In the null code definitions, there are quite a few references to the potential for a bank to be robbed. Finally, for comparison, consider a neighborhood for bank which uses all the LDOCE definitions (see McDonald et al., 1990; Schvaneveldt, 1990; Wilks et al., 1990): Table 6. Unrestricted neighborhood of bank Subject Code All bank account bank busy cheque criminal earn flood flow interest lake lend money overflow pay river rob safes and thief wall Only four of these words ("bank", "cam", "sand", and "thief") are not found in 148 the other three neighborhoods, and the number of words in the intersection of this neighborhood with the Economics, Engineering, and Null neighborhoods are: six, four, and eleven, respectively. Recalling that the Economics and Engineering neigh- borhoods are disjoint, this data supports our hypothesis that the subject-dependent neigh- borhoods help us to distinguish senses more easily than neighborhoods which are extracted from the whole dictionary. There are over a hundred main subject field codes in LDOCE, and over three- hundred sub-divisions within these. For example, "medicine-and-biology" is a main subject field (coded "MD"), and has twenty- two sub-divisions such as "anatomy" and "biochemistry". These main codes and their sub-divisions constitute the only two levels in the LDOCE subject code hierarchy, and main codes such as "golf' and "sports" are not related to each other. Cknrently, we use only the main codes when we are construct- ing a subject-dependent neighborhood. 
But even this division of the definition text is fine enough so that, given a word and a sub- ject code, the word may not appear in the definitions which have that subject code at all. To overcome this problem, we have adopted a restructured hierarchy of the sub- ject codes, as developed b~y Slator (1988). This tree structure has a node at the top, representing all the definitions. At the next level are six fundamental categories such as "science" and "transportation", as well as the null code. These clusters are further sub- divided so that some main codes become sub-divisions of others ("golf' becomes a sub-division of "sports", etc.). The max- imum depth of this tree is five levels. If the word for which we want to pro- duce a neighborhood appears too infre- quently in definitions with a given code, we travel up the hierarchy and expand the text under consideration until we have reached a point where the word appears frequently enough to allow the neighborhood to be con- structed. The worst case scenario would be one in which we had traveled all the way to the top of the hierarchy and used all the definitions as the text, only to wind up with the same co-occurrence neighborhoods as did McDonald and Plate (Schvaneveldt, 1990; Wilks et al., 1990)! There are certain drawbacks in using LDOCE to construct the subject-dependent neighborhoods, however, the amount of text in LDOCE about any one subject area is rather limited, is comprised of a control vocabulary for dictionary definitions only, and uses sample sentences which were con- cocted with non-native English speakers in mind. In the next phase of our research, large corpora consisting of actual documents from a given subject area will be used, in order to obtain neighborhoods which more accurately reflect the sorts of texts which will be used in applications. In the future, these neigh- borhoods may replace those constructed from LDOCE, while leaving the subject code hierarchy and various applications intact. WORD SENSE DISAMBIGUATION In this section, we describe an applica- tion of subject-dependent co-occurrence neighborhoods to the problem of word sense disambiguation. The subject-dependent co- occurrence neighborhoods are used as build- ing blocks for the neighborhoods used in disambiguation. For each of the subject codes (including the null code) which appear with a word sense to be disambiguated, we intersect the corresponding subject- dependent co-occurrence neighborhood with the text being considered (the size of text can vary from a sentence to a paragraph). The intersection must contain a pre-selected minimum number of words to be considered. But if none of the neighborhoods intersect at greater than this threshold level, we replace the neighborhood N by the neighborhood N(1), which consists of N together with the first word from each neighborhood of words in N, using the same subject code. If neces- sary, we add the second most strongly asso- ciated word for each of the words in the ori- ginal neighborhood N, forming the neighbor- 149 hood N(2). We continue this process until a subject-dependent co-occurrence neighbor- hood has intersection above the threshold level. Then, the sense or senses with this subject code is selected. If more than one sense has the selected code, we use their definitions as cores to build distinguishing neighborhoods for them. These are again intersected with the text to determine the correct sense. The following two examples illustrate this method. 
Note that some of the neigh- borhoods differ from those given earlier since the text used to construct these neigh- borhoods includes any example sentences which may occur in the sense definitions. Those neighborhoods presented earlier ignored the example sentences. In each example, we attempt to disambiguate the word "bank" in a sentence which appears as an example sentence in the Collins COBUILD English Language Dictionary. The disambiguation consists of choosing the correct sense of "bank" from among the thir- teen senses given in LDOCE. These senses are summarized below. bank(l) : [ ] : land along the side of a fiver, lake, etc. bank(2) : [ ] : earth which is heaped up in a field or garden. bank(3) : [ ] : a mass of snow, clouds, mud, etc. bank(4) : [AU] : a slope made at bends in a road or race-track. bank(5) : [ ] : a sandbank in a river, etc. bank(6) : [ALl] : to move a ear or aircraft with one side higher than the other. bank('/) : [ ] : a row, especially of oars in an ancient boat or keys on a typewriter. bank(8) : [EC] : a place in which money is kept and paid out on demand. bank(9) : [MD] : a place where something is held ready for use, such as blood. bank(10) : [GB] : (a person who keeps) a supply of money or pieces for payment in a gam- bling game. bank(ll) : [ ] : break the bank is to win all the money in bank(10). bank(12) : [EC] : to put or keep (money) in a bank. bank(13) : [EC] : to keep ones money in a bank. Example 1. The sentence is 'Whe air- craft turned, banking slightly." The neighborhoods of "bank" for the five relevant subject codes are given below. Table 7. Automotive neighborhood of bank Subject Code ALl = Automotive bank make go up move so they high also round car side turn road aircraft slope bend safe Table 8. Economics neighborhood of bank bank Subject Code EC = Economics have it person out into take money put write keep pay order another paper draw supply account safe sum cheque Table 9. Gambling neighborhood of bank bank Subject Code GB = Gambling person use money piece play keep pay game various supply chance Table 10. Medical neighborhood of bank Subject Code MD - Medicine and Biology bank something use place hold medicine ready blood human origin organ store hospital tream~ent product comb 150 Table 11. Null Code neighborhood of bank bank Subject Code NULL = No code assigned game earth stone boat fiver bar snow lake sand shore mud framework flood cliff heap harbor ocean parallel overflow clerk The AU neighborhood contains two words, "aircraft" and "turn", which also appear in the sentence. Note that we con- sider all forms of tum (tumed, tuming, etc.) to match "turn". Since none of the other neighborhoods have any words in common with the sentence, and since our threshold value for this short sentence is 2, AU is selected as the subject code. We must now decide between the two senses which have this code. At this point we remove the function words from the sense definitions and replace each remaining word by its root form. We obtain the following neighborhoods. Table 12. Words in sense 4 of bank Definition bank(4) slope make bend road so they safe car go round Table 13. Words in sense 6 of bank Definition bank(6) car aircraft move side high make turn Since bank(4) has no words in com- mon with the sentence, and bank(6) has two Ctum" and "aircraft"), bank(6) is selected. This is indeed the sense of "bank" used in the sentence. Example 2. The sentence is "We got a bank loan to buy a car." 
The original neighborhoods of "bank" are, of course, the same as in Example 1. The threshold is again 2. None of the neighborhoods has more than one word in common with the sentence, so the iterative process of enlarg- ing the neighborhoods is used. The AU neighborhood is expanded to include "engine" since it is the first word in the AU neighborhood of "make". The first word in the AU neighborhood of "up" is "increase", so "increase" is added to the neighborhood. If the word to be added already appears in the neighborhood of "bank", no word is added. On the fifteenth iteration, the EC neighborhood contains "get" and "buy". None of the other neighborhoods have more than one word in common with the sentence, so EC is selected as the subject code. Definitions 8, 12, and 13 of bank all have the EC subject code, so their definitions are used as cores to build neighborhoods to allow us to choose one of them. After twenty-three iterations, bank(8) is selected. Experiments are underway to test this method and variations of it on large numbers of sentences so that its effectiveness may be compared with other disambiguation tech- niques. Results of these experiments will be reported elsewhere. FURTHER APPUCATIONS Several applications of subject- dependent neighborhoods in addition to word-sense disambiguation are being pur- sued, as well. For information retrieval, pre- viously constructed neighborhoods relevant to the subject area can be used to expand a query and the target (titles, key words, etc.) to include more words in the intersection, and improve both recall and precision. Another application is the determination of the subject area of a text. Since the effec- tiveness of searching for key words to deter- mine the topic of a text is limited by the choice of the particular list of key words, and the fact that the text may use synonyms or refer to the concept the key word represents without using it (for example by using a pronoun in its place), we could look for word associations (thereby involving more words in the process and making it less vulnerable to the above problems), 151 rather than simply searching for key words indicative of a topic. Neighborhoods of words in the text could be constructed for each of the six fundamental categories, and intersected with the surrounding words in the text. After choosing the category with the greatest intersection, we would then traverse the subject code tree downward to arrive at a more specific code, stopping at any point where there is not enough data to allow us to choose one code over the others at that level. Once a subject code is selected for a text, it could be used as a context for word-sense disambiguation. CONCLUSION Although the words in the LDOCE definitions constitute a small text (almost one million words, compared with the mega-texts used in other co-occurrence stu- dies), the unique feature of subject codes which can be used to distinguish many definitions, and LDOCE's small control vocabulary (2,187 words) make it a useful corpus for obtaining co-occurrence data. The development of techniques for informa- tion retrieval and word-sense disambiguation based on these subject-dependent co- occurrence neighborhoods is very promising indeed. ACKNOWLEDGEMENTS This research was supported by the New Mexico State University Computing Research Laboratory through NSF Grant No. IRI-8811108. Grateful acknowledgement is accorded to all the members of the CRL Natural Language Group for their comments and suggestions. 
REFERENCES Church, Kenneth W., and Patrick Hanks (1990). Word Association Norms, Mutual Infor- mation, and Lexicography. Computational Linguistics, 16, 1, pp.22-29. Dagan, Ido, and Alon Itai (1990). Process- ing Large Corpora for Reference Resolution. Proceedings of the 13th International Conference on Computational Linguistics (COLING-90), Helsinki, Finland, 3, pp.330-332. Hindle, Donald, and Mats Rooth (1990). Structural Ambiguity and Lexical Relations. Proceedings of the DARPA Speech and Natural Language Workshop. Lesk, Michael E. (1986). Automatic Sense Disambiguation Using Machine Readable Dic- tionaries: How to Tell a Pine Cone from an Ice Cream Cone. Proceedings of the ACM SIGDOC Conference, Toronto, Ontario. McDonald, James E., Tony Plate, and Roger W. Schvaneveldt (1990). Using Pathfinder to extract semantic information from text. In R. W. Schvaneveldt (ed.), Pathfinder Associative Networks: Studies in Knowledge Organization. Norwood, NJ: Ablex. Schvaneveldt, Roger W. (1990). Path- finder Associative Networks: Studies in Knowledge Organization. New Jersey: Ablex. Slator, Brian M. 0988). Constructing Contextually Organized Lexical Semantic Knowledge-bases. Proceedings of the Third Annual Rocky Mountain Conference on Artificial Intelligence (RMCAI-88), Denver, CO, pp.142- 148. Veronis, Jean., Nancy Ide (1990). Very Large Neural Networks for Word-sense Disambi- guation. COLING '90, 389-394. Wilks, Yorick A., Dan C. Fass, Cheng- ming Guo, James E. McDonald, Tony Plate, and Brian M. Slator (1987). A Tractable Machine Dictionary as a Resource for Computational Semantics. Memorandum in Computer and Cog- nitive Science, MCCS-87-105, Computing Research Laboratory, New Mexico State Univer- sity. In Branimir Boguraev and Ted Briscoe (eds.), Computational IJ.xicography for Natural Language Processing. Harlow, Essex, England: Longman Group Limited. Wilk.% Ymick A., Dan C. Fass, Cheng- ming Guo, James E. McDonald, Tony Plate, and Brian M. Slator (1990). Prodding Machine Tractable Dictionary Tools. Journal of Machine Translation, 2. Also to appear in Theoretical and Computational Issues in Lexical Semantics , J. Pnstejovsky (~!.) 152
1991
19
INCLUSION, DISJOINTNESS AND CHOICE: THE LOGIC OF LINGUISTIC CLASSIFICATION Bob Carpenter Computational Linguistics Program Philosophy Department Carnegie Mellon University Pittsburgh, PA 15213 carp~caesar.lcl.cmu.edu Carl Pollard Linguistics Department Ohio Sate University Columbus, OH 43210 pollard~hpuxa.ircc.ohio-st ate.edu Abstract We investigate the logical structure of concepts generated by conjunction and disjunction over a monotonic multiple inheritance network where concept nodes represent linguistic categories and links indicate basic inclusion (ISA) and disjoint- hess (ISNOTA) relations. We model the distinction between primitive and defined concepts as well as between closed- and open-world reasoning. We ap- ply our logical analysis to the sort inheritance and unification system of HPSG and also to classifica- tion in systemic choice systems. Introduction Our focus in this paper is a stripped-down mono- tonic inheritance-based knowledge representation system which can be applied directly to provide a clean declarative semantics for Halliday's sys- temic choice systems (see Winograd 1983, Mel- lish 1988, Kress 1976) and the inheritance module of head-driven phrase-structure grammar (HPSG) (Pollard and Sag 1987, Pollard in press). Our in- heritance networks are constructed from only the most rudimentary primitives: basic concepts and ISA and ISNOTA links. By applying general al- gebraic techniques, we show how to generate a meet semilattice whose nodes correspond to con- sistent conjunctions of basic concepts and where meet corresponds to conjunction. We also show how to embed this result in a distributive lattice where the elements correspond to arbitrary con- junctions and disjunctions of basic concepts and where meet and join correspond to conjunction and disjunction, respectively. While we do not consider either role- or attribute-based reasoning in this paper, our constructions are directly appli- cable as a front-end for the combined attribute- and concept-based formalisms of Ait-Kaci (1986), Nebel and Smolka (1989), Carpenter (1990), Car- penter, Pollard and Franz (1991) and Pollard (in press). The fact that terms in distributive lattices have disjunctive normal forms allows us to factor our construction into two stages: we begin with the consistent conjunctive concepts generated from our primitive concepts and then form arbitrary disjunctions of these conjunctions. The conjunc- tive construction is useful on its own as its result is a semilattice where meet corresponds to conjunc- tion. In particular, the conjunctive semilattice is ideally suited to conjunctive logics such as those employed for unification, as in HPSG. We will consider the distinction between prim- itive and defined concepts, a well-known distinc- tion expressible in terminological reasoning sys- tems such as KL-ONE (Brachman 1979, Brach- man and Schmolze 1985), and its descendants (such as LOOM (MacGregor" 1988) or CLASSIC (Borgida et al. 1989)). We also tackle the va- riety of closed-world reasoning that is necessary for modeling constraint-based grammars such as HPSG. A similar form of closed-world reasoning is supported by LOOM with the disjoint-covering construction. One of the benefits of our notion of inheritance is that it allows us to express the natural seman- tics of both systemic choice systems and HPSG in- heritance hierarchies using basic concepts and ISA and ISNOTA links. 
In particular, we will see how choice systems correspond to ISNOTA reasoning, multiple choices can be captured in our conjunc- tive construction and how dependent choices can be represented by inheritance. One result of our construction will be a demonstration that the sys- temic classification and ttPSG systems are variant graphical representations of the same kind of un- derlying information regarding inclusion, disjoint- ness and choice. Inheritance Networks Our inheritance networks are particularly simple, being constructed from basic concepts and two kinds of "inheritance" links. Definition 1 (Inheritance Network) An inheritance net is a tuple (BasConc, ISA,ISNOTA) lohere: • BasConc: a finite set of basic concepts • ISA C BasConc x BasConc: the basic inclu- sion relation • ISNOTA C_ BasConc × BasConc: the basic dis- jointness relation The interpretation of a net is straightforward: each basic concept is thought of as representing a set of empirical objects, where P ISA Q means that all P's are Q's and P ISNOTA Q means that no P's are Q's. Our primary interest is in the logical relationships between concepts rather than in the actual extensions of the concepts them- selves. This is in accord with standard linguis- tic practice, where the focus is on types of utter- ances rather than utterance tokens. An example of an inheritance network is given in Figure 1. We have followed the standard convention of placing the more specific elements toward the bottom of the network, with arrows indicating the direction- ality of the ISA links (for instance, d ISA f and b ISNOTA C). Y /\ d e / \ / \ a b I c Figure 1: Inheritance Hierarchy We can automatically deduce all of the inclusion and disjointness relations that follow from the ba- sic ones (Carpenter and Thomason 1990). Definition 2 (Inclusion/Disjointness) The inclusion relation mA* C BasConc × BasConc is the smallest such that: • P ISA* P • /f P ISA Q and Q ISA* R then P ISA* R (Reflexive) (Transitive) The disjointness relation ISNOTA* C BasConc × BasConc is the smallest such that: • /f P ISNOTA Q or Q ISNOTA P then P ISNOTA* Q • if P ISA* Q and Q ISNOTA* R then P ISNOTA* R (Symmetry) (Chaining) These derived inclusion and disjointness relations express all of the information that follows from the basic relations. In particular, ISA* is the smallest pre-order extending ISA. For convenience, we al- low concepts P such that P ISNOTA* P; any such inconsistent concepts are automatically filtered out by the conjunctive construction. Similarly, we allow concepts P and Q such that P ISA* Q and Q ISh* P. In this case, P and Q are merged during the conjunctive construction so that they behave identically. Conjunctions A conjunctive concept is modeled as a set P C BasConc of basic concepts. A conjunctive concept P corresponds to the conjunction of the concepts P E P; an object is a P if and only if it is a P for every P E P. But arbitrary sets of basic concepts are not good models for conjunctive concepts; we need to identify conjunctive concepts which con- vey identical information and also remove those conjunctive concepts which are inconsistent. We address the first issue by requiring conjunctive concepts to be closed under inheritance and the second by removing any concepts which contain a pair of disjoint basic concepts. Definition 3 (Conjunctive Concept) A set P C C_ BasConc is a conjunctive concept if: • ifP E P and P ISA* P' then P' E P • no P, P~ E P are such that P ISNOTA* P~ Let ConjConc be the set of conjunctive concepts. 
There is a natural inclusion or specificity ordering on our conjunctive concepts; if P ⊆ Q then every object which can be classified as a Q can also be classified as a P. The conjunctive concepts derived from the inheritance net in Figure 1 are displayed in Figure 2, where we have P ⊆ Q for every derived "ISA" arc Q → P.

Figure 2: Conjunctive Concept Ordering

Defined Concepts

So far, we have considered only primitive basic concepts. A defined basic concept is taken to be fully determined by its set of superconcepts (in the general terminological case with roles, restrictions on role values can also contribute to the definition of a concept (Brachman and Schmolze 1985)). In particular, a defined basic concept P is assumed to carry the same information as the conjunction of all of the concepts P' such that P ISA P'. For example, consider the basic concept b in Figure 1. The conjunctive concept {b, d, e, f} is strictly more informative than {d, e, f}; the primitiveness of b allows for the possibility that there is information to be gained from knowing that an object is a b that can not be gained from knowing that it is both a d and an e. On the other hand, if we assume that b is defined, then the presence of d and e in a conjunctive concept should ensure the presence of b, thus eliminating the sets {d, e, f}, {c, d, e, f} and {a, d, e, f} from consideration, as they are equivalent to the conjunctive concepts {b, d, e, f}, {b, c, d, e, f} and {a, b, d, e, f} respectively. In the primitive case, being a d and an e is a necessary condition for being a b; in the defined case, being a d and an e is also a sufficient condition for being a b.

In general, suppose that DefConc ⊆ BasConc is the subset of defined concepts. To account for this new information, we add the following additional clause to the conditions that P must satisfy to be a conjunctive concept:

(1) If P ∈ DefConc and {P' | P ≠ P' and P ISA* P'} ⊆ P then P ∈ P.

With the example in Figure 1 and the assumption that DefConc = {b, f}, we generate the conjunctive concepts in Figure 3. We have adopted the convention of only displaying the maximally specific primitive concepts of a conjunctive concept, as the other basic concepts can be determined from these.

Figure 3: Conjunctive Construction with Defined Concepts

Note that the assumption that f, the most general basic concept, is defined means that every conjunctive concept must contain f, because the set {P | f ≠ P and f ISA P} is empty and thus a subset of every conjunctive concept. Thus {} is equivalent to {f} in terms of conjunctive information, so that every object is classified as an f.

The set of conjunctive concepts ordered by reverse set inclusion has the pleasant property of being closed under consistent meets, where the meet operation represents conjunction ("unification"). More precisely, a set 𝒫 ⊆ ConjConc of conjunctive concepts is consistent if there is a conjunctive concept P which contains all of the concepts contained in the conjunctive concepts in 𝒫, so that ∪𝒫 ⊆ P. The following theorem states that for every consistent set 𝒫 of concepts, there is a least P such that P ⊇ ∪𝒫. This least P is written ⊓𝒫 and called the meet of 𝒫.

Figure 4: Systemic Choice Network (agr branches to num and per; num: plu | sng; per: 3rd | 1st; gen: msc | fem | neu)
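Continuing the sketch started above (and reusing basconc, isa_s and isnota_s from it), the clause (1) closure and the meet can both be computed by closing a set of basic concepts under inheritance and concept definition; the remark after Theorem 4 below makes the same point. The helper names and the fixpoint loop are illustrative additions.

```python
def close_concept(seed, basconc, isa_s, isnota_s, defconc=frozenset()):
    """Close `seed` under ISA* and clause (1); return None if inconsistent."""
    closed = set(seed)
    changed = True
    while changed:
        changed = False
        closed |= {q for p in list(closed) for q in basconc if (p, q) in isa_s}
        for p in defconc:
            definers = {q for q in basconc if q != p and (p, q) in isa_s}
            if p not in closed and definers <= closed:   # clause (1): definers all present
                closed.add(p)
                changed = True
    if any((p, q) in isnota_s for p in closed for q in closed):
        return None
    return frozenset(closed)

def meet(concepts, basconc, isa_s, isnota_s, defconc=frozenset()):
    """Meet of conjunctive concepts: close their union; None if the set is inconsistent."""
    return close_concept(set().union(*concepts), basconc, isa_s, isnota_s, defconc)

defconc = {"b", "f"}          # b and f taken as defined, as in the Figure 3 discussion
P1 = close_concept({"d"}, basconc, isa_s, isnota_s, defconc)
P2 = close_concept({"e"}, basconc, isa_s, isnota_s, defconc)
print(meet([P1, P2], basconc, isa_s, isnota_s, defconc))
# frozenset({'b', 'd', 'e', 'f'}): the presence of d and e forces the defined concept b
```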
Theorem 4 The meet in (ConjConc, ⊇) for a consistent set 𝒫 ⊆ ConjConc of conjunctive concepts is given by:

⊓𝒫 = ⋂{P′ ∈ ConjConc | P′ ⊇ P for each P ∈ 𝒫}
   = ⋂{P′ ∈ ConjConc | P′ ⊇ ⋃𝒫}
   = {P ∈ BasConc | for every P′ ∈ ConjConc, P′ ⊇ ⋃𝒫 implies P ∈ P′}

Proof: This is an immediate consequence of the fact that ConjConc is closed under arbitrary intersections.

Another way to generate the meet of a collection of conjunctive concepts is to close their union under inheritance and concept definition. It should be observed that joins (intersections), while always existing, in general represent only informational generalizations, not necessarily disjunctions.

Systemic Choice Systems

Mellish (1988) showed how the concepts expressible using a systemic choice network such as that found in Figure 4 can be embedded into the lattice of first-order terms, with conjunction represented by unification. Our characterization of the concepts expressible in a systemic net instead relies on the translation of systemic notation into an inheritance network with ISA and ISNOTA links. The resulting conjunctive concepts correspond to the concepts that can be expressed in the systemic net.

An example of a systemic choice network, in the notation of Mellish (1988), is Figure 4. The connective |, of which there are three in the diagram, signals disjoint alternatives; for instance, the connective for gender is taken to indicate that a gender must be exactly one of masculine, feminine or neuter. The connective }, of which there is one before gender, indicates necessary preconditions for a choice; in this case, a gender is only chosen if the number is singular and the person is third. Finally, the connective {, of which there is one labeled agr, indicates that a choice for an agreement value requires a choice for both number and person.

We construct an inheritance hierarchy from a systemic network by taking a basic primitive concept for every choice in the network. The choices in Figure 4 are those items in bold face; the italicized items simply label connectives and are only for convenience (alternatively, we could take the italicized elements to be defined basic concepts). The ISNOTA relation between basic concepts is defined so that P ISNOTA Q if P and Q are connected by the choice connective |. For example, we have 3rd ISNOTA 1st and msc ISNOTA neu. Finally, the ISA relation is defined so that if P is one of the choices for a connective which has a precondition P′ attached to it, then we include P ISA P′. For instance, we have msc ISA sng and msc ISA 3rd.

In Figure 5, we display the conjunctive concepts generated by the inheritance net stemming from the choice system in Figure 4.

[Figure 5: Systemic Choices — {}; {1st}, {3rd}, {sng}, {plu}; {1st,sng}, {3rd,sng}, {1st,plu}, {3rd,plu}; {3rd,sng,msc}, {3rd,sng,fem}, {3rd,sng,neu}.]

A fully determined choice in a choice system corresponds to a maximally specific conjunctive concept, of which there are six in Figure 5.

Sort Inheritance in HPSG

An example of an HPSG sort inheritance hierarchy which represents the same information as the systemic choice system in Figure 4, in the notation of Pollard and Sag (1987), is given in Figure 6. The basic principle behind the HPSG notation is that the bold elements correspond to basic concepts, while the boxed elements correspond to partitions, so called because the concepts in a partition are both pairwise disjoint and exhaustive.
In terms of an inheritance network, the elements of a partition (those concepts directly below the partition in the diagram) are related by basic ISNOTA links. For instance, we would have plu ISNOTA sing. Each partition may also have dependencies which must be fulfilled for the choice to be made; in our case, before an element of the gender partition is chosen, singular must be chosen for number and third for person. These dependencies generate our basic IsA relation. For instance, we must have plu ISA agr and fern ISA sng. Carrying out this translation of the HPSG notation into an inheritance net pro- duces to the same result as the translation of the systemic choice system in Figure 4, thus generat- ing the conjunctive concept hierarchy in Figure 5. In HPSG, it is useful to allow sorts to be de- fined by conjunction. An example is main A base A strict-trans, which classifies the inputs to the passivization lexical rule (Pollard and Sag 1987:211). Translating the example to our sys- tem produces a defined conjunctive concept cor- responding to the conjunction of those three ba- sic concepts. On the other hand, a primitive sort such as aux cannot be defined as the conjunction of the sorts from which it inherits, namely verb and intrans-raising, because auxiliaries are not the only intransitive raising verbs. In the hierar- chy in Figure 6, it is most natural to consider the basic concept agr to be defined rather than prim- itive; it could simply be eliminated with the same effect. However, in the context of a grammar, agr would be one of many possible basic sorts (others being boolean, verb-form, etc.) and would thus not be eliminable. Disjunctive Concepts While meets in the conjunctive concept order- ing represent conjunction, joins (intersections) do not represent disjunction. For instance, {msc} U {fern} = {msc} U {neu} = {3rd, sng}, but the information that an object is masculine or fem- inine is different than the information that it is masculine or neuter, and more specific than the information that it is simply third-singular. The granularity of the original network dramatically affects the disjunctive concepts which can be rep- 13 agr plu sng 3rd 1st rose fern neu Figure 6: HPSG Inheritance Network Notation resented (see Borgida and Etherington 1989). For example, we could have partitioned gender into animate and neu concepts and then partitioned the animate concept into msc and fern. This move would distinguish the join of msc and fern from the join of msc and neu. To complete our study of the logic of sim- ple inheritance, we employ a well-known lattice- theoretic technique for embedding a partial order into a distributive lattice; when applied to con- junctive concept hierarchies, the result is a dis- tributive lattice where concepts correspond to ar- bitrary conjunctions and disjunctions of basic con- cepts with joins and meets representing disjunc- tion and conjunction. We model a disjunctive concept as a set 79 C ConjConc of conjunctive concepts interpreted dis- junctively; an object is classified as a 79 just in case it can be classified as a P for some P E 79. As with the conjunctive concepts, we identify disjunctive concepts which convey the same information. In this case, we can add more specific concepts to a disjunctive concept 79 without affecting its infor- mation content. Definition 5 (Disjunctive Concepts) A sub- set 7 9 C ConjConc of conjunctive concepts is said to be a disjunctive concept if whenever P,Q E ConjConc are such that Q D P and P E 7 9 then qe79. 
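Definition 5 can be prototyped in a few lines. The sketch below is illustrative only: the conjunctive concepts are transcribed from Figure 5, and the helper names are invented for the example. It shows how the disjunctive construction keeps "masculine or feminine" distinct from "masculine or neuter", even though both pairs have the same join {3rd, sng} in ConjConc.

```python
# Conjunctive concepts of the agreement example, read off Figure 5.
CONJCONC = [frozenset(s) for s in (
    [], ["1st"], ["3rd"], ["sng"], ["plu"],
    ["1st", "sng"], ["3rd", "sng"], ["1st", "plu"], ["3rd", "plu"],
    ["3rd", "sng", "msc"], ["3rd", "sng", "fem"], ["3rd", "sng", "neu"],
)]

def is_disjunctive(ps):
    """Definition 5: closed under adding more specific conjunctive concepts."""
    return all(q in ps for p in ps for q in CONJCONC if q >= p)

def disj(generators):
    """Least disjunctive concept containing the given conjunctive concepts."""
    return frozenset(q for q in CONJCONC for p in generators if q >= p)

msc, fem, neu = (frozenset({"3rd", "sng", g}) for g in ("msc", "fem", "neu"))

# In ConjConc, the join (set intersection) loses the distinction:
assert msc & fem == msc & neu == frozenset({"3rd", "sng"})

# In DisjConc, "msc or fem" and "msc or neu" remain different concepts:
assert disj([msc, fem]) != disj([msc, neu])

# Unions and intersections of disjunctive concepts are again disjunctive,
# which is what gives DisjConc its lattice structure (discussed below).
assert is_disjunctive(disj([msc, fem]) | disj([neu]))
assert is_disjunctive(disj([msc, fem]) & disj([fem, neu]))
```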
Let DisjConc be the collection of disjunctive con- cepts. The inclusion ordering between disjunctive con- cepts represents specificity, but this time if 79 C_ Q then 7 ~ is at least as specific as Q, as Q admits as many possibilities as 79. Note that the upper- closed sets of a partial ordering form a distributive lattice when ordered by inclusion, since it is a sub- lattice of a powerset lattice. Proposition 6 The structure (DisjConc, C) is a distributive lattice. Unions (joins) represent disjunctions in in DisjConc. Likewise, intersections (meets) repre- sent conjunctions. Furthermore, the function ¢ that maps a conjunctive concept P to the dis- junctive concept ¢(P) = {P' I P' _D P} is an embedding of ConjConc into DisjConc that pre- serves existing meets, so that ¢(P n P') = ¢(P) n ¢(P'). Note that this embedding coincides with the standard embedding of a domain into its up- per (Smyth) powerdomain (Gunter and Scott in press), with the only difference being that we have reversed the orders of both domains (with the in- formationally more specific elements toward the bottom), as is conventional in inheritance net- works. More than 30 disjunctive concepts result from the conjunctive concepts in Figure 3, so we will not provide a graphic display of the results of the disjunctive construction applied to a realistic ex- ample (for examples of the general construction, see Davey and Priestley 1990). Closed World Reasoning In HPSG, Pollard and Sag (1987) partition the concept sign into two sub-concepts, phrase and 14 word. This arrangement generates the conjunc- tive concepts {sign}, {phrase} and {word}. Applying the disjunctive construction to this result, though, gives us a disjunctive concept {{word}, {phrase}} which is strictly more infor- mative than {{sign}}. This distinction demon- strates the open-world nature of our construction; it allows for the possibility of signs which are neither words nor phrases. This form of open- world reasoning is the standard in terminologi- cal reasoning systems such as KL-ONE or CLAS- SIC, though LOOM provides a notion of disjoint- covering which provides the kind of closed-world reasoning we require. In dealing with linguistic grammars, on the other hand, we clearly wish to exclude any expres- sion from signhood that is neither a phrase nor a word; these choices are meant to be exhaustive in a grammar. The fact that signs can be either words or phrases is explicit; what we need is a way to say that nothing else can be a sign. In general, we require a set ClosConc C BasConc of closed concepts to be specified. When con- structing the disjunctive concepts, we identify a closed concept with the disjunction of its imme- diate subconcepts. In particular, we can replace every occurence of a closed concept with the dis- junction of its immediate subconcepts, so that {P} and {P' [ P' IsA P} are identified. Closed con- cepts are treated dually to defined concepts; a de- fined concept is taken to be the conjunction of its immediate superconcepts, while a closed concept is identified with the disjunction of its immediate subconcepts. The simplest way to achieve this ef- fect is to generate the disjunctive concepts from the subset of conjunctive concepts which contain at least one subconcept of every closed concept which they contain. 
This leads to the following restriction:

(2) D ∈ DisjConc only if for every P ∈ D and every closed concept Q ∈ P ∩ ClosConc there is some Q′ ∈ P such that Q′ ISA Q

Thus if sign ∈ ClosConc, we would only consider the conjunctive concepts {phrase} and {word}; the concept {sign} contains a closed concept, sign, but none of its subconcepts. Consequently, the set {{sign}} is no longer a disjunctive concept, while {{phrase}, {word}} would be allowed (assuming for this example that phrase and word are not themselves closed).

In grammar development, it will often be the case that all but the maximally specific concepts are closed. In this case, the disjunctive construction will produce the boolean algebra with maximally specific conjunctive concepts as atoms. Such maximally specific conjunctive concepts were simply taken as primitive by King (1989), who generated a boolean algebra of types corresponding to disjunctions of maximal concepts.

Acknowledgements

We would like to thank Bob Kasper for invaluable suggestions.

References

Aït-Kaci, Hassan (1986). An algebraic semantics approach to the effective resolution of type equations. Theoretical Computer Science, 45:293-351.

Borgida, Alexander; Brachman, Ronald J.; McGuinness, Deborah L. and Resnick, Lori A. (1989). CLASSIC: A structural data model for objects. In Proceedings of the SIGMOD International Conference on Management of Data, Portland, Oregon.

Borgida, Alex and Etherington, David W. (1989). Disjunction in inheritance hierarchies. In Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning, pages 33-43. Morgan Kaufmann.

Brachman, Ronald J. (1979). On the epistemological status of semantic networks. In Findler, N., editor, Associative Networks: Representation and Use of Knowledge by Computers, pages 3-50. Academic Press.

Brachman, Ronald J. and Schmolze, James G. (1985). An overview of the KL-ONE knowledge representation system. Cognitive Science, 9:171-216.

Carpenter, Bob (1990). Typed feature structures: Inheritance, (in)equations and extensionality. In Proceedings of the First International Workshop on Inheritance in Natural Language Processing, 9-13, Tilburg, The Netherlands.

Carpenter, Bob; Pollard, Carl and Franz, Alex (1991). The specification and implementation of constraint-based unification grammars. In Proceedings of the Second International Workshop on Parsing Technology, Cancun, Mexico.

Carpenter, Bob and Thomason, Richmond (1990). Inheritance theory and path-based reasoning: An introduction. In Kyburg, Jr., Henry E.; Loui, Ronald P. and Carlson, Greg N. (eds.), Defeasible Reasoning and Knowledge Representation, volume 5 of Studies in Cognitive Systems, 309-344. Kluwer.

Davey, B.A. and Priestley, H.A. (1990). Introduction to Lattices and Order. Cambridge University Press.

Flickinger, Daniel; Pollard, Carl J. and Wasow, Thomas (1985). Structure-sharing in lexical representation. In Proceedings of the 23rd Annual Conference of the Association for Computational Linguistics.

Gunter, Carl and Scott, Dana S. (in press). Semantic domains. In Theoretical Computer Science. North-Holland.

Kasper, Robert T. (1989). Unification and classification: An experiment in information-based parsing. In First International Workshop on Parsing Technologies, 1-7. Pittsburgh.

King, Paul (1989). A Logical Formalism for Head-Driven Phrase Structure Grammar. PhD thesis, University of Manchester.

Kress, Gunther (ed.) (1976). Halliday: System and Function in Language. Oxford University Press.
MacGregor, Robert (1988). A deductive pattern matcher. In Proceedings of the 1988 National Conference on Artificial Intelligence, pages 403-408, St. Paul, Minnesota.

Mellish, Christopher S. (1988). Implementing systemic classification via unification. Computational Linguistics, 14:40-51.

Nebel, Bernhard and Smolka, Gert (1989). Representation and reasoning with attributive descriptions. IWBS Report 81, IBM Deutschland GmbH, Stuttgart.

Pollard, Carl J. (in press). Sorts in unification-based grammar and what they mean. In Pinkal, M. and Gregor, B., editors, Unification in Natural Language Analysis. MIT Press.

Pollard, Carl J. and Sag, Ivan A. (1987). Information-Based Syntax and Semantics: Volume I - Fundamentals, volume 13 of CSLI Lecture Notes. Chicago University Press.

Winograd, Terry (1983). Language as a Cognitive Process: Volume I - Syntax. Addison-Wesley.
A SYSTEM FOR TRANSLATING LOCATIVE PREPOSITIONS FROM ENGLISH INTO FRENCH*

Nathalie Japkowicz
Department of Computer Science
Rutgers University
New Brunswick, NJ 08903
nat@yoko.rutgers.edu

Janyce M. Wiebe
Department of Computer Science
University of Toronto
Toronto, Canada M5S 1A4
wiebe@cs.toronto.edu

*The research described in this paper was conducted at the University of Toronto.

Abstract

Machine translation of locative prepositions is not straightforward, even between closely related languages. This paper discusses a system for translating locative prepositions between English and French. The system is based on the premises that English and French do not always conceptualize objects in the same way, and that this accounts for the major differences in the ways that locative prepositions are used in these languages. The paper introduces knowledge representations of conceptualizations of objects, and a method for translating prepositions based on these conceptual representations.

1 Introduction

This paper presents an analysis of the differences in the uses of locative prepositions in two languages, and then describes an automatic system of translation that is based on this analysis.

Our research originated from the observation that, even between two closely related languages such as English and French, the locative prepositions of even simple sentences do not seem to be translated from one language to the other in a clearly systematic and coherent way. However, the translation becomes more coherent if we introduce Herskovits' idea of the ideal meaning of a preposition (Herskovits 1986) and Lakoff's idea of Idealized Cognitive Models (ICM's) (Lakoff 1987). A central part of our research was to design entities based on Lakoff's ICM's. We call these entities conceptual representations of objects. The main thesis of this paper is that, even though the ideal meanings of the locative prepositions we studied are the same in English and in French, these two languages do not always conceptualize the objects involved in a scene in the same way, and that this leads to differences in the translation of locative prepositions. This theory seems applicable to pairs of languages other than English and French as well.

In addition, we will also describe how the system detects abnormalities and ambiguities using the knowledge required for the translation task.

This paper is organized as follows: section 2 presents an analysis of and a solution to the problem of translating locative prepositions from English into French, section 3 presents the conceptual representations of objects, section 4 presents the algorithm we designed and implemented for translating locative prepositions, section 5 discusses the detection of abnormalities and ambiguities, and section 6 is the conclusion.

2 Translating Locative Prepositions

We now describe the differences between English and French locative expressions and give a possible analysis of the problem.
Specifi- cally, we concentrate on the translation of the three locative prepositions 'in', 'on', and 'at', into the French prepositions 'dana', 'surf, and '&', in the context of simple sentences or ex- pressions of the form: 153 (located object)(be)(locative preposition) (reference object) (located object)(locative preposition) (reference object) 2.1 Examples of the problem While in the most representative uses of loca- tive prepositions, there is a direct correspon- dence between English and French ('in' corre- sponding to 'dans', 'on' to 'sur', and 'at' to 'tL'), in many cases, this correspondence does not hold. The following pairs of sentences illustrate cases in which the correspondences hold: (1) The boy is in his room. Le garcon est dazes sa chambre. (2) The glass is on the table. Le verre est sur la table. (3) The secretary is at her desk. La secr~taire est d son bureau. Senten (4), (5), and (6), in contrast, trate cases in which the correspondences do not hold: (4) (5) My friend is in the picture. Mon and(e) est sur la photo. The lounge chair is in the shade. La chaise longue est d l'ombre. (6) Our professor is on the bus. Notre professeur est dan le bus. At first sight, the correspondence between En- glish and French locative prepositions may seem arbitrary. Our analysis, however, reveals that coherence might be found. 2.2 Analysis of the problem Our analysis takes its principal sources in the works of Herskovits (1986) and Grimaud (1988). 2.2.1 Herskovits' contribution Herskovits (1986) contributed to the solution to our problem by introducing the concept of the ideal meaning of a locative preposition. This concept is inspired by Rosch's (1977) pro- totype theory, in which human categorization of objects is viewed as organized around pro- totypes (best instances of the category) and distances from these prototypes (the shorter the distance of an object away from a proto- type, the more representative of the category the object is). In the case of prepositions, ?ro- to~ypical or ideal meanings are geometrical re- lations between the located object, the object whose location is being specified in the sen- tence, and the reference object, the object in- dicating the location of the located object. A second contribution of Herskovits is her case study of the three locative prepositions 'in', 'on', and 'at'. Our own study of 35 dif- ferent cases is heavily based on this part of Herskovits' work. 2.2.2 Grimaud's contribution Grimaud (1988) presents a linguistic analy- sis of locative prepositions in English versus French. His theory is based on Lakoff & John- son (1980) and Lakoff (1987) and uses the no- tion of com:eptua//zatioas of objects. A con- ceptualization is a mental representation of an object or an idea which takes into considera- tion not only the =objective truth ~ about that object or idea, but also human biological per- ception and experience. In his theory, Grimaud suggests that the cases in which the correspondences described in section 2.1 do not hold are not simply ex- ceptional but rather are due to differences in the ways that English and French concep- tualize the objects involved in the relation. The reason why the same object can be con- ceptualized as different geometrical objects in different languages, given a particular situa- tion, is that objects have several properties (or aspects) and different languages might not choose to highlight and hide the same proper- ties (or aspects) of a given object in a given situation. 
This happens in (6), for example (under the interpretation in which the profes- sor is riding the bus rather than being located on the roof of the bus)-- English conceptu- alizes the bus as a surface that can support entities, by highlighting only its bottom plat- form, while French conceptualizes the bus as a volume that can contain entities, by highlight- ing its bottom surface, its sides, and its roof altogether. This leads to a difference in the way that English and French express the spa- tial relation: English uses 'on', the preposition 154 appropriate for expressing a relation between a point and a surface, and French uses 'dans' (the French equivalent of 'in'), the preposition appropriate for expressing a relation between a point and a volume. The appropriateness of a preposition for expressing a certain relation is determined by its ideal meanings. 2.2.3 Our synthesis Our task consisted of synthesizing Herskovits' and Grimand's contributions and making this synthesis suitable for a computational system, since both Herskovits and Grimaud's analyses are mainly linguistic and not directly geared towards computation. Our first task was to define the ideal mean- ings of each preposition: AT/k: • relation between two points. ON/SUIt: • relation between a point and a surface whose boundaries are ir- relevant. • relation between a point and a line. IN/DANS: • relation between a point and a bounded surface. • relation between a point and an empty volume. • relation between a point and a full volume. ~ Our next task was to develop a knowledge representation of a conceptualization of an ob- ject, that is, a representation of the way an object can be conceptualized, given a particu- lar language, a particular situation, etc. Typ- ically, in our application, these conceptualiza- tions are geometrical objects, such as points, lines, surfaces, and volumes. 1 Note that Herskovlts' notion of ideal meaning in- volves more information than ours: rather than the vague term 'relation', Herskovits identifies the specific sort of relation that holds between the two objects, such as coincidence, support, and containment. For the specific problem in translation that we address, such specifications axe unnecessary. They would be necessary, however, in a system designed for a deeper understanding than ours is designed to achieve. Our final task was to design a system of translation. Our system works as follows: given the source-language sentence, its objec- tive meaning (i.e., its language-independent meaning) is derived. This is done by first us- ing the ideal meanings of the source-language preposition to find the conceptualization that applies to the reference object, and then de- riving the objective meaning of the sentence from this conceptualization. (Because each conceptualization of an object used as a ref- erence object corresponds to some objective meaning, this last step is easily performed.) Given the objective meaning of the sentence, the conceptualization of the reference object that should be used in the target language is then found. Finally, using the list of ideal meanings of the target.language prepositions together with the target-language conceptual- ization, the system derives the preposition to be used in the target-language sentence. 2.2.4 Other work Independently, Zelinsky-Wibbelt (1990) took an. approach sin~lar to ours to the problem of translating locative prepositions. She worked on translation between English and German rather than English~and French. 
This sup- ports our hypothesis that the theory we use can be extended to pairs of languages other than English and French. In addition to the types of expressions our system translates, her system translates sen- tences with verbs other than 'to be'. The reason why we chose not to process sen- fences using verbs other than 'to be' was to study the prepositions themselves in detail, before addressing the more complicated prob- lem of their interactions with verbs. Zelinsky- Wibbelt does not refer to any preliminary de- tailed study of the prepositions themselves. We carried on a detailed bilingual study of locative prepositions by adapting and expand- ing the case studies of Herskovits (1986). 3 The Conceptual Repre- sentation of Objects The central entity in our research is the conceptual representation of objects (or con- ceptual representation), which represents a conceptualization together with information 155 about the conditions necessary for the con- ceptualization to hold. A conceptual representation of an object is composed of a conditional part and a descrip- tive part. The conditional part is a list of properties of the object and of its situation in the sentence. The former kind of prop- erty is objective information about the ob- ject, such as its shape, the parts it is made of, and its function. The latter properties are whether the object is a located or refer- ence object, and whether the sentence is in English or French. The descriptive part is a description of a conceptualization of that ob- ject. This part is conceptual, rather than ob- jective. Here follows a detailed description of conceptual representations. 2 3.1 The conditional part The conditional part is made up of the follow- ing types of properties: * The ro/e in the sentence of the object being considered (located or reference object). 3 * The/gnguage in which the sentence is ut- tered (English or French). This condition is crucial to the system because not all conceptu- aiizations are possible in both languages, and these differences account for differences in use of the prepositions. This point is important, for example, for pairs of sentences (4), where a picture is conceptualized as a volume in En- glish and as a surface in French; for pairs of sentences (5), where the shade is conceptual- ized as a Volume in English and as a point in French; and for pairs of sentences (6), where a bus is conceptualized as a surface in English and as a volume in French. * The properties of the reference object that are relevant to the objective spatial relation expressed in the sentence (these properties are ~Certain e~pects of the conceptual representations were implemented for extensihillty or for the purposes Of'LmhlgUlty and error detection. For the sake of com- pletez~ss, we describe all aspects in this section, even those not directly related to tr~nA|~tion (see Japkowlcz 1990 for furthe¢ explanation of these aspects). aNote that a located object is cdways conceptual- ized as a point. This is so because the conceptualiza- tion of the located object has no impact on the use of the prepositions. It is the conceptualization of the reference object that is relevant. language independent). This part of the con- ceptual representation specifies the objective situation in which the object being conceptu- alized is involved. 
It is central to the system because it is common to English and French (since it describes an objective situation) and is the part of the conceptual representation that allows a matching between English and French. For example, consider (4). The prop- erties of a picture that are relevant given the objective meaning of the sentence are the fact that it is the re-creator of an environment, with entities included in that environment, and that it is an object with a very small, almost non-existent, width. These properties are common to English and French. What dif- fers are the conceptualizations: English high- lights the first property, conceptualizing the picture as a volume, while French highlights the second, considering the width to be non- existent and conceptualizing the picture as a surface. * World-lmowledge conditions involving the located object of the sentence (for ~mple, whether the located object can be supported by the reference object). These conditions are used to check the plausibility of a sentence with respect to the located object. For ~Y,~rn. pie, the sentences in (6) are plausible, while the sentence (7) The elephant is on the bus is not, since an elephant is too heavy to be supported by a bus. In general, this condi- tion is used to check for abnormalities within one language rather than to account for dif- ferences between English and French. Section 5 describes how the system detects such ab- normalities. * Ez4ra-sentential constraints. Extra- sentential constraints are pragmatic con- straints, derived from the context in which the sentence is uttered, that can influence the choice of preposition. For example: (8) The gas station is at the freeway. [Her- skovits 1986, p. 138] This sentence is valid only when the speaker pictures himself or herself as being on a tra- jectory intersecting the reference object at the 156 point of focus. At its current state, the sys- tem deals solely with isolated sentences, so it is unable to perform this checking. 3.2 The descriptive part The descriptive part of a conceptual represen- tation includes the following three types of in- formation about the conceptualization: its di. mension, its fullness, and its width. * Its dimension is the main information about the conceptualization. The possible val- ues of the dimension field include point, line, surface, and volume. * Its fullness can take the values empty or ful/. Fullness is important when, for example, the dimension is volume. Consider the follow- ing sentences. (9) The girl is in the tree. (10) The nail is in the tree. One needs to differentiate between the situ- ation of (9), in which the located object (the girl) is located in the tree, and the one of (10), in which the located object (the nail) is em- bedded in the tree. This distinction, however, is not needed to translate between English and French (it might be needed with other lan- guages, though); rather, it is needed to un- derstand the sentence. * Its width takes the values ezistent or non~- ezistent. 4 Width is important for sentences such as those in (4), where the width is con- ceptualized as being non-existent in French, and existent in English, this difference lead- ing to a difference in the use of the locative prepositions (French uses 'sur' and English uses 'in'). 4Remember that the descriptive part describes con- ceptualizations. Therefore, when we describe the width to be existent or non-existent, it is the width in the conceptualization that is in question, not that of the real object. 
Objectively, for example, a pic- ture has a width, but this width is so small that it is ignored in some of its conceptualizations. Objectively also, a picture is the re-creator of an environment. The conceptualizations in which this objective property is highlighted have an existent width, since environments can contain 3-clJmensional entities. 4 The Algorithm 4.1 Overview Our method of translation first transforms the source-language sentence into a source- language representation (the English con- ceptual level), and then translates the source-language representation into a target- language representation (the French concep- tual level). This target-language representa- tion is finally used to generate the target- language sentence. The algorithm works in four phases: i. Initialization 2. Derivation of the objective meaning of the sentence 3. Derivation of the target-language preposition 4. Finalization 4.2 Phases In the description that follows, each step is explained and illustrated with example (6). 4.2.1 In|tiAHcatlon The initialization phase is composed of two steps. The first consists of parsing the in- put sentence and returning some information about each noun, such as its role in the sen- tence (located or reference object), its French translation, and certain useful French mor- phological and syntactic information about it. In sentence (6), for example, this informa- tion is that 'Our professor' is the located ob- ject, that its French translation is 'Notre pro- fesseur', and that 'professeur' is a masculine common noun in French; and also that 'bus' is the reference object, that its French trans- lation is 'bus', and that 'bus' is a masculine common noun in French. The second step consists of building the conceptual representations of the located and reference objects (see Japkowicz 1990 and Japkowicz & Wiebe 1990). All possible conceptual representations are built at this point--the discrimination of those that are relevant to the sentence from the others is clone in the next phase. 157 4.2.2 Derivation of the objective meaning of the sentence This phase is also performed in two steps. The first step identifies the English conceptual rep- resentations relevant to the sentence, accord- ing to the preposition used. That is, given the ideal meaning of the preposition used in the English sentence, certain conceptual rep- resentations that were built in the previous phase are discarded. In example (6), the only conceptual representation of a bus that will re- main is that of a surface, since the ideal mean- ing of 'on' allows the reference object to be a surface or a line and, while a bus is sometimes conceptualized as a surface, it is never concep- tualized as a line. The second step discards even more concep- tual representations, this time based on the type and/or properties of the located object. In sentence (6), no conceptual representation is discarded at this point. This is so because the only condition on the located object is that it can be supported by the reference object, and this condition is verified for (6) because a human being can be supported by a bus. In sentence (7), however, the conceptual repre- sentations of a bus as a surface are discarded because an elephant c~nnot be supported by a bus. The second step also builds the objective meaning of the sentence. The objective mean- ing of a sentence is derived from the concep- tual representation chosen in the first step of this phase. Its main component is the proper- ~ies field. 
This properties field has the same type of content as the properties field of the conceptual representations. It is this shared field that allows a matching between the En- glish conceptual representation and an objec- tive meaning. In certain cases, in this step, several objec- tive meanings can be derived. In these cases, the sentence is ambiguous (see section 5). 4.2.3 Derivation of the target- language preposition This phase has, once again, two steps. The first consists of matching the objective mean- ing of the sentence to a French conceptual- ization. This can be done in a way similar to that of the previous step: by matching the properties field of the objective meaning of the sentence with the properties field of the French conceptual representation of the reference ob- ject. The second step consists of matching a French preposition to the French conceptual representation derived by the previous step. This is done in a straight-forward way, using a look-up table. In example (6), the French conceptualization is matched to the preposi- tion 'dans'. 4.2.4 Finalization The Finalization phase consists of only one step: that of generating the French sentence. In example (6), it is at this point that the French version, "Notre professeur est darts le bus", is generated, s 4.3 Coverage We implemented the system on a large num- ber of cases, where each case is an "objective situation ~, such as an object being on a hori- zontal support or an object being in a closed environment. There are 35 cases, which can be divided into the following three categories: • Specific, i.e., cases in which the ref- erence object is a given object; the expressions 'on the wall' (meaning against the wall), 'at sea', and 'in the air' are the specific cases in the system. • Semi-genera~ i.e., cases in which the reference object belongs to a well de- fined category of objects. Examples are being in a country (e.g., 'in England' and 'in France') and being in a piece of clothing (e.g., 'in a hat', 'in a shirt', and 'in a pair of shorts'). • Genera~ i.e., cases in which the refer- ence object belongs to an abstract ea~ egory of objects. Examples are being on a planar surface (e.g., 'on the table', 'on the floor', 'on the chair', and 'on the roof') and being at an artifact with a given purpose (e.g., 'at the door', 'at his books', 'at his desk', and 'at his typewriter'). SNote that we are not taking ambiguity into con- aideratlon here. If we were, then the sentence "Notre professeur est Bur le bus." would also be generated (mearfing that our professor is on the roof of the bus). This ca~e will be discussed in section 5. 158 Of the 35 cases, only 3 are in the specific category. Of the remaining, 18 cases are in the semi-general category and 14 are in the general category. 5 Error and Ambiguity Detection The conceptual representations that were de- signed for the purpose of translation can also be used to detect certain kinds of errors and ambiguities. Below, we describe two kinds that can be detected by the system: concep- tual errors and conceptual ambiguity. 5.1 Conceptual errors The system can detect two types of conceptual errors: conceptualization errors and usage er- rors or abnormalities. 5.1.1 Conceptualization errors Conceptualization errors occur when the preposition requires the reference object to be conceptualized in a way that it cannot be in the language considered. An example of a sen- tence where such an error occurs is (11) * The boy is at the shade. 
This sentence is erroneous because 'at' re- quires 'shade' to be conceptualized as a point, but 'shade' used as a reference object can never be conceptualized as a point in English. This error can be detected by the system be- cause no conceptual representation of shade as a reference object is built whose conceptual- ization is point. This error is detected in the first step of the second phase of the system. 5.1.2 Usage errors and abnormalities Usage errors and abnormalities occur when the demands of the preposition are satisfied by the reference object, but the conditious re- quired of the located object by the conceptual representation, or general conditions required of all types of relations , are not. Such an error occurs in the following: (12) * The man is in the board. The use of 'in' is fine, considering just the ref- erence object; for example, a nail can be lo- cated in a board. The problem is that the located object is 'man', and a man cannot be embedded in a board under normal circum- stances. This error is detected by the system because the condition on the located object (in the conditional part of the conceptual rep- resentation) is not verified. This error is de- tected in the second step of the second phase of the system. 5.2 Conceptual ambiguities Conceptual ambiguity is ambiguity where the English preposition has several meanings in French. The system can detect two types of conceptual ambiguities: simple and complex. Both are detected during the first step of the second phase of the system. 5.2.1 Simple conceptual amblgulty In the case of simple conceptual ambiguity, an ambiguous English preposition is translated into a single French preposition that is am- biguous in the same way. For example: (18) The boy is at the supermarket. Sentence (13) can be understood to mean ei- ther that the boy is shopping at the supermar- ket, or that he is on a trajectory going by the supermarket, and is currently located at the supermarket. Its French translation is (14) Le garcon est au supermarch~, which carries the same ambiguity as the En- glish sentence. This type of ambiguity is de- tected when several English conceptual rep- resentatious can be iustantiated for a single sentence. All instantiated English concep- tual representations have:identical descriptive parts. In the case of simple conceptual am- biguity, all the French conceptual represen- tations happen to have the same descriptive part. 5.2.2 Complex conceptual ambiguity The difference between simple and complex conceptual ambiguity is the following: in the former, the French sentence carries the same ambiguity as the English sentence, but in the latter, the ambiguity is not carried through the translation (so the English sentence has two different French translations). Complex conceptual ambiguity is present in (6), which is repeated here as sentence (15): 159 (15) Our professor is on the bus. As discussed earlier, this sentence is ambigu- ous in that the professor could be riding the bus, or he could be located on the roof of the bus. This sentence is translated into two French sentences, one for each case: e (16) Notre professeur est daus le bus. (17) Notre professeur est sur le bus. In (16), the professor is riding the bus, while in (17), he is located on the roof of the bus. This type of ambiguity is detected in the same way as simple conceptual ambiguity, the only difference being that in the complex case, all the French conceptual representations do not have the same descriptive parts. 
6 In sections 1, 2, and 3, only the first case was considered.

6 Conclusion

In this paper, we have described a system of translation for locative prepositions that uses Herskovits' idea of the ideal meaning of prepositions and Lakoff's idea of ICM's. While our work does not prove the linguistic and psychological theories on which it is based, it suggests that they can be useful in machine translation. We chose to use conceptual knowledge to deal with the translation of locative prepositions, first, because it provides an elegant solution to the problem, and second, because we believe that conceptual knowledge of the sort that we use could be useful in other cognitive tasks such as story understanding, vision, and robot planning.

7 Acknowledgments

We wish to thank Graeme Hirst for invaluable comments and detailed readings of many versions of this work, and to gratefully acknowledge the financial support of the Department of Computer Science, University of Toronto, and the Natural Sciences and Engineering Research Council of Canada.

8 References

[Grimaud 1988] M. Grimaud, "Toponyms, Prepositions, and Cognitive Maps in English and French," Journal of the American Society of Geolinguistics, vol. 14, pp. 54-76, 1988.

[Herskovits 1986] A. Herskovits, Language and Spatial Cognition: An Interdisciplinary Study of the Prepositions in English, Cambridge University Press, Cambridge, MA, 1986.

[Japkowicz 1990] N. Japkowicz, "The Translation of Basic Topological Prepositions from English into French," M.S. Thesis, published as Technical Report CSRI-~3, University of Toronto, 1990.

[Japkowicz & Wiebe 1990] N. Japkowicz & J. Wiebe, "Using Conceptual Information to Translate Locative Prepositions from English into French," Current Trends in SNePS--Proceedings of the 1990 Workshop, Ali, Chalupsky, Kumar (eds.), forthcoming.

[Lakoff & Johnson 1980] G. Lakoff & M. Johnson, Metaphors We Live By, University of Chicago Press, Chicago, 1980.

[Lakoff 1987] G. Lakoff, Women, Fire, and Dangerous Things: What Categories Reveal about the Mind, University of Chicago Press, Chicago, 1987.

[Rosch 1977] E. Rosch, "Human Categorization," in Advances in Cross-Cultural Psychology, vol. 1, N. Warren (ed.), pp. 1-49, Academic Press, London, 1977.

[Zelinsky-Wibbelt 1990] C. Zelinsky-Wibbelt, "The Semantic Representation of Spatial Configurations: a conceptual motivation for generation in Machine Translation," Proceedings of the 13th International Conference on Computational Linguistics, vol. 3, pp. 299-303, 1990.
TRANSLATION BY QUASI LOGICAL FORM TRANSFER Hiyan Alshawi, David Carter and Manny P~yner SRI International Cambridge Computer Science Research Centre 23 Millers Yard, Cambridge CB2 1RQ, U.K. hiyan@cam, sri. com, dmc@cam, sri. com, manny¢cam, sri. com BjSrn Gambiick Swedish Institute of Computer Science Box 1263, S- 164 28 KISTA, Stockholm gain@sits, se ABSTRACT The paper describes work on applying a gen- eral purpose natural language processing system to transfer-based interactive translation. Trans- fer takes place at the level of Quasi Logical Form (QLF), a contextually sensitive logical form rep- resentation which is deep enough for dealing with cross-linguistic differences. Theoretical arguments and experimental results are presented to support the claim that this framework has good proper- ties in terms of modularity, compositionality, re- versibility and monotonicity. 1 INTRODUCTION In this paper we describe a translation project whose aim is to build an experimental Bilingual Conversation Interpreter (BCI) which will allow communication through typed text between two monolingual humans using different languages (of Miike et al, 1988). The choice of languages for the prototype system is English and Swedish. Input sentences are analysed by the Core Language En- gine (CLE 1) as far as the level of Quasi Logical Form (QLF; Alshawi, 1990), and then, instead of further ambiguity resolution, undergo transfer into another QLF having constants and predicates cor- responding to word senses in the other language. The transfer rules used in this process correspond to a certain kind of meaning postulate. The CLE then generates an output, sentence from the target 1 Tile CLE is described in Alshawi (1991) which includes more detailed discussion of the BCI architecture in a chap- ter by the present, authors, language QLF, using the same linguistic data as is used for analysis of that language. QLFs were selected as the appropriate level for transfer because they are far enough removed from surface linguistic form to provide the flexibility re- quired by cross-linguistic differences. On the other hand, the linguistic, unification-based processing involved in creating them can be carried out effi- ciently and without the need to reason about the domain or context; the QLF language has con- structs for explicit representation of contextually sensitive aspects of interpretation. When it is necessary, for correct translation, to resolve an ambiguity present at QLF level, the BCI system interacts with the source language user to make the necessary decision, asking for a choice between word sense paraphrases or between alter- native partial bracketings of the sentence. There • is thus a strong connection between our choice of a representation sensitive to context and the use of interaction to resolve context dependent ambi- guities, but in this paper we concentrate on repre- sentational and transfer issues. 2 CLE REPRESENTATION LEVELS In this section we explain how QLF fits into the overall architecture of the CLE and in section 3 we discuss the reasons for choosing it for interactive dialogue translation. 161 2.1 CLE Processing Phases A coarse view of the CLE architecture is that it consists of a linguistic analysis phase followed by a contextual interpretation phase. 
The output of the first phase is a set of alternative QLF analy- ses of a sentence, while the output of the second is an RQLF (resolved QLF) representation of the interpretation of an utterance: Sentence --linguistic analysis--~ QL Fs Q X, Fs ---contextual interpretation---*" R Q L F. Deriving a fairly conventional Logical Form (LF) from the RQLF is then a simple formal mapping which removes the information in the RQLF that is not concerned with truth conditions. Linguistic analysis and contextual interpreta- tion each consist of several subphases. For anal- ysis these are: orthography, morphological anal- ysis, syntactic analysis (parsing), and (composi- tional) semantic analysis. Apart from the first, these analysis subphases are based on the unifica- tion grammar paradigm, and they all use declara- tive bidirectional rules. When the CLE is being used as an interface to a computerized information system (e.g. a database system), its purpose is to derive an LF represen- tation giving the truth conditions of an utterance input by a user. The LF language is based on first order predicate logic extended with general- ized quantifiers and some other higher order con- structs (Alshawi and van Eijck, 1989). For ex- ample, in a context where she can refer to Mary Smith, and one to "a car", a possible LF for She hired one is: quant (exists ,C, [carl ,C], quant (exists ,E, [event ,El, [past, [hir • I, E, mary_smith, C] ] ) ). This can be paraphrased as "There is a car C, and an event E such that, in the past, ~. is a hiring event by Mary Smith of e." In this notation, quan- tified formulae consist of a generalized quantifier, a variable, a restriction and a scope; square brack- ets are used for the application of predicates and operators to their arguments. To arrive at such LF representations, a number of intermediate lev- els of representation are produced by successive modular components. Generation of linguistic expressions in the CLE takes place from QLFs (or from RQLFs by map- ping them to suitable QLFs). Since the rules used during the analysis phase are declarative and bidirectional, these are also used for generation. To achieve computationally efficient analysis and generation, the rules are pre-compiled in different ways for application in the two directions. Gen- eration uses the semantic-head driven algorithm (Shieber et al, 1990). 2.2 The QLF Language The QLF representations produced for a sen- tence are neutral with respect to the choice of ref- erents for pronouns and definite descriptions, and relations implied by compound nouns and ellip- sis. They are also neutral with respect to other ambiguities corresponding to alternative scopings of quantifiers and operators and to the collec- tive/distributive and referential/attributive dis- tinctions. The QLF is thus the level of represen- tation encoding the results of compositional lin- guistic analysis independently of contextually sen- sitive aspects of understanding. These aspects are addressed by the contextual interpretation phase which has the following subphases: quan- tifier scoping (Moran 1988), reference resolution (Alshawi 1990), and plausibility judgement. The QLF language is a superset of the LF language containing additional expressions corre- sponding, for example, to unresolved anaphors. More specifically, there are two additional term constructs (anaphoric terms and quanti- fied terms), and one additional formula construct (anaphoric formulae): a_term( Category, Entity Vat, Restriction). 
q_term( Category, Entity Vat, Restriction). a_form(Category, Pred Var , Restriction). These QLF constructs contain syntactic and morphological information in the Category and logical (truth-conditional) information in the Restriction, itself a QLF formula binding the vari- able. A QLF from which the LF for She hired one could have been derived is: [past, [hire, q_term (<t =quant, n=s ing>, E, [event, E] ), a_term(<t =ref, p=pro, l=she, n=sing>, Y, [female, Y] ), q_t erm (<t =quant, n=sing>, C, a_f orm(<t =pred, l=one>, P, [P.C]))]]. 162 in which categories are shown as lists of feature- value specifications (the feature shown are t for QLF expression type, n for number, p for phrase type, and 1 for lexical information). The differ- ences between the QLF shown here and the LF shown earlier are that the quantified terms have been scoped, the anaphoric term for she has been resolved to Mary Smith, and the anaphoric NP restriction implicit in one has been resolved using the predicate car. The RQLF representation of an utterance in- cludes all the information from the QLF, together with the resolutions of QLF constructs made dur- ing the contextual interpretation phase. For ex- ample, the referent of an a_term is unified with the a_term variable. Some constraints on plausibility can be ap- plied at the QLF level before a full interpreta- tion has been derived. This is because most of the predicate-argument structure of an utterance has been determined at that point, allowing, in particular, the application of sortal constraints expected by predicates of their arguments. Sor- tal constraints cut down on structural (e.g. at- tachment) ambiguity, and on word sense ambigu- ity, the latter being particularly important for the translation application in the context of large vo- cabularies. 3 REPRESENTATION LEVELS FOR TRANSFER The representational structures on which trans- fer operates must contain information correspond- ing to several linguistic levels, including syntax and semantics. For transfer to be general, it must operate recursively on input representations. We call the level of representation on which this re- cursion operates the "organizing" level; semantic structure is the natural choice, since the basic re- quirement of translation is that it preserves mean- ing. Syntactic phrase structure transfer, or deep- syntax transfer (e.g. Thurmair 1990, Nagao and Tsujii 1986) results in complex transfer rules, and the predicate-argument structure which is re- quired for the application of sortal restrictions is not represented. McCord's (1988, 1989) organizing level appears to be/hat, of surface syntax, with additional deep syntactic and semantic content attached to nodes. As we have argued, this level is not optimal, which may be related to the fact that McCord's sys- tem is explicitly not symmetrical: different gram- mars are used for the analysis and synthesis of the same language, which are viewed as quite differ- ent tasks. Isabelle and Macklovitch (1986) argue against such asymmetry between analysis and syn- thesis on the grounds that, although it is tempting as a short-cut to building a structure sufficiently well-specified for synthesis to take place, asym- metry means that the transfer component must contain a lot of knowledge about the target lan- guage, with dire consequences for the modularity of the system and the reusability of different parts of it. 
In the BCI, however, the transfer rules con- tain only cross-linguistic knowledge, allowing the analysis and generation to make use of exactly the same data. Kaplan et al (1989) allow multiple levels of representation to take part in the transfer rela- tion. However, Sadler et al (1990) point out that the particular approach to realizing this taken by Kaplan et al has problems of its own and does not cleanly separate monolingual from contrastive knowledge. The CLE processing subphases offer three se- mantic representations of different depth as can- didates for an appropriate transfer level, namely QLF, RQLF and LF. At the LF level, sortal re- strictions can be applied, but the form of noun phrase descriptions used and also information on topicalization is no longer present; the LF rep- resentation is too abstract for transfer. On the other hand, not all the information appearing in the RQLF about how QLF constructs have been resolved is necessary for translation. Resolved ref- erents are not an adequate generator input for def- inite descriptions in the target language, since the view of the referent in the source is lost during translation. Another case is that translation from resolved ellipsis can result in unwieldy target sen- tences. In arguing for QLF-level transfer, we are asserting that predicate-argument relations of the type used in QLF are the appropriate organizing level for compositional transfer, while not denying the need for syntactic information to ensure that, for example, topichood or the given/new distinc- tion is preserved. Finally, in contrast to systems such as Rosetta (Landsbergen, 1986) which depend on stating rule by rule correspondences between source and target grammars, we wish to make the monolingual de- scriptions as independent as possible from the task of translating between two languages. Apart from 163 its attractions from a theoretical point of view, this has practical advantages in allowing gram- mars to be reused for different language pairs and for applications other than translation. 4 QLF TRANSFER QLF transfer involves taking a QLF analysis of a source sentence, say QLF1, and deriving from it another expression, QLF2, from which it is possi- ble to generate a sentence in the target language. Leaving aside unresolved referential expressions, the main difference between QLF1 and QLF2 is that they will contain constants, particularly pred- icate constants, that originate in word sense en- tries from the lexicons of the respective languages. If more than one candidate source language QLF exists, the appropriate one is selected by present- ing the user with choices of word sense paraphrases and of bracketings relating to differences in the syntactic analyses from which the QLFs were de- rived. A transfer rule specifies a pair of QLF patterns. The left hand side matches QLF expressions for one language and the right hand side matches those for the other: trans(<QLFl subexpression pattern> <Operator> <QLF2 subexpression pattern>). If the operator is == then the rule is bidirectional. Otherwise, a single direction of applicability is in- dicated by use of one of the operators >= or =<. Transfer rules are applied recursively, this pro- cess following the recursive structure of the source QLF. In order to allow transfer between struc- turally different QLFs, rules with 'transfer vari- ables' need to be used. 
These variables, which take the form tr(atom), show how subexpressions in the source QLF correspond to subexpressions translating them in the target QLF. For exam- ple, the following rule expresses an equivalence between the English to be called ("I am called John"), and the Swedish beta ("Jag heter John"). trans ( [call_name, tr(ev), q_term(<tfquant ,n=sing>, A, [entity,A] ), tr(ag), tr(name)] [heCal, Cr (ev), tr (ag), Cr (name) ] ). Transfer rules often correspond directly to inter- lingual meaning postulates: when the expressions in a transfer rule are formulae, the symbols ==, >=, and =< can be read as the logical operators <-->, -->, and <-- respectively. A rule like Crans ([and, [bafll ,X], [luckl ,X]] [otur I, x] ) translating between the English bad luck and the Swedish otur, can be interpreted in this way. We will now assess the method's strengths and weaknesses, as they have manifested themselves in practice. We will pay particular attention to the criteria of expressiveness, compositionality, sim- plicity, reversibility and monotonicity. We take the last point first, since it is the most straightforward one. Since rules are applied purely nondeterministically and by pure unification, we get monotonicity "for free" - although there is a case for disallowing transfer by decomposition of a complex QLF structure which directly matches one side of a transfer rule. The other points need more discussion. 4.1 Expressiveness Since we are intentionally limiting ourselves by not allowing access to full syntactic information (but only to that placed in QLF categories) in the transfer phase, it is legitimate to wonder whether the formalism can really be sufficiently expressive. Here, we will attempt to answer this criticism; we begin by noting that shortcomings in this area can be of several distinct kinds. Sometimes, a formal- ism can appear to make it necessary to write many rules, where one feels intuitively that one should be enough; we treat this kind of problem under the heading of compositionality. In other cases, the difficulty is rather that there does not appear to be any way of expressing the rule at all in terms of the given formalism. In our case, a fair proportion of problems that at first seem to fall into this cate- gory can be eliminated by having adequate mono- lingual grammars and using the target grammar as a filter; the idea is to allow the transfer com- ponent to produce unacceptable QLFs which are filtered out by fully constrained target grammars. A good example of the use of this technique is the English definite article, which in Swedish can be translated as a gender-dependent article, but preferably is omitted; however, an article is oblig- atory before an adjective. Solving this problem 164 [, Table 1: Types of complex transfer used Type Example Different John likes Mary particles John tyeker om Mary Passive Insurance is included to active FSrs~ikring ingAr Verb John owes Mary $20 to adjective John ~ir skyldig Mary $20 Support verb John had an accident to normal verb John rltkade ut fdr en olycka Single verb to phrase Idiomatic use of PP John wants a car John vii1 ha en bil (lit.: "wants to have") John is in a hurry John har br•ttom (lit.: "has hurry") at transfer level is not possible, since the transfer component has no way of knowing that a piece of logical form will be realized as an adjective; there are many cases where an adjective-noun combina- tion in English is best translated as a compound noun in Swedish. 
Exploiting the fact that the rele- vant constraint is present in the Swedish grammar, however, the "transfer-and-filter" method reduces the problem to two simple lexical rules. Sortal re- strictions at the target end can also be used as a filter in a similar way. 4.2 Simplicity and reversibility The most obvious way to put the case with re- gard to simplicity is by giving a count of the vari- ous categories of rule, and providing evidence that there is a substantial proportion of rules which are simple in our framework, but would not necessar- ily be so in others. The transfer component currently contains 718 rules. 576 of these (80.2%) have the property that both the right- and left-hand sides are atomic. 502 members of this first group (69.9%) translate senses of single words to senses of single words; the remaining 74 (10.3%) translate atomic con- stants representing the senses of complex syntactic constructions, most commonly verbs taking parti- cles, reflexives, or complementizers. An example is the following rule, which defines an equivalence between English care about ('John cares about Mary") and Swedish bry sig om ( "John bryr sig om Mary", lit. "John cares himself about Mary"). J Table ~: Transfer contexts used 'context Example ' Perfect tense Negated John has liked Mary John har tyckt om Mary John doesn't like Mary John tycker inte om Mary YN-question Does John like Mary? Tycker John om Mary? WH-question Who does John like? Veto tycker John om? Passive Mary was liked by John Mary blev omtyckt av John Relative The woman that John likes clause Sentential complement Embedded question VP modifier Object raising Change of aspect Kvinnan som John tycker om I think John likes Mary Jag tror John tycker om Mary I know who John likes Jag vet vem John tycker om John likes Mary today John tycker om Mary idag I want John to like Mary Jag vill att John ska tycka om Mary ("I want that J. shall like M.") John stopped liking Mary John slutade tycka om Mary ("J. stopped like-INF M.") trans(care_about == bry_sig_om). Since vocabulary has primarily been selected with regard to utility (we have, for example, made considerable use of frequency dictionaries (Alldn 1970)), we think it reasonable to claim that QLF- based transfer is simplifying the construction of transfer rules in a substantial proportion of the commonly encountered cases. On the score of reversibility, we will once again count cases; here we find that 659 (91.8%) of the rules are reversible, 17 (2.4%) work only in the English-Swedish direction, and 42 (5.8%) only in the Swedish-English direction. These also seem to be fairly good figures. 4.3 Compositionality As in any rule-based system, "compositionality" corresponds to the extent to which it is necessary to provide special mechanisms to cover cases of ir- regular interactions between rules. As far as we know, there is no accepted benchmark for testing 165 compositionality of transfer; what we have done, as a first step in this direction, is to select six com- mon types of complex transfer, and eleven com- mon contexts in which they can occur. These are summarized in tables 1 and 2 respectively. Each complex transfer type is represented by a sample rule, as shown in table 1; the question is the ex- tent to which the complex transfer rules continue to function in the different contexts (table 2). 
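Before turning to those tests, it may help to make the rule-application machinery of this section concrete. The sketch below is not the BCI implementation (which is written in Prolog and applies the same rules in both directions by unification); it is a deliberately simplified, one-directional illustration in Python, in which QLFs are encoded as nested lists, a transfer variable tr(x) is written as the tuple ("tr", "x"), and the q_term restriction of the call_name/heta rule shown earlier is omitted for brevity.

# Toy encoding: a QLF is a string (an atomic constant) or a list of QLFs;
# a transfer variable tr(x) is written as the tuple ("tr", "x").
RULES = [
    # Atomic sense-to-sense rule, as in trans(care_about == bry_sig_om)
    ("care_about", "bry_sig_om"),
    # Structural rule, loosely after the call_name/heta example
    # (the q_term restriction on the event argument is left out here)
    (["call_name", ("tr", "ev"), ("tr", "ag"), ("tr", "name")],
     ["heta", ("tr", "ev"), ("tr", "ag"), ("tr", "name")]),
]

def match(pattern, qlf, bindings):
    """Match one rule side against a QLF, binding transfer variables to subexpressions."""
    if isinstance(pattern, tuple) and pattern[0] == "tr":
        bindings[pattern[1]] = qlf
        return True
    if isinstance(pattern, list):
        return (isinstance(qlf, list) and len(pattern) == len(qlf)
                and all(match(p, q, bindings) for p, q in zip(pattern, qlf)))
    return pattern == qlf

def instantiate(pattern, bindings):
    """Build the target side, transferring each bound subexpression recursively."""
    if isinstance(pattern, tuple) and pattern[0] == "tr":
        return transfer(bindings[pattern[1]])
    if isinstance(pattern, list):
        return [instantiate(p, bindings) for p in pattern]
    return pattern

def transfer(qlf):
    """Apply the first rule that matches at this node; otherwise recurse into it."""
    for lhs, rhs in RULES:
        bindings = {}
        if match(lhs, qlf, bindings):
            return instantiate(rhs, bindings)
    if isinstance(qlf, list):
        return [transfer(part) for part in qlf]
    return qlf

Calling transfer on a source QLF walks its structure top-down, rewrites the first subexpression that matches a rule's left-hand side, and recursively transfers whatever the transfer variables were bound to, which is essentially the recursion over the source QLF described above.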
To test transfer compositionality properly, it is not sufficient simply to note which rule/context combinations are handled correctly; after all, it is always possible to create a completely ad hoc so- lution by simply adding one transfer rule for each combination. The problem must rather be posed in the following terms: if there is a single rule for each complex transfer type, and a number of rules for each context, how many extra rules must be added to cover special combinations? It is this issue we will address. The actual results of the tests were as follows. There were 124 meaningful combinations (some constructions could not be passivized); in 103 of these, transfer was perfectly compositional, and no extra rule was needed. For example, the English sentence for the combination "Verb to adjective + WH-question" is How much does John owe Mary. The corresponding Swedish sentence is Hut my- cket dr John skyldig Mary? ("How much is John indebted-to Mary?"), and the two QLFs areS: [uhq, [pres, [owe_have_to_pay, q_term(<t=quant,n=sing>,A,[event,A]), a_term(<t=ref,p=name>, B,[name_of,B,john]), q_term(<t=quant,l=wh>,C,[quantity,C]), a_term(<t=ref,p=name>, D,[name_of,D,mary])]]] [whq, [pro8 ent, [Va.T a, q_t erm(<t =quanE, n=sing>, A, [state. A] ), [skyldiE_nsn_nst, a_t erm (<t =ref, p=name>, B, [name_o~, B, j ohn3 ), a_term(<t=ref, p=name>, C, [name_of, C,mary] ), q_t erm(<t =quant, l=wh> ,D, [quantity, D] )] ] ] ] It should be evident that the complex transfer rule defining the equivalence between owe and yarn skyldig, transC[owe_have_to_pay, q_termC<t=quant,n=sing>,A,[event,A]), tr(ag),tr(sum),tr(obj)] [vara, q_term(<t=quant,n=sing>,A,[state,A]), [skyldig_ngn_ngt, trCag),trCobj),tr(sum)]]). is quite unaffected by being used in the context of a Wit-question. Of the remaining 21 rule/context/direction triples, seven failed for basically uninteresting rea- sons: the combination "Perfect tense + Passive- to-active" did not generate in English, and the six sentences with the object-raising rule all failed in the Swedish-English direction due to the transfer component's current inability to create a function- application from a closed form. The final fourteen failures are significant from our point of view, and it is interesting to note that all of them resulted from mismatches in the scope of tense and nega- tion operators. The question now becomes that of ascertaining the generality of the extra rules that need to be added to solve these fourteen unwanted interac- tions. Analysis showed that it was possible to add 26 extra rules (two of which were relevant here), which reordered the scopes of tense, nega- tion and modifiers, and accounted for the scope differences between the English and Swedish QLFs arising from the general divergences in word-order and negation of main verbs. These solved ten of the outstanding cases. For example, the com- bination "Different particles + Negated" is John doesn't like Mary in English and John tycker inte om Mary (lit.: "John thinks not about Mary") in Swedish; the QLF-pair is: [pres p [not, [like, q_t erm(<t=quant ,n=sing>, A, [event, A] ), a_term ( <t=ref, p=name>, B, [name_o~, B, j ohn] ), a_termC<t=ref, p=name>, B, [~ame_of, B ,mary] )] ] ] 2 ~r is the present tense of ~ara. 166 [not, [present, [tycka_om, q_t erm(<t =quant, n=s ing>, A, [event, A] ), a_t erm(<t =ref, p=name>, B, [name_of, B, john] ), a_term(<t=ref, p=name>, B, [name_o:f, B, mary] ) ] ] ] The extra rule here, trans( [pres, [not,tr(body)]] == [not, [present, tr (body)] ] ). 
reorders the scopes of the negation and present- tense operators, but does not need to access the interior structure of the QLF (the "body" vari- able); this turns out to be the case for most inter- actions of negation, VP-modification and complex transfer. It is thus not surprising that a small number of similar rules covers most of the cases. The four bad interactions left all involved the English verb to be; these were the combinations "Passive to active ÷ VP modifier" and "Idiomatic use of PP q- negation", which failed to transfer in either direction. Here, there is no general solu- tion involving the addition of a small number of extra rules, since the problem is caused by an oc- currence of to be on the English side that is not matched by an occurrence of the corresponding Swedish word on the other. The solution must rather be to add an extra rule for each complex fransfer rule in the relevan~ class to cover the bad interaction. To solve the specific examples in the test set, two extra rules were thus required. Summarizing the picture, the tests revealed that all bad interactions between the transfer rules and contexts shown here could be removed by adding four extra rules to cover the 124 possible interac- tions. In a general perspective (viewing the rules as representatives of their respective classes), the rule-interaction problems exemplified by the con- crete collisions were solved by adding • 26 general rules to cover certain standard scope mismatches caused by verb-inversion and negation. • two extra rules (one for present and one for past tense) for each complex transfer rule of either the "Idiomatic use of PP" or "Active to Passive" types, to cover idiosyncratic in- teractions of these with negation and VP- modification respectively. We view these results as very promising: there were few bad interactions, and those that ex- isted were of a regular nature that could be coun- teracted without fear of further unwelcome side- effects. This gives good grounds for hoping that the system could be scaled up to a practically use- ful size without suffering the usual fate of drown- ing in a sea of ad hoc fixes. 5 IMPLEMENTATION STATUS The current implementation includes analysis, transfer, and generation modules, sizable gram- mars with morphological, syntactic and semantic rules for English and Swedish, and an experimen- tal set of transfer rules for this language pair. Rel- ative to the size of the grammars, the lexicons are still small (approximately 2000 and 1000 words re- spectively). About 250 entries for each language have been added for a specific domain (car hire), which makes possible moderately unconstrained conversation on this topic; the system, including the facilities for interactive resolution of trans- lation problems, has been tested on a corpus of about 400 sentences relating to the domain. For short sentences typical of the car hire domain, me- dian total processing times for analysis, transfer and generation are around ten seconds when run- ning under Quintus Prolog on a SUN SPARCst~- tion 2. We are currently investigating a different QLF representation of Iense, aspect and modality which should increase the transfer compositionality for the operator cases we have discussed in this pa- per, as well as allowing more flexible resolution of temporal relations in applications other than translation. 
ACKNOWLEDGMENTS The work reported here was funded by the Swedish Institute of Computer Science, and the greater part of it was carried out while the third author was employed there. We would like to thank Steve Pulman for many helpful discussions, especially with regard to the problems encoun- tered in adapting the English grammar to Swedish. 167 REFERENCES Alldn, Sture (ed.) (1970) Frequency Dictionary of Present-Day Swedish, Almqvist & Wiksell, Stockholm. Alshawi, Hiyan and Jan van Eijck (1989) "Logical Forms in the Core Language Engine". $Tth Annual Meeting of the Association for Com- putational Linguistics, Vancouver, British Columbia, pp. 25-32. Alshawi, Hiyan (1990) "Resolving Quasi Logical Forms". Computational Linguistics, Vol. 16, pp. 133-144. Alshawi, Hiyan, ed. (to appear 1991). The Core Language Engine. Cambridge, Mas- sachusetts: The MIT Press. Kaplan, Ronald M., Klaus Netter, Jiirgen Wedekind and AnnieZaenen (1989) '¢l~ransla - tion by Structural Correspondences", Fourth Conference of the European Chapter of the Association for Computational Linguistics, Manchester, pp. 272-281. Isabelle, Pierre and Elliot Macklovitch (1986) "Transfer and MT Modularity", Eleventh International Conference on Computational Linguistics (COLING-86), Bonn, pp. 115- 117. Landsbergen, Jan (1986) '~lsomorphic grammars and their use in the Rosetta translation sys- tem", in M. King (ed), Machine Translation Today: the State of the Art, Edinburgh Uni- versity Press, Edinburgh. McCord, Michael C. (1988) '% Multi-Target Ma- chine Translation System", Proceedings of the International Conference on Fifth Generation Computer Systems, Tokyo, pp. 1141-1149. McCord, Michael C. (1989) "Design of LMT: a Prolog-based Machine Translation System", Computational Linguistics, Vol. 15, pp. 33- 52. Miike, Seiji, Koichi Hasebe, Harold Somers, and Shin-ya Amano (1988) "Experiences with an on-line translating dialogue system", 26th Annual Meeting of the Association for Com- putational Linguistics, State University of New York at Buffalo, Buffalo, New York, pp. 155-162. Moran, Douglas B. (1988). "Quantifier Scoping in the SRI Core Language Engine", 26th Annual Meeting of the Association for Computational Linguistics, State University of New York at Buffalo, New York, pp. 33-40. Nagao, Makoto, and Jun-ichi Tsujii (1986) "The Transfer Phase of the Mu Machine Translation System", Eleventh International Conference on Computational Linguistics (COLING-86), Bonn, pp. 97-103. Sadler, Louisa, Inn Crookston, Douglas Arnold and Andrew Way (1990) "LFG and Trans- lation", Third International Conference on Theoretical and Methodological Issues in Ma. chine Translation, Linguistics Research Cen- ter, Austin, Texas. Shieber, Stuart M., Gertjan van Noord, Fernando C.N. Pereira and Robert C. Moore (1990) "Semantic-Head-Driven Generation", Com- putational Linguistics, Vol. 16, pp. 30-43. Thurmair, Gregor (1990) "Complex lexical trans- fer in METAL", Third International Confer- ence on Theoretical and Methodological Issues in Machine Translation, Linguistics Research Center, Austin, Texas. 168
ALIGNING SENTENCES IN PARALLEL CORPORA Peter F. Brown, Jennifer C. Lai, a, nd Robert L. Mercer IBM Thomas J. Watson Research Center P.O. Box 704 Yorktown Heights, NY 10598 ABSTRACT In this paper we describe a statistical tech- nique for aligning sentences with their translations in two parallel corpora. In addition to certain anchor points that are available in our da.ta, the only information about the sentences that we use for calculating alignments is the number of tokens that they contain. Because we make no use of the lexical details of the sentence, the alignment com- putation is fast and therefore practical for appli- cation to very large collections of text. We have used this technique to align several million sen- tences in the English-French Hans~trd corpora and have achieved an accuracy in excess of 99% in a random selected set of 1000 sentence pairs that we checked by hand. We show that even without the benefit of anchor points the correlation between the lengths of aligned sentences is strong enough that we should expect to achieve an accuracy of between 96% and 97%. Thus, the technique may be applicable to a wider variety of texts than we have yet tried. INTRODUCTION Recent work by Brown et al., [Brown et al., 1988, Brown et al., 1990] has quickened anew the long dormant idea of using statistical techniques to carry out machine translation from one natural language to another. The lynchpin of their approach is a. large collection of pairs of sentences that. are mutual transla- tions. Beyond providing grist to the sta.tisti- cal mill, such pairs of sentences are valuable to researchers in bilingual lexicography [I(la.- va.ns and Tzoukerma.nn, 1990, Warwick and Russell, 1990] and may be usefifl in other ap- proaches to machine translation [Sadler, 1989]. In this paper, we consider the problem of extra.cting from pa.raJlel French and F, nglish corpora pairs sentences that are translations of one another. The task is not trivial because at times a single sentence in one language is translated as two or more sentences in the other language. At other times a sentence, or even a whole passage, may be missing from one or the other of the corpora. If a person is given two parallel texts and asked to match up the sentences in them, it is na.tural for him to look at the words in the sen- tences. Elaborating this intuitively appealing insight, researchers at Xerox and at ISSCO [Kay, 1991, Catizone et al., 1989] have devel- oped alignment Mgodthms that pair sentences according to the words that they contain. Any such algorithm is necessarily slow and, despite the potential for highly accurate alignment, may be unsuitable for very large collections of text. Our algorithm makes no use of the lexical details of the corpora, but deals only with the number of words in each sentence. Although we have used it only to align paral- lel French and English corpora from the pro- ceedings of the Canadian Parliament, we ex- pect that our technique wouhl work on other French and English corpora and even on other pairs of languages. The work of Gale and Church , [Gale and Church, 1991], who use a very similar method but measure sentence lengths in characters rather than in words, supports this promise of wider applica.bility. TIIE HANSARD CORPORA Brown el al., [Brown et al., 1990] describe the process by which the proceedings of the Ca.nadian Parliament are recorded. In Canada, these proceedings are re[erred to as tta.nsards. 169 Our Hansard corpora consist of the llansards from 1973 through 1986. 
There are two files for each session of parliament: one English and one French. After converting the obscure text markup language of the raw data. to TEX , we combined all of the English files into a sin- gle, large English corpus and all of the French files into a single, large French corpus. We then segmented the text of each corpus into tokens and combined the tokens into groups that we call sentences. Generally, these con- form to the grade-school notion of a sentence: they begin with a capital letter, contain a. verb, and end with some type of sentence-final punctuation. Occasionally, they fall short of this ideal and so each corpus contains a num- ber of sentence fragments and other groupings of words that we nonetheless refer to as sen- tences. With this broad interpretation, the English corpus contains 85,016,286 tokens in 3,510,744 sentences, and the French corpus contains 97,857,452 tokens in 3,690,425 sen- tences. The average English sentence has 24.2 tokens, while the average French sentence is about 9.5% longer with 26.5 tokens. The left-hand side of Figure 1 shows the raw data for a portion of the English corpus, and the right-hand side shows the same por- tion after we converted it to TEX and divided it up into sentences. The sentence numbers do not advance regularly because we have edited the sample in order to display a variety of phe- nolnena. In addition to a verbatim record of the proceedings and its translation, the ttansards include session numbers, names of speakers, time stamps, question numbers, and indica- tions of the original language in which each speech was delivered. We retain this auxiliary information in the form of comments sprin- kled throughout the text. Each comment has the form \SCM{} ... \ECM{} as shown on the right-hand side of Figure 1. ]n ad- dition to these comments, which encode in- formation explicitly present in the data, we inserted Paragraph comments as suggested by the space command of which we see aa exam- ple in the eighth line on the left-hand side of Figure 1. We mark the beginning of a parliamentary session with a Document comment as shown in Sentence 1 on the right-hand side of Fig- ure 1. Usually, when a member addresses the parliament, his name is recorded and we en- code it in an Author comment. We see an ex- ample of this in Sentence 4. If the president speaks, he is referred to in the English cor- pus as Mr. Speaker and in the French corpus as M. le Prdsideut. If several members speak at once, a shockingly regular occurrence, they are referred to as Some Hon. Members in the English and as Des Voix in the French. Times are recorded either ~ exact times on a. 24-hour basis as in $entencc 8], or as inexact times of which there are two forms: Time = Later, and Time = Recess. These are rendered in French as Time = Plus Tard and Time = Re- cess. Other types of comments that appear are shown in Table 1. ALIGNING ANCHOR POINTS After examining the Hansard corpora, we realized that the comments laced throughout would serve as uscflll anchor points in any alignment process. We divide the comments into major and minor anchors as follows. The comments Author = Mr. Speaker, Author = ill. le Pr(sident, Author = Some Hon. Mem- bers, and Author = Des Voix are called minor anchors. All other comments are called major anchors with the exception of the Paragraph comment which is not treated as an anchor at all. 
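To make the cost assignment just described concrete, here is a small sketch. It is our own illustration, not the authors' program: the anchor encoding is invented for the example, and since the text does not say exactly how the letter-by-letter distance between two names is reduced to the range 0 to 10, the normalization used below is only an assumption.

DELETION_COST = 5
# Known French rendering of the inexact time anchor "Later" (from the text)
EQUIVALENT = {("Later", "Plus Tard")}

def edit_distance(a, b):
    """Letter-by-letter insertions, deletions and substitutions (Levenshtein)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

def anchor_cost(eng, fre):
    """Cost of pairing an English major anchor with a French one, or with None
    (an omission). Anchors are (kind, value) pairs such as ("Time", "Later")."""
    if eng is None or fre is None:
        return DELETION_COST
    if eng[0] != fre[0]:            # e.g. Time = Later vs. Author = Mr. Bateman
        return 10
    if eng == fre or (eng[1], fre[1]) in EQUIVALENT:
        return 0                    # exact match, e.g. Time = Later / Time = Plus Tard
    # Otherwise (typically a garbled speaker name), scale the edit distance into 0..10
    d = edit_distance(eng[1], fre[1])
    return min(10, round(10 * d / max(len(eng[1]), len(fre[1]), 1)))

A dynamic programming pass over the two sequences of major anchors then finds the pairing that minimizes the total of these costs, exactly as described above.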
The minor anchors are much more com- mon than any particular major anchor, mak- ing an alignment based on them less robust against deletions than one based on the ma- jor anchors. Therefore, we have carried out the alignment of anchor points in two passes, first aligning the major anchors and then the minor anchors. Usually, the major anchors appear in both languages. Sometimes, however, through inat- tentlon on the part of the translator or other misa.dvel~ture, the tla.me of a speaker may be garbled or a comment may be omitted. In the first alignment pass, we assign to alignments 170 /*START_COMMENT* Beginning file = 048 101 H002-108 script A *END_COMMENT*/ .TB 029 060 090 099 .PL 060 .LL 120 .NF The House met at 2 p.m. .SP *boMr. Donald MacInnis (Cape Breton -East Richmond):*ro Mr. Speaker, I rise on a question of privilege af- fecting the rights and prerogatives of parliamentary committees and one which reflects on the word of two ministers. .SP *boMr. Speaker: *roThe hon. member's motion is proposed to the House under the terms of Standing Order 43. Is there unanimous consent? .SP *boSome hon. Members: *roAgreed. s*itText*ro) Question No. 17--*boMr. Mazankowski: *to I. For the period April I, 1973 to January 31, 1974, what amount of money was expended on the operation and maintenance of the Prime Minister's residence at Harrington Lake, Quebec? .SP (1415) s*itLater:*ro) .SP *boMr. Cossitt:*ro Mr. Speaker, I rise on a point of order to ask for clarification by the parliamentary secretary. 1. \SCM{} Document = 048 101 H002-108 script A \ECM{) 2. The House met at 2 p.m. 3. \SCM{} Paragraph \ECM{} 4. \SCM{} Author = Mr. Donald MacInnis (Cape Breton-East Richmond) \ECM{} 5. Mr. Speaker, I rise on a question of privilege affecting the rights and prerogatives of parliamentary committees and one which reflects on the word of two ministers. 21. \SCM{} Paragraph \ECM{} 22. \SCM{} Author = Mr. Speaker \ECM{} 23. The hon. member's motion is proposed to the House under the terms of Standing Order 43. 44. Is there unanimous consent? 45. \SCM{} Paragraph \ECM{) 46. \SCM{-} Author = Some hon. Members \ECM{} 47. Agreed. 61. \SCM{} Source = Text \ECM{} 62. \SCM{} Question = 17 \ECM{} 63. \SCM{} Author = Mr. Mazankowski \ECMO 64. I. 65. For the period April I, 1973 to January 31, 1974, .hat amount of money was expended on the operation and maintenance of the Prime Minister's residence at Harrington Lake, Quebec? 66. \SCM{} Paragraph \ECN{} 81. \SCM{) Time = (1415) \ECM{} 82. \SCM{) Time = Later \ECM{) 83. \SCM{} Paragraph \ECM{} 84. \SCM{} Author = Mr. Cossitt \ECM{} 85. Mr. Speaker, I rise on a point of order to ask for clarification by the parliamentary secretary. Figure 1: A sample of text before and after cleanup 171 a cost that favors exact matches and penalizes omissions or garbled matches. Thus, for ex- ample, we assign a cost of 0 to the pair Time = Later and Time = Plus Tard, but a cost of 10 to the pair Time = Later and Author = Mr. Bateman. We set the cost of a dele- tion at 5. For two names, we set the cost by counting the number of insertions, deletions, and substitutions necessary to transform one name, letter by letter, into the other. This value is then reduced to the range 0 to 10. Given the costs described above, it is a standard problem in dynamic programming to find that alignment of the major anchors in the two corpora with the least total cost [Bellman, 1957]. 
In theory, the time and space required to find this alignment grow as the product of the lengths of the two sequences to be aligned. In practice, however, by using thresholds and the partial traceback technique described by Brown, Spohrer, Hochschild, and Baker , [Brown et al., 1982], the time required can be made linear in the length of the se- quences, and the space can be made constant. Even so, the computational demand is severe since, in places, the two corpora are out of alignment by as many as 90,000 sentences ow- ing to mislabelled or missing files. This first pass renders the data as a se- quence of sections between aligned major an- chors. In the second pass, we accept or reject each section in turn according to the popula- tion of minor anchors that it contains. Specifi- cally, we accept a section provided that, within the section, both corpora contain the same number of minor anchors in the same order. Otherwise, we reject the section. Altogether, we reject about 10% of the data in each cor- pus. The minor anchors serve to divide the remaining sections into subsections thai. range in size from one sentence to several thousand sentences and average about ten sentences. ALIGNING SENTENCES AND PARAGRAPH BOUNDARIES We turn now to the question of aligning the individual sentences in a subsection be- tween minor anchors. Since the number of English Source = English Source = Translation Source = Text Source = List Item Source = Question Source = Answer Fren(;h Source = Traduction Source = Francais Source = Texte Source = List Item Source = Question Source = Reponse Table 1: Examples of comments sentences in the French corpus differs from the number in the English corpus, it is clear that they cannot be in one-to-one correspondence throughout. Visual inspection of the two cor- pora quickly reveals that although roughly 90% of the English sentences correspond to single French sentences, there are many instances where a single sentence in one corpus is rep- resented by two consecutive sentences in the other. Rarer, but still present, are examples of sentences that appear in one corpus but leave no trace in the other. If one is moder- ately well acquainted with both English and French, it is a simple matter to decide how the sentences should be aligned. Unfortunately, the sizes of our corpora make it impractical for us to obtain a complete set of alignments by hand. Rather, we must necessarily employ some automatic scheme. It is not surprising and further inspection verifies that tile number of tokens in sentences that are translations of one another are corre- lated. We looked, therefore, at the possibility of obtaining alignments solely on the basis of sentence lengths in tokens. From this point of view, each corl)us is simply a sequence of sen- tence lengths punctuated by occasional para- graph markers. Figure 2 shows the initial por- tion of such a pair of corpora. We have circled groups of sentence lengths to show the cor- rect alignment. We call each of the groupings a bead. In this example, we have an el-bead followed by an eft-bead followed by an e-bead followed by a ¶~¶l-bead. An alignment, then, is simply a sequence of beads that accounts for the observed sequences of sentence lengths and paragraph markers. We imagine the sen- tences in a subsection to have been generated by a pa.ir of random processes, the first pro- 172 Figure 2: Division of aligned corpora into beads Bead e / ,f ee/ eft ¶! 
¶o¶t Text one English sentence one French sentence one English and one French sentence two English and one French sentence one English and two French sentences one English paragraph one French paragraph one English and one French paragraph Table 2: Alignment Beads ducing a sequence of beads and the second choosing the lengths of the sentences in each bead. Figure 3 shows the two-state Markov model that we use for generating beads. -We assume that a single sentence in one language lines up with zero, one, or two sentences in the other and that paragraph markers may be deleted. Thus, we allow any of the eight beads shown in Table 2. We also assume that Pr (e) = Pr (f), Pr (eft)= er (ee/), and Pr (¶¢) = Pr(¶t). BEAD ...... s-L-°--P- ....... ;!:::O Figure 3: Finite state model for generating beads Given a bead, we determine the lengths of the sentences it contains as follows. We a.s- sume the probability of an English sentence of length g~ given an e-bead to be the same as the probability of an English sentence of length ee in the text as a whole. We denote this probability by Pr(ee). Similarly, we as- sume the probability of a French sentence of length g! given an f-bead to be Pr (gY)" For an el-bead, we assume that the English sentence has length e, with probability Pr (~e) and that log of the ratio of length of the French sen- tence to the length of the English sentence is uormMly distributed with mean /t and vari- ance a 2. Thus, if r = log(gt/ge), we assume that er(ts[e, ) = c exp[-(r- (1) with 0¢ chosen so that the sum of Pr(tllt, ) over positive values of gI is equal to unity. For an eel-bead, we assume that each of the En- glish sentences is drawn independently from the distribution Pr(t.) and that the log of the ratio of the length of the French sentence to the sum of the lengths of the English sen- tences is normally distributed with the same mean and variance as for an el-bead. Finally, for an eft-bead, we assume that the length of the English sentence is drawn from the distri- bution Pr (g,) and that the log of the ratio of the sum of the lengths of the French sentences to the length of the English sentence is nor- mally distributed asbefore. Then, given the sum of the lengths of the French sentences, we assume that tile probability of a particular pair of lengths,/~11 and ~12, is proportional to Vr (ef,) Pr (~S~) . Together, these two random processes form a hidden Markov model [Baum, 1972] for the generation of aligned pairs of corpora.. We de- termined the distributions, Pr (g,) and Pr (aS), front the relative frequencies of various sen- tence lengths in our data. Figure 4 shows for each language a. histogram of these for sen- tences with fewer than 81 tokens. Except for lengths 2 and 4, which include a large num- ber of formulaic sentences in both the French and the English, the distributions are very smooth. For short sentences, the relative frequency is a reliable estimate of the corresponding prob- ability since for both French and English we have more than 100 sentences of each length less tha.n 8]. We estimated the probabilities 173 I 80 mentenee length 1 80 .entenea length Figure 4: Histograms of French (top) and English (bottom) sentence lengths 174 of greater lengths by fitting the observed fre- quencies of longer sentences to the tail of a Poisson distribution. We determined M1 of the other parameters by applying the EM algorithm to a large sam- pie of text [Baum, 1972, Dempster et al., 1977]. The resulting values are shown in Table 3. 
From these parameters, we can see that 91% of the English sentences and 98% of the En- glish paragraph markers line up one-to-one with their French counterparts. A random variable z, the log of which is normMly dis- tributed with mean # and variance o ~, has mean value exp(/t + a2/2). We can also see, therefore, that the total length of the French text in an el-, eel-, or eft-bead should be about 9.8% greater on average than the total length of the corresponding English text. Since most sentences belong to el-beads, this is close to the value of 9.5% given in Section 2 for the amount by which the length of the average French sentences exceeds that of the average English sentence. We can compute from the parameters in Table 3 that the entropy of the bead produc- tion process is 1.26 bits per sentence. Us- ing the parameters # and (r 2, we can combine the observed distribution of English sentence lengths shown in Figure 4 with the conditional distribution of French sentence lengths given English sentence lengths in Equation (1) to obtain the joint distribution of French and English sentences lengths in el-, eel-, and eft- beads. From this joint distribution, we can compute that the mutual information between French and English sentence lengths in these beads is 1.85 bits per sentence. We see there- fore that, even in the absence of the anchor points produced by the first two pa.sses, the correla.tion in sentence lengths is strong enough to allow alignment with an error rate that is asymptotically less than 100%. lh;arten- ing though such a result may be to the theo- retician, this is a sufficiently coarse bound on the error rate to warrant further study. Ac- cordingly, we wrote a program to Simulate the alignment process that we had in mind. Using Pr(e¢), Pr((¢), and the parameters from Ta- Parameter Estimate er (e), Pr(/) .007 Pr (e/) .690 Pr (eel), Pr (eft) .020 Pr (¶~), Pr (¶f) .005 It. .072 tr 2 .043 Table 3: P~rameter estimates ble 3, we generated an artificial pair of aligned corpora. We then determined the most prob- able alignment for the data. We :recorded the fraction of el-beads in the most probable alignment that did not correspond to el-beads in the true Mignment as the error rate for the process. We repeated this process many thou- sands of times and found that we could ex- pect an error rate of about 0.9% given the frequency of anchor points from the first two pa,sses. By varying the parameters of the hidden Markov model, we explored the effect of an- chor points and paragraph ma.rkers on the ac- curacy of alignment. We found that with para- graph markers but no ~tnchor points, we could expect an error rate of 2.0%, with anchor points but no l)~tra.graph markers, we could expect an error rate of 2.3%, and with neither anchor points nor pa.ragraph markers, we could ex- pect an error rate of 3.2%. Thus, while anchor points and paragraph markers are important, alignment is still feasible without them. This is promising since it suggests that one may be able to apply the same technique to data where frequent anchor points are not avail- able. RESULTS We aplflied the alignment algorithm of Sec- t.ions 3 and 4 to the Ca.na.dian Hansa.rd data described in Section 2. The job ran for l0 clays on au IBM Model 3090 mainframe un- der an operating system that permitted ac- cess to 16 mega.bytes of virtual memory. The most probable alignment contained 2,869,041 el-beads. Some of our colleagues helped us 175 And love and kisses to you, too. ... 
mugwumps who sit on the fence with their mugs on one side and their wumps on the other side and do not know which side to come down on. At first reading, she may have. Pareillelnent. ... en voulant m&lager la ch~vre et le choux ils n'arrivent 1)as k prendre patti. Elle semble en effet avoir un grief tout a fait valable, du moins au premier abord. Table 4: Unusual but correct alignments examine a random sample of 1000 of these beads, and we found 6 in which sentences were not translations of one another. This is con- sistent with the expected error rate ol 0.9% mentioned above. In some cases, the algo- rithm correctly aligns sentences with very dif- ferent lengths. Table 4 shows some interesting examples of this. REFERENCES [Baum, 1972] Baum, L. (1972). An inequality and associated maximization technique in statistical estimation of probabilistic func- tions of a Markov process. Inequalities, 3:1- 8. [Bellman, 1957] Bellman, R. (1957). Dy- namic Programming. Princeton University Press, Princeton N.J. [Brown et al., 1982] Brown, P., Spohrer, J., Hochschild, P., and Baker, J. (1982). Par- tial traceback and dynamic programming. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 1629-1632, Paris, France. [Brown et ai., 1990] Brown, P. F., Cocke, J., DellaPietra, S. A., DellaPietra, V. J., Je- linek, F., Lafferty, J. D., Mercer, R. L., and Roossin, P. S. (1990). A statisticM ap- proach to machine translation. Computa- tional Linguistics, 16(2):79-85. [Brown et al., 1988] Brown, P. F., Cocke, J., DellaPietra, S. A., DellaPietra., V. J., .le- linek, F., Mercer, R. L., and Roossin, P. S. (1988). A statistical approach to language translation. In Proceedings of the I2th In- ternational Conference on Computational Linguisticsl Budapest, Hungary. [Catizone et al., 1989] Catizone, R., Russell, G., and Warwick, S. (1989). Deriving trans- lation data [rom bilingual texts. In Proceed- ings of the First International Acquisition Workshop, Detroit, Michigan. [Dempster et al., ]977] Dempster, A., Laird, N., and Rubin, D. (1977). Maximum likeli- hood from incomplete data via the EM al- gorithm. Journal of the Royal Statistical Society, 39(B):1-38. [Gale and Church, 1991] Gale, W. A. and Church, K. W. (1991). A program for align- ing sentences in bilingual corpora. In Pro- ceedings of the 2gth Annual Meeting of the A ssociation for Computational Linguistics, Berkeley, California. [Kay, ]991] Kay, M. (1991). Text-translation alignment. In ACII/ALLC '91: "Mak- in.q Connections" Conference Handbook, Tempe, Arizona. [Klavans and Tzoukermann, 1990] Kiavans, .l. and Tzoukermann, E. (1990). The bicord system. ]n COLING-90, pages 174-179, Ilelsinki, Finland. [Sadler, 19~9] Sadler, V. (1989). The Bilin- gual Knowledge Bank- A New Conceptual Basis for MT. BSO/Research, Utrecht. [Warwick and Russell, 1990] Wa.rwick, S. and Russell, G. (1990). Bilingual concordancing and bilingnM lexicography. In EURALEX 4th International Congress, M~ilaga, Spain. 176
A PROGRAM FOR ALIGNING SENTENCES IN BILINGUAL CORPORA William A. Gale Kenneth W. Church AT&T Bell Laboratories 600 Mountain Avenue Murray Hill, NJ, 07974 ABSTRACT Researchers in both machine Iranslation (e.g., Brown et al., 1990) and bilingual lexicography (e.g., Klavans and Tzoukermann, 1990) have recently become interested in studying parallel texts, texts such as the Canadian Hansards (parliamentary proceedings) which are available in multiple languages (French and English). This paper describes a method for aligning sentences in these parallel texts, based on a simple statistical model of character lengths. The method was developed and tested on a small trilingual sample of Swiss economic reports. A much larger sample of 90 million words of Canadian Hansards has been aligned and donated to the ACL/DCI. 1. Introduction Researchers in both machine lranslation (e.g., Brown et al, 1990) and bilingual lexicography (e.g., Klavans and Tzoukermann, 1990) have recently become interested in studying bilingual corpora, bodies of text such as the Canadian I-lansards (parliamentary debates) which are available in multiple languages (such as French and English). The sentence alignment task is to identify correspondences between sentences in one language and sentences in the other language. This task is a first step toward the more ambitious task finding correspondances among words. I The input is a pair of texts such as Table 1. 1. In statistics, string matching problems are divided into two classes: alignment problems and correspondance problems. Crossing dependencies are possible in the latter, but not in the former. Table 1: Input to Alignment Program English According to our survey, 1988 sales of mineral water and soft drinks were much higher than in 1987, reflecting the growing poptdm'ity of these products. Cola drink manufacturers in particular achieved above-average growth rates. The higher turnover was largely due to an increase in the sales volume. Employment and investment levels also climbed. Following a two-year Iransitional period, the new Foodstuffs Ordinance for Mineral Water came into effect on April 1, 1988. Specifically, it contains more stringent requirements regarding quality consistency and purity guarantees. French Quant aux eaux rain&ales et aux limonades, elles rencontrent toujours plus d'adeptes. En effet, notre sondage fait ressortir des ventes nettement SUl~rieures h celles de 1987, pour les boissons base de cola notamment. La progression des chiffres d'affaires r~sulte en grande partie de l'accroissement du volume des ventes. L'emploi et les investissements ont 8galement augmentS. La nouvelle ordonnance f&16rale sur les denr6es alimentaires concernant entre autres les eaux min6rales, entree en vigueur le ler avril 1988 aprbs une p6riode transitoire de deux ans, exige surtout une plus grande constance dans la qualit~ et une garantie de la puret& The output identifies the alignment between sentences. Most English sentences match exactly one French sentence, but it is possible for an English sentence to match two or more French sentences. The first two English sentences (below) illustrate a particularly hard case where two English sentences align to two French sentences. No smaller alignments are possible because the clause "... sales ... were higher..." in 177 the first English sentence corresponds to (part of) the second French sentence. The next two alignments below illustrate the more typical case where one English sentence aligns with exactly one French sentence. 
The final alignment matches two English sentences to a single French sentence. These alignments agreed with the results produced by a human judge. Table 2: Output from Alignment Program English French According to our survey, 1988 sales of mineral water and soft drinks were much higher than in 1987, reflecting the growing popularity of these products. Cola drink manufacturers in particular achieved above-average growth rates. Quant aux eaux mintrales et aux limonades, elles renconlrent toujours plus d'adeptes. En effet, notre sondage fait ressortir des ventes nettement SUlX~rieures A celles de 1987, pour les boissons A base de cola notamment. The higher turnover was largely due to an increase in the sales volume. La progression des chiffres d'affaires r#sulte en grande partie de l'accroissement du volume des ventes. Employment and investment levels also climbed. L'emploi et les investissements ont #galement augmenUf. Following a two-year transitional period, the new Foodstuffs Ordinance for Mineral Water came into effect on April 1, 1988. Specifically, it contains more stringent requirements regarding quality consistency and purity guarantees. La nonvelle ordonnance f&l&ale sur les denrtes alimentaires concernant entre autres les eaux mindrales, entree en viguenr le ler avril 1988 apr~ une lxfriode tmmitoire de deux ans, exige surtout une plus grande constance darts la qualit~ et une garantie de la purett. Aligning sentences is just a first step toward constructing a probabilistic dictionary (Table 3) for use in aligning words in machine translation (Brown et al., 1990), or for constructing a bilingual concordance (Table 4) for use in lexicography (Klavans and Tzoukermann, 1990). Table 3: An Entry in a Probabilistic Dictionary (from Brown et al., 1990) English French Prob(French ] English) the le 0.610 the la 0.178 the 1' 0.083 the les 0.023 the ce 0.013 the il 0.012 the de 0.009 the A 0.007 the clue 0.007 Table 4: A Bilingual Concordance bank/banque ("money" sense) and the governor of the et le gouvemeur de la 800 per cent in one week through % ca une semaine ~ cause d' ut~ bank/banc ("place" sense) bank of canada have fwxluanfly bcaque du canada ont fr&lnemm bank action. SENT there banque. SENT voil~ such was the case in the georges ats-tmis et lc canada it Wolx~ du he said the nose and tail of the _,~M__~ lcs extn~tta du bank issue which was settled betw banc de george. bank were surrendered by banc. SENT~ fair Although there has been some previous work on the sentence alignment, e.g., (Brown, Lai, and Mercer, 1991), (Kay and Rtscheisen, 1988), (Catizone et al., to appear), the alignment task remains a significant obstacle preventing many potential users from reaping many of the benefits of bilingual corpora, because the proposed solutions are often unavailable, unreliable, and/or computationally prohibitive. The align program is based on a very simple statistical model of character lengths. The model makes use of the fact that longer sentences in one language tend to be translated into longer sentences in the other language, and that shorter sentences tend to be translated into shorter sentences. A probabilistic score is assigned to each pair of proposed sentence pairs, based on the ratio of lengths of the two sentences (in characters) and the variance of this ratio. This probabilistic score is used in a dynamic programming framework in order to find the maximum likelihood alignment of sentences. 178 It is remarkable that such a simple approach can work as well as it does. 
An evaluation was performed based on a trilingual corpus of 15 economic reports issued by the Union Bank of Switzerland (UBS) in English, French and German (N = 14,680 words, 725 sentences, and 188 paragraphs in English, and corresponding numbers in the other two languages). The method correctly aligned all but 4% of the sentences. Moreover, it is possible to extract a large subcorpus which has a much smaller error rate. By selecting the best scoring 80% of the alignments, the error rate is reduced from 4% to 0.7%. There were roughly the same number of errors in each of the English-French and English-German alignments, suggesting that the method may be fairly language independent. We believe that the error rate is considerably lower in the Canadian Hansards because the translations are more literal.

2. A Dynamic Programming Framework

Now, let us consider how sentences can be aligned within a paragraph. The program makes use of the fact that longer sentences in one language tend to be translated into longer sentences in the other language, and that shorter sentences tend to be translated into shorter sentences.[2]

[2] We will have little to say about how sentence boundaries are identified. Identifying sentence boundaries is not always as easy as it might appear, for reasons described in Liberman and Church (to appear). It would be much easier if periods were always used to mark sentence boundaries, but unfortunately, many periods have other purposes. In the Brown Corpus, for example, only 90% of the periods are used to mark sentence boundaries; the remaining 10% appear in numerical expressions, abbreviations and so forth. In the Wall Street Journal, there is even more discussion of dollar amounts and percentages, as well as more use of abbreviated titles such as Mr.; consequently, only 53% of the periods in the Wall Street Journal are used to identify sentence boundaries. For the UBS data, a simple set of heuristics was used to identify sentence boundaries. The dataset was sufficiently small that it was possible to correct the remaining mistakes by hand. For a larger dataset, such as the Canadian Hansards, it was not possible to check the results by hand. We used the same procedure which is used in (Church, 1988). This procedure was developed by Kathryn Baker (private communication).

A probabilistic score is assigned to each proposed pair of sentences, based on the ratio of lengths of the two sentences (in characters) and the variance of this ratio. This probabilistic score is used in a dynamic programming framework in order to find the maximum likelihood alignment of sentences. We were led to this approach after noting that the lengths (in characters) of English and German paragraphs are highly correlated (.991), as illustrated in the following figure.

Figure 1 (scatter plot not reproduced): Paragraph Lengths are Highly Correlated. The horizontal axis shows the length of English paragraphs, while the vertical scale shows the lengths of the corresponding German paragraphs. Note that the correlation is quite large (.991).

Dynamic programming is often used to align two sequences of symbols in a variety of settings, such as genetic code sequences from different species, speech sequences from different speakers, gas chromatograph sequences from different compounds, and geologic sequences from different locations (Sankoff and Kruskal, 1983). We could expect these matching techniques to be useful, as long as the order of the sentences does not differ too radically between the two languages. Details of the alignment techniques differ considerably from one application to another, but all use a distance measure to compare two individual elements within the sequences, and a dynamic programming algorithm to minimize the total distances between aligned elements within two sequences. We have found that the sentence alignment problem fits fairly well into this framework.

3. The Distance Measure

It is convenient for the distance measure to be based on a probabilistic model so that information can be combined in a consistent way. Our distance measure is an estimate of -log Prob(match | δ), where δ depends on l1 and l2, the lengths of the two portions of text under consideration. The log is introduced here so that adding distances will produce desirable results.

This distance measure is based on the assumption that each character in one language, L1, gives rise to a random number of characters in the other language, L2. We assume these random variables are independent and identically distributed with a normal distribution. The model is then specified by the mean, c, and variance, s^2, of this distribution. c is the expected number of characters in L2 per character in L1, and s^2 is the variance of the number of characters in L2 per character in L1. We define δ to be (l2 - l1 c) / sqrt(l1 s^2) so that it has a normal distribution with mean zero and variance one (at least when the two portions of text under consideration actually do happen to be translations of one another).

The parameters c and s^2 are determined empirically from the UBS data. We could estimate c by counting the number of characters in German paragraphs and then dividing by the number of characters in corresponding English paragraphs. We obtain 81105/73481 = 1.1. The same calculation on French and English paragraphs yields c = 72302/68450 = 1.06 as the expected number of French characters per English character. As will be explained later, performance does not seem to be very sensitive to these precise language-dependent quantities, and therefore we simply assume c = 1, which simplifies the program considerably.

The model assumes that s^2 is proportional to length. The constant of proportionality is determined by the slope of a robust regression. The result for English-German is s^2 = 7.3, and for English-French is s^2 = 5.6. Again, we have found that the difference in the two slopes is not too important. Therefore, we can combine the data across languages, and adopt the simpler language-independent estimate s^2 = 6.8, which is what is actually used in the program.

We now appeal to Bayes Theorem to estimate Prob(match | δ) as a constant times Prob(δ | match) Prob(match). The constant can be ignored since it will be the same for all proposed matches. The conditional probability Prob(δ | match) can be estimated by

    Prob(δ | match) = 2 (1 - Prob(|δ|))

where Prob(|δ|) is the probability that a random variable, z, with a standardized (mean zero, variance one) normal distribution, has magnitude at least as large as |δ|. The program computes δ directly from the lengths of the two portions of text, l1 and l2, and the two parameters, c and s^2. That is, δ = (l2 - l1 c) / sqrt(l1 s^2). Then, Prob(|δ|) is computed by integrating a standard normal distribution (with mean zero and variance 1). Many statistics textbooks include a table for computing this.

The prior probability of a match, Prob(match), is fit with the values in Table 5 (below), which were determined from the UBS data. We have found that a sentence in one language normally matches exactly one sentence in the other language (1-1); three additional possibilities are also considered: 1-0 (including 0-1), 2-1 (including 1-2), and 2-2. Table 5 shows all four possibilities.

Table 5: Prob(match)
Category       Frequency   Prob(match)
1-1            1167        0.89
1-0 or 0-1     13          0.0099
2-1 or 1-2     117         0.089
2-2            15          0.011
Total          1312        1.00

This completes the discussion of the distance measure. Prob(match | δ) is computed as an (irrelevant) constant times Prob(δ | match) Prob(match). Prob(match) is computed using the values in Table 5. Prob(δ | match) is computed by assuming that Prob(δ | match) = 2 (1 - Prob(|δ|)), where Prob(|δ|) is computed from a standard normal distribution. We first calculate δ as (l2 - l1 c) / sqrt(l1 s^2) and then Prob(|δ|) is computed by integrating a standard normal distribution.
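Pulling these pieces together, the match cost -log(Prob(δ | match) Prob(match)) takes only a few lines to compute. The sketch below is an illustration of the calculation, not the authors' align program: the function names are ours, the parameter values are the ones reported above (c = 1, s^2 = 6.8, and the Table 5 priors), and 2 (1 - Prob(|δ|)) is implemented as the two-sided tail probability of a standard normal distribution at |δ|.

import math

C = 1.0    # expected characters in L2 per character in L1 (the program assumes c = 1)
S2 = 6.8   # variance per character; the language-independent estimate used in the program
PRIOR = {"1-1": 0.89, "1-0": 0.0099, "2-1": 0.089, "2-2": 0.011}  # Table 5

def two_sided_tail(z):
    """Probability that a standard normal variable has magnitude at least |z|."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def match_cost(l1, l2, category="1-1"):
    """-log(Prob(delta | match) * Prob(match)) for two portions of text of
    l1 and l2 characters (both assumed non-empty in this sketch)."""
    delta = (l2 - l1 * C) / math.sqrt(l1 * S2)
    prob_delta = max(two_sided_tail(delta), 1e-12)  # guard against log(0)
    return -math.log(prob_delta * PRIOR[category])

For example, match_cost(110, 117) is small, while match_cost(110, 40) is large, so the dynamic programming search described next will prefer the former pairing.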
Details of the alignment techniques differ considerably from one application to another, but all use a distance measure to compare two individual elements within the sequences, and a dynamic programming algorithm to minimize the total distances between aligned elements within two sequences. We have found that the sentence alignment problem fits fairly well into this framework. 179 3. The Distance Measure It is convenient for the distance measure to be based on a probabilistic model so that information can be combined in a consistent way. Our distance measure is an estimate of -log Prob(match[8), where 8 depends on !1 and 12, the lengths of the two portions of text under consideration. The log is introduced here so that adding distances will produce desirable results. This distance measure is based on the assumption that each character in one language, L 1, gives rise to a random number of characters in the other language, L2. We assume these random variables are independent and identically distributed with a normal distribution. The model is then specified by the mean, c, and variance, s 2, of this distribution, c is the expected number of characters in L2 per character in L1, and s 2 is the variance of the number of characters in L2 per character in LI. We define 8 to be (12-11 c)l~s 2 so that it has a normal distribution with mean zero and variance one (at least when the two portions of text under consideration actually do happen to be translations of one another). The parameters c and s 2 are determined empirically from the UBS data. We could estimate c by counting the number of characters in German paragraphs then dividing by the number of characters in corresponding English paragraphs. We obtain 81105173481 = 1.1. The same calculation on French and English paragraphs yields c = 72302/68450 = 1.06 as the expected number of French characters per English characters. As will be explained later, performance does not seem to very sensitive to these precise language dependent quantities, and therefore we simply assume c = 1, which simplifies the program considerably. The model assumes that s 2 is proportional to length. The constant of proportionality is determined by the slope of a robust regression. The result for English-German is s 2 = 7.3, and for English-French is s 2 = 5.6. Again, we have found that the difference in the two slopes is not too important. Therefore, we can combine the data across languages, and adopt the simpler language independent estimate s 2 = 6.8, which is what is actually used in the program. We now appeal to Bayes Theorem to estimate Prob (match l 8) as a constant times Prob(81match) Prob(match). The constant can be ignored since it will be the same for all proposed matches. The conditional probability Prob(8[match) can be estimated by Prob(Slmatch) = 2 (1 - Prob(lSI)) where Prob([SI) is the probability that a random variable, z, with a standardized (mean zero, variance one) normal distribution, has magnitude at least as large as 18 [ The program computes 8 directly from the lengths of the two portions of text, Ii and 12, and the two parameters, c and s 2. That is, 8 = (12 - It c)l~f-~l s 2. Then, Prob([81) is computed by integrating a standard normal distribution (with mean zero and variance 1). Many statistics textbooks include a table for computing this. The prior probability of a match, Prob(match), is fit with the values in Table 5 (below), which were determined from the UBS data. 
We have found that a sentence in one language normally matches exactly one sentence in the other language (1-1), three additional possibilities are also considered: 1-0 (including 0-I), 2-I (including I-2), and 2-2. Table 5 shows all four possibilities. Table 5: Prob(mateh) Category Frequency Prob(match) 1-1 1167 0.89 1-0 or 0-1 13 0.0099 2-1 or 1-2 117 0.089 2-2 15 0.011 1312 1.00 This completes the discussion of the distance measure. Prob(matchlS) is computed as an (irrelevant) constant times Prob(Slmatch) Prob(match). Prob(match) is computed using the values in Table 5. Prob(Slmatch) is computed by assuming that Prob(5]match) = 2 (1 - erob(151)), where Prob (J 5 I) has a standard normal distribution. We first calculate 8 as (12 - 11 c)/~[-~1 s 2 and then erob(181) is computed by integrating a standard normal distribution. The distance function two side distance is defined in a general way to al]-ow for insertions, 180 deletion, substitution, etc. The function takes four argnments: xl, Yl, x2, Y2. 1. Let two_side_distance(x1, Yl ; 0, 0) be the cost of substituting xl with y 1, 2. two side_distance(xl, 0; 0, 0) be the cost of deleting Xl, 3. two_sidedistance(O, Yl ; 0, 0) be the cost of insertion of yl, 4. two side_distance(xl, Yl ; xg., O) be the cost of contracting xl and x2 to yl, 5. two_sidedistance(xl, Yl ; 0, Y2) be the cost of expanding xl to Y 1 and yg, and 6. two sidedistance(xl, Yl ; x2, yg.) be the cost of merging Xl and xg. and matching with y i and yg.. 4. The Dynamic Programming Algorithm The algorithm is summarized in the following recursion equation. Let si, i= 1...I, be the sentences of one language, and t j, j= 1 .-- J, be the translations of those sentences in the other language. Let d be the distance function (two_side_distance) described in the previous section, and let D(i,j) be the minimum distance between sentences sl. •" si and their translations tl, "" tj, under the maximum likelihood alignment. D(i,j) is computed recursively, where the recurrence minimizes over six cases (substitution, deletion, insertion, contraction, expansion and merger) which, in effect, impose a set of slope constraints. That is, DO,j) is calculated by the following recurrence with the initial condition D(i, j) = O. D(i, j) = min. D(i, j-l) + d(0, ty; 0, 0) D(i-l, j) + d(si, O; 0,0) D(i-1, j-l) + d(si, t); 0, 0) !D(i-1, j-2) + d(si, t:; O, tj-1) !D(i-2, j-l) + d(si, Ij; Si-l, O) !D(i-2, j-2) + d(si, tj; si-1, tj-1) 5. Evaluation To evaluate align, its results were compared with a human alignment. All of the UBS sentences were aligned by a primary judge, a native speaker of English with a reading knowledge of French and German. Two additional judges, a native speaker of French and a native speaker of German, respectively, were used to check the primary judge on 43 of the more difficult paragraphs having 230 sentences (out of 118 total paragraphs with 725 sentences). Both of the additional judges were also fluent in English, having spent the last few years living and working in the United States, though they were both more comfortable with their native language than with English. The materials were prepared in order to make the task somewhat less tedious for the judges. Each paragraph was printed in three columns, one for each of the three languages: English, French and German. Blank lines were inserted between sentences. The judges were asked to draw lines between matching sentences. 
5. Evaluation

To evaluate align, its results were compared with a human alignment. All of the UBS sentences were aligned by a primary judge, a native speaker of English with a reading knowledge of French and German. Two additional judges, a native speaker of French and a native speaker of German, respectively, were used to check the primary judge on 43 of the more difficult paragraphs having 230 sentences (out of 118 total paragraphs with 725 sentences). Both of the additional judges were also fluent in English, having spent the last few years living and working in the United States, though they were both more comfortable with their native language than with English.

The materials were prepared in order to make the task somewhat less tedious for the judges. Each paragraph was printed in three columns, one for each of the three languages: English, French and German. Blank lines were inserted between sentences. The judges were asked to draw lines between matching sentences. The judges were also permitted to draw a line between a sentence and "null" if they thought that the sentence was not translated. For the purposes of this evaluation, two sentences were defined to "match" if they shared a common clause. (In a few cases, a pair of sentences shared only a phrase or a word, rather than a clause; these sentences did not count as a "match" for the purposes of this experiment.)

After checking the primary judge with the other two judges, it was decided that the primary judge's results were sufficiently reliable that they could be used as a standard for evaluating the program. The primary judge made only two mistakes on the 43 hard paragraphs (one French mistake and one German mistake), whereas the program made 44 errors on the same materials. Since the primary judge's error rate is so much lower than that of the program, it was decided that we needn't be concerned with the primary judge's error rate. If the program and the judge disagree, we can assume that the program is probably wrong.

The 43 "hard" paragraphs were selected by looking for sentences that mapped to something other than themselves after going through both German and French. Specifically, for each English sentence, we attempted to find the corresponding German sentences, and then for each of them, we attempted to find the corresponding French sentences, and then we attempted to find the corresponding English sentences, which should hopefully get us back to where we started. The 43 paragraphs included all sentences in which this process could not be completed around the loop. This relatively small group of paragraphs (23 percent of all paragraphs) contained a relatively large fraction of the program's errors (82 percent). Thus, there does seem to be some verification that this trilingual criterion does in fact succeed in distinguishing more difficult paragraphs from less difficult ones.

There are three pairs of languages: English-German, English-French and French-German. We will report just the first two. (The third pair is probably dependent on the first two.) Errors are reported with respect to the judge's responses. That is, for each of the "matches" that the primary judge found, we report the program as correct if it found the "match" and incorrect if it didn't. This convention allows us to compare performance across different algorithms in a straightforward fashion. The program made 36 errors out of 621 total alignments (5.8%) for English-French, and 19 errors out of 695 (2.7%) alignments for English-German. Overall, there were 55 errors out of a total of 1316 alignments (4.2%).

Table 6: Complex Matches are More Difficult
               English-French      English-German      total
category       N    err   %        N    err   %        N     err   %
1-0 or 0-1     8    8     100      5    5     100      13    13    100
1-1            542  14    2.6      625  9     1.4      1167  23    2.0
2-1 or 1-2     59   8     14       58   2     3.4      117   10    9
2-2            9    3     33       6    2     33       15    5     33
3-1 or 1-3     1    1     100      1    1     100      2     2     100
3-2 or 2-3     1    1     100      0    0     0        1     1     100

Table 6 breaks down the errors by category, illustrating that complex matches are more difficult. 1-1 alignments are by far the easiest. The 2-1 alignments, which come next, have four times the error rate of 1-1. The 2-2 alignments are harder still, but a majority of the alignments are found. The 3-1 and 3-2 alignments are not even considered by the algorithm, so naturally all three are counted as errors. The most embarrassing category is 1-0, which was never handled correctly. In addition, when the algorithm assigns a sentence to the 1-0 category, it is also always wrong. Clearly, more work is needed to deal with the 1-0 category. It may be necessary to consider language-specific methods in order to deal adequately with this case.

We observe that the score is a good predictor of performance, and therefore the score can be used to extract a large subcorpus which has a much smaller error rate. By selecting the best scoring 80% of the alignments, the error rate can be reduced from 4% to 0.7%. In general, we can trade off the size of the subcorpus and the accuracy by setting a threshold, and rejecting alignments with a score above this threshold. Figure 2 examines this trade-off in more detail.

Figure 2. Extracting a Subcorpus with a Lower Error Rate. The fact that the score is such a good predictor of performance can be used to extract a large subcorpus which has a much smaller error rate. In general, we can trade off the size of the subcorpus and the accuracy by setting a threshold, and rejecting alignments with a score above this threshold. The horizontal axis shows the size of the subcorpus (percent of retained alignments), and the vertical axis shows the corresponding error rate. An error rate of about 2/3% can be obtained by selecting a threshold that would retain approximately 80% of the corpus.

Less formal tests of the error rate in the Hansards suggest that the overall error rate is about 2%, while the error rate for the easy 80% of the sentences is about 0.4%. Apparently the Hansard translations are more literal than the UBS reports. It took 20 hours of real time on a Sun 4 to align 367 days of Hansards, or 3.3 minutes per Hansard-day. The 367 days of Hansards contain about 890,000 sentences or about 37 million "words" (tokens). About half of the computer time is spent identifying tokens, sentences, and paragraphs, while the other half of the time is spent in the align program itself.

6. Measuring Length in Terms of Words Rather than Characters

It is interesting to consider what happens if we change our definition of length to count words rather than characters. It might seem that words are a more natural linguistic unit than characters (Brown, Lai and Mercer, 1991). However, we have found that words do not perform nearly as well as characters. In fact, the "words" variation increases the number of errors dramatically (from 36 to 50 for English-French and from 19 to 35 for English-German). The total errors were thereby increased from 55 to 85, or from 4.2% to 6.5%. We believe that characters are better because there are more of them, and therefore there is less uncertainty. On the average, there are 117 characters per sentence (including white space) and only 17 words per sentence. Recall that we have modeled variance as proportional to sentence length, V = s² l. Using the character data, we found previously that s² = 6.5. The same argument applied to words yields s² = 1.9. For comparison's sake, it is useful to consider the ratio of sqrt(V(m))/m (or equivalently, s/sqrt(m)), where m is the mean sentence length. We obtain ratios of 0.22 for characters and 0.33 for words, indicating that characters are less noisy than words, and are therefore more suitable for use in align.
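As a quick arithmetic check of this comparison, the relative noise sqrt(V(m))/m = s/sqrt(m) can be recomputed from the figures quoted above (the slight difference from the reported 0.22 for characters depends on which of the quoted s² estimates is plugged in):

```python
import math

def relative_noise(s2, mean_len):
    """sqrt(V(m)) / m with V = s2 * m, i.e. s / sqrt(m)."""
    return math.sqrt(s2 * mean_len) / mean_len

print(round(relative_noise(6.5, 117), 2))   # characters: ~0.24 (0.22 in the text)
print(round(relative_noise(1.9, 17), 2))    # words: ~0.33
```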
Conclusions This paper has proposed a method for aligning sentences in a bilingual corpus, based on a simple probabilistic model, described in Section 3. The model was motivated by the observation that longer regions of text tend to have longer translations, and that shorter regions of text tend to have shorter translations. In particular, we found that the correlation between the length of a paragraph in characters and the length of its translation was extremely high (0.991). This high correlation suggests that length might be a strong clue for sentence alignment. Although this method is extremely simple, it is also quite accurate. Overall, there was a 4.2% error rate on 1316 alignments, averaged over both English-French and English-German data. In addition, we find that the probability score is a good predictor of accuracy, and consequently, it is possible to select a subset of 80% of the alignments with a much smaller error rate of only 0.7%. The method is also fairly language-independent- Both English-French and English-German data were processed using the same parameters. If necessary, it is possible to fit the six parameters in the model with language-specific values, though, thus far, we have not found it necessary (or even helpful) to do so. We have examined a number of variations. In particular, we found that it is better to use characters rather than words in counting sentence length. Apparently, the performance is better with characters because there is less variability in the ratios of sentence lengths so measured. Using words as units increases the error rate by half, from 4.2% to 6.5%. In the future, we would hope to extend the method to make use of lexical constraints. However, it is remarkable just how well we can do without such constraints. We might advocate the simple character length alignment procedure as a useful first pass, even to those who advocate the use of lexical constraints. The character length procedure might complement a lexical conslraint approach quite well, since it is quick but has some errors while a lexical approach is probably slower, though possibly more accurate. One might go with the character length procedure when the distance scores are small, and back off to a lexical approach as necessary. Church, K., "A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text," Second Conference on Applied Natural Language Processing, Austin, Texas, 1988. Klavans, J., and E. Tzoukermann, (1990), "The BICORD System," COLING-90, pp 174- 179. Kay, M. and M. R6scheisen, (1988) "Text- Translation Alignment," unpublished ms., Xerox Palo Alto Research Center. Liberman, M., and K. Church, (to appear), "'Text Analysis and Word Pronunciation in Text- to-Speech Synthesis," in Fund, S., and Sondhi, M. (eds.), Advances in Speech Signal Processing. ACKNOWLEDGEMENTS We thank Susanne Wolff and and Evelyne Tzoukermann for their pains in aligning sentences. Susan Warwick provided us with the UBS trilingual corpus and posed the Ixoblem addressed here. REFERENCES Brown, P., J. Cocke, S. Della Pietra, V. Della Pietra, F. Jelinek, J. Lafferty, R. Mercer, and P. Roossin, (1990) "A Statistical Approach to Machine Translation," Computational Linguistics, v 16, pp 79-85. Brown, P., J. Lai, and R. Mercer, (1991) "Aligning Sentences in Parallel Corpora,'" ACL Conference, Berkeley. Catizone, R., G. Russell, and S. 
Warwick, (to appear) "Deriving Translation Data from Bilingual Texts," in Zernik (ed.), Lexical Acquisition: Using On-line Resources to Build a Lexicon, Lawrence Erlbaum.
EXPERIMENTS AND PROSPECTS OF EXAMPLE-BASED MACHINE TRANSLATION Eiichiro SUMITA* and Hitoshi HDA ATR Interpreting Telephony Research Laboratories Sanpeidani, Inuidani, Seika-cho Souraku-gun, Kyoto 619-02, JAPAN ABSTRACT EBMT (Example-Based Machine Translation) is proposed. EBMT retrieves similar examples (pairs of source phrases, sentences, or texts and their translations) from a d~t.hase of examples, adapting the examples to translate a new input. EBMT has the following features: (1) It is easily upgraded simply by inputting appropriate examples to the database; (2) It assigns a reliability factcr to the translation result; (3) It is acoelerated effectively by both indexing and parallel computing; (4) It is robust because of best-match reasoning; ~d (5) It well utilizes translator expertise. A prototype system has been implemented to deal with a difficult translation problem for conventional Rule-Based Machine Translation (RBMT), i.e., translating Japanese noun phrases of the form "N~ no N2" into English. The system has achieved about a 78% success rate on average. This paper explains the basic idea of EBMT, illustrates the experiment in detail, explains the broad applicability of EBMT to several difficult translation problems for RBMT and discusses the advantages of integrating EBMT with RBMT. 1 INTRODUCTION Machine Translation requires handcmt~ and complicated large-scale knowledge (Nirenburg 1987). Conventional machine translation systems use rules as the knowledge. This framework is called Rule-Based Machine Translation (RBMT). It is difficult to scale up from a toy program to a practical system because of the problem of building such a lurge-scale rule-base. It is also difficult to improve translation performance because the effect of adding a new rule is hard to anticipate, and because translation using a large-scule rule-based system is time-consuming. Moreover, it is difficult to make use of situational or domain-specific information for translation. their translations) has been implemented as the knowledge (Nagao 1984; Sumita and Tsutsumi 1988; Sato and Nagao 1989; Sadler 1989a; Sumita et al. 1990a, b). The translation mechanism retrieves similar examples from the database, adapting the examples to Wanslate the new source text. This framework is called Example-Based Machine Translation (EBMT). This paper focuses on ATR's linguistic database of spoken Japanese with English translations. The corpus contains conversations about international conference registration (Ogura et al. 1989). Results of this study indicate that EBMT is a breakthrough in MT technology. Our pilot EBMT system translates Japanese noun phrases of the form '~1 x no N2" into English noun phrases. About a 78% success rate on average has been achieved in the experiment, which i s considered to outperform RBMT. This rate cm be improved as discussed below. Section 2 explains the basic idea of EBMT. Section 3 discusses the broad applicability of EBMT and the advantages of integrating it with RBMT. Sections 4 and 5 give a rationale for section 3, i.e., section 4 illustrates the experiment of translating noun phrases of the form "Nt no N2" in detail, and section 5 studies other phenomena through actual dam from our corpus. Section 6 concludes this paper with detailed comparisons between RBMT and EBMT. 2 BASIC IDEA OF EBMT 2.1 BASIC FLOW In this section, the basic idea of EBMT, which is general and applicable to many phenomena dealt with by machine translation, is shown. 
In order to conquer these problems in machine translation, a database of examples (pairs of source phrases, sentences, or texts and * Currently with Kyoto University Figure 1 shows the basic flow of EBMT using translation of "kireru"[cut/be sharp]. From here on, the literal English translations are bracketed. (1) and (2) me examples (pairs of Japanese sentences and their English 185 translations) in the database. Examples similar to the Japanese input sentence are retrieved in the following manner. Syntactically, the input is similar to Japanese sentences (1) and (2). However, semantically, "kachou" [chief] is far from "houchou" [kitchen knife]. But, "kachou" [chief] is semantically similar to "kanojo" [she] in that both are people. In other words, the input is similar to example sentence (2). By mimicking the similar example (2), we finally get "The chief is sharp". Although it is possible to obtain the same result by a word selection rule using fme-tuned semantic restriction, note that translation here is obtained by retrieving similar examples to the input. • Example Database (data for "kireru'[cut / be sharp]) (1) houchou wa klrsru -> The kitchen knife cuts. (2) kanojo wa kireru -> She Is sharp. • Input kachouwa klreru o>? • Retrieval of similar examples (Syntax) Input = (1), (2) (Semantics) kachou/== houehou kachou ,= kanojo (Total) Input == (2) • OUt0Ut -> The chief Is ~ h a r D, Figure I Mimicking Similar Examples 2.2 DISTANCE Retrieving similar examples to the input is done by measuring the distance of the input to each of examples. The smaller a distance is, the more similar the example is to the input. To define the best distance metric is a problem of EBMT not yet completely solved. However, one possible definition is shown in section 4.2.2. From similar examples retrieved, EBMT generates the most likely translation with a reliability factor based on distance and frequency. If there is no similar example within the given threshold, EBMT tells the user that it cannot translate the input. 3 BROAD APPLICABILITY AND INTEGRATION 3.1 BROAD APPLICABILITY EBMT is applicable to many linguistic phenomena that are regarded as difficult to translate in conventional RBMT. Some are well-known among researchers of natural language processing and others have recently been given a great deal of attention. When one of the following conditions holds true for a linguistic phenomenon, RBMT is less suitable than EBMT. (Ca) Translation rule formation is difficult. (Cb) The general rule cannot accurately describe phenomena because it represents a special case, e.g., idioms. (Cc) Translation cannot be made in a compositional way from target words (Nagao 1984; Nitta 1986; Sadler 1989b). This is a list (not exhaustive) of phenomena in J-E translation that are suitable for EBMT: • optional cases with a case particle ( "- de", "~ hi",...) • subordinate conjunction ("- ba -", "~ nagara -", "~ tara -",...,"- baai ~",...) • noun phrases of the form '~1 no N2" • sentences of the form "N~ wa N 2 da" • sentences lacking the main verb (eg. sentences of the form "~ o-negaishimasu") • fragmental expressions Chai", "sou-desu", "wakarimashita",...) (Furuse et al. 1990) • modality represented by the sentence ending C-tainodesuga", "~seteitadakimasu", ...) (Furuse et al. 1990) • simple sentences (Sato and Nagao 1989) This paper discusses a detailed experiment for "N~ no N2" in section 4 and prospects for other phenomena, "N1 wa N2 da" and "~ o-negaishimasu" in section 5. 
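As a rough illustration of the best-match retrieval with a distance threshold described in Section 2.2, a sketch of the lookup step might look as follows; the example database, the distance function and the threshold value are placeholders rather than the system's actual code:

```python
def retrieve_similar(input_expr, example_db, distance, threshold=0.5):
    """example_db: list of (source, target) pairs.
    Returns (best_examples, best_distance), or None if nothing is close
    enough -- in that case the system tells the user it cannot translate."""
    scored = [(distance(input_expr, src), src, tgt) for src, tgt in example_db]
    if not scored:
        return None
    best = min(d for d, _, _ in scored)
    if best > threshold:
        return None
    # Keep every example at the minimum distance; the most frequent
    # translation pattern among them gives the most likely output, and
    # `best` itself serves as the reliability factor reported to the user.
    return [(src, tgt) for d, src, tgt in scored if d == best], best
```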
Similar phenomena in other language pairs can be found. For example, in Spanish to English translation, the Spanish preposition "de", with its broad usage like Japanese "no", is also effectively Iranslated by EBMT. Likewise, in German to English translation, the German complex noun is also effectively translated by EBMT. 3.2 INTEGRATION It is not yet clear whether EBMT can or should deal with the whole process of translation. We assume that there are many kinds of phenomena. Some are suitable for EBMT, while others are suitable for RBMT. Integrating EBMT with RBMT i s expected to be useful. It would be more acceptable for users if RBMT were first introduced as a base system, and then incrementally have its translation performance improved by attaching EBMT components. This is in the line with the proposal in Nagao (1984). Subsequently, we proposed a practical method of integration in 186 previous papers (Sumita et al. 1990a, b). 4 EBMT FOR "N x no Nz" 4.1 THE PROBLEM "N~ no N2" is a common Japanese noun phrase form. "no" in the "Nt no Nz" is a Japanese adnominal particle. There are other variants, including "deno", "karano", "madeno" and so on. Roughly speaking, Japanese noun phrases of the form "N~ no N2" correspond to English noun phrases of the form "N2 of N:" as shown in the examples at the top of Figure 2. Japanese English youka n o gogo the afternoon o f the 8th kaigi no mokuteki the object o f the conference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . kaigi n o sankaryou the application fee for the conf. ?the application fee o fthe conf. kyoutodenokaigi theconf, in Kyoto .'/the conf. o f Kyoto isshukan no kyuka a week' s holiday ?the holiday o f a week mittsu no hoteru three hotels *hotels o fthree Figure 2 Variations in Translation of "N1 no N2" However, "N2 of Nt" does not always provide a natural translation as shown in the lower examples in Figure 2. Some translations are too broad in meaning to interpret, others axe almost ungrammatical. For example, the fourth one, "the conference of Kyoto", could be misconstrued as "the conference about Kyoto", and the last one, "hotels of three", is not English. Natural translations often require prepositions other than "of", or no preposition at all. In only about one-fifth of "N~ no N2" occurrences in our domain, "N2 of Nt" would be the most appropriate English translation. We cannot use any particular preposition as an effecdve de.fault value. No rules for selecting the most appropriate translation for "N~ no N2" have yet been found. In other words, the condition (Ca) in section 3.1 holds. Selecting the translation for '~1~ no N2" is still an important and complicated problem in J-E translation. In contrast with the preceding research analyzing "NI no N2" (Shimazu et al. 1987; Hirai and Kitahashi 1986), deep semantic analysis is avoided because it is assumed that translations appropriate for given domain can be obtained using domain-specific examples (pairs of source md target expressions). EBMT has the advantage that it can directly return a translation by adapting examples without reasoning through a long chain of rules. 4.2 IMPLEMENTATION 4.2.1 OVERVIEW The EBMT system consists of two databases: an example database and a thesaurus; and also three translation modules: analysis, example-based transfer, and generation (Figure 3). Examples (pairs of source phrases and their translations) are extracted from ATR's linguistic database of spoken Japanese with English translations. 
The corpus contains conversations about registering for an international conference (Ogura 1989). Example Database (1) Analysis I (2) Example-Based Transfer Thesaurus I (3) Generation I Figure 3 System Configuration The thesaurus is used in calculating the semantic distance between the content words in the input and those in the examples. It is composed of a hierarchical structure in accordance with the thesaurus of everyday Japanese written by Ohno and Hamanishi (1984). Analysis kyouto deno kaigi Example-Based Transfer d Japanese English 0.4 toukyou deno taizai the stay in Tokyo 0.4 honkon deno taizai the stay in Hongkong 0.4 toukyou deno go-taizai the stay in Tokyo 1.0 oosaka no kaigi the conf. in Osaka 1.0 toukyou no kaigi the conf. in Tokyo Generation the conf. in Kyoto Figure 4 Translation Procedure Figure 4 illustrates the translation procedure with an actual sample. First, morphological analysis is performed for the input phrase,"kyouto[Kyoto] deno kaigi [conference]". In this case, syntactical 187 analysis is not necessary. Second, similar examples are retrieved from the database. The top five similar examples are shown. Note that the top three examples have the same distance and that they are all translated with "in". Third, using this rationale, EBMT generates "the conference in Kyoto". 4.2.2 DISTANCE CALCULATION The distance metric used when retrieving examples is essential and is explained hem in detail. we suppose that the input and examples (I, E) in the d~tAl~ase ~ r~ted in the same data structure, i.e., the list of words' syntactic and semantic attribute values (refeaxed to as and I~, E~) for each phrase. The attributes of the current target, "Nt no N2" , 8~ as follows: 1) for the nouns "NI" and "N2": the lexical subcategory of the noun, the existence of a prefix or suffix, and its semantic code in the thesaurus; 2) for the adnominal particle "no": the kinds of variants, "deno", "karano", "madeno" and so on. Here, for simplicity, only the semantic code and the kind of adnominal a=e considered. Distances ae calculated using the following two expressions (Sumita et al. 1990a, b): (1) d(I,E)=•d(li,Ei) "w i i (2) wi=,~// ~. ( freq. of t. p. when Ei=li ) 2 t.p. The attribute distance, d(li, E.~ end the weight of attribute, w~ are explained in the following sections. Each Iranslation pattern (t.p.) is abstracted from an example md is stored with the example in the example d~mhase [see Figure 6]. (a) ATTRIBUTE DISTANCE For the attribute of the adnominal particle "no", the distance is 0 or 1 depending on whether or not they match exactly, for example, d("deno","deno") = 0 and d("deno", "no") = 1. For semantic attributes, however, the distance varies between 0 and 1. Semantic distance d(0 < d < 1)is determined by the Most Specific Common Abstractlon(MSCA) (Kolodner and Riesbeck 1989) obtained from the thesaurus abstraction hierarchy. When the thesaurus is (n+l) layered, (k/n) is assigned to the concepts in the k-th layer from the bottom. For example, as shown with the broken line in Figure 5, the MSCACkaigi '' [conference], "taizai" [stay]) is "koudou" [actions] and the distance is 2/3. Of course, 0 is assigned when the MSCA is the bottom class, for instance, MSCACkyouto"[Kyoto], "toukyou" [Tokyo])= "timei"[placc], or when nouns are identical ( MSCA(N, N) for any N). Thesaurus Root [actions] (1/3) oural omings goings] setsumei tions] (o) I kaisetsu [commen- tary] //.,. 
",\ [ taizai I I hatchaku I [stays] II [arrivals & [meetingslJ J J Jdepartures', II :o) , ili i kaigi taizai touchaku [conference] [stay] [arrive] Figure 5 Thesaurus(portion) (b) WEIGHT OF ATTRIBUTE The weight of the attribute is the degree to which the attribute influences the selection of the translation pattern(t.p.). We adopt the expression (2) used by Stanfill and Waltz (1986) for memory-based reasoning, to implement the intuition. t.p. freq. B in A 12/27 AB 4/27 B from A 2/27 BA 2/27 BtoA 1/27 (E l=timei) [place/ t.p. freq. B in A 313 (E2=deno) [in/ t.p. freq. B 9/24 AB 9/24 B in A 2/24 A's B 1124 BonA 1/24 (E3=soudan) [meetings] Figure 6 Weight of the i-th attribute 188 In Figure 6, all the examples whose E2 = "deno" aze translated with the same preposition, "in". This implies that when El= "deno", E2 is an attribute which heavily influences the selection of the translation pattern. In contrast to this, the translation patterns of examples whose E1 = "timei"[place], =e varied. This implies that when E1 -- "timei"[place], E~is an attribute which is less influential on the selection of the translation pattern. According to the expression (2), weights for attributes, E~, E2 and E3me as follows: W1=,~(12/27) 2+(4127 ) 2+...+(1/27)2 = 0.49 W2=,,~(3/3) 2 = 1.0 w3=,~(9/24 ) 2+(9124 ) 2+. ..+(1/24) 2 ,= 0.54 (C) TOTAL DISTANCE The distance between the input and the first example shown in Figure 4 is calculated using the weights in section 4.2.2 Co), attribute distances as explained in section 4.2.2 (a) and expression (1) at the beginning of section 4.2.2. d( "kyouto'[Kyoto] "deno'[in] "kaigi'[ conference], "toukyou'[Tokyo] "deno'[in] "taizai'[stay]) ,= d('kyouto','toukyou" )*0.49+ d('deno",'deno')*1.0+ d('kaigi", "taizai')*0.54 = 0"0.49+0"1.0+2/3"0.54 = 0.4 4.3 EXPERIMENTS The current number of words in the corpus is about 300,000 and the number of examples is 2,550. The collection of examples from another domain is in progress. 4.3.1 JACKKNIFE TEST In ~ to roughly estimate translation performance, a jackknife experiment was conducted. We partitioned the example database(2,550) in groups of one hundred, then used one set as input(100) and translated them with the rest as an example database (2,450). This was repeated 25 times. Figure 7 shows that the average success rate is 78%, the minimum 70% and the maximum 89% [see section 4.3.4]. It is difficult to fairly compare this result with the success rate of the existing MT system. However, it is believed that current conventional systems can at best output the most common translation pattern, for example, "B of A", as the default. In this case, the average success rate may only be about 20%. success(%) MAXIMUM(89%) 100 80 ~ ~ _ ,.. 60 AVERAGE(78%) MINIMUM(70%) 0 I I 1 11 21 test number Figure 7 Result of Jackknife Test 40 20 4.3.2 SUCCESS RATE PER NUMBER OF EXAMPLES Figure 8 shows the relationship between the success rate and the number of examples. Of the twenty-five cases in the previous jackknife test, three are shown: maximum, average, and minimum. This graph shows that, in general, the more examples we have, the better the quality [see section 4.3.4]. success(%) MAXIMUM 80 t| /J~~~,~'~'~''--/-- .,, ~s' AVERAGE 70 . _. - . . . ' ' ' 50 1 11 21 no. of examples (x 100) Figure 8 Success Rate per No. of Examples 189 4.3.3 SUCCESS RATE PER DISTANCE Figure 9 shows the relationship between the success rate and the distance between the input and the most similar examples retrieved. 
This graph shows that in general, the smaller the distance, the better the quality. In other words, EBMT assigns the distance between the input and the retrieved examples us a reliability factor. SUCCESS 0.9 r 1592/1790 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1 0 23137 100 / 169 • 19/33 • = • 35167 951162 • 74/148 8/24 7/14 3/56 E• I I I I I 0 0.2 0.4 0.6 0.8 1 distance Figure 9 Success Rate per Distance 4.3.4 SUCCESSES AND FAILURES The following represents successful results: (1) the noun phrase "kyouto-eki [Kyoto-station] no o-mise [store]" is wansta_!ed according to the translation pattern "B at A" while the similar noun phrase, "kyouto[Kyoto] no shiten [branch]" is translated according to the translation pattern "13 in A"; (2) the noun phrase of the form "N~ no hou" is translated according to the translation pattern "A", in other words, the second noun is omitted. We ~e now studying the results carefully ~d are striving to improve the success rate. (a) About half of the failures are caused by a lack of similar examples. They are easily solved by adding appropriate examples. Co) The rest are caused by the existence of similar examples: (1) equivalent but different examples are retrieved, for instance, those of the form, "B of A" and "AB" for "rolm-gatsu [June] no futsu-ka [second]". This is one of the main reasons the graphs (Figure 7 and 8) show an up-and-down pattern. They can be regarded as a correct translation or the distance calculation may be changed to handle the problem; (2) Because the current distance calculation is inadequate, dissimilar examples are retrieved. 5 PHENOMENA OTHER THAN "N 1 no Nz" This section studies the phenomena, "N1 wa N2 da" and "- o-negaishimasu" with the same corpus used in the previous section. 5.1 "N x wa N~ da" A sentence of the form "N] wa N2 da" is called a "da" sentence. Here "N{' and '~2" ~e nouns, "wa" is a topical particle, and "da" is a kind of verb which, roughly speaking, is the English copula "be". The correspondences between "da" sentences and the English equivalents are exemplified in Figure 10. Mainly, "N~ wa N2 da" corresvonds to '~ be Nz" like (a-l) - (a-4). However, sentences like (b) - (e) cannot be translated according to the translation pattern ,N~ be N2". In example (d), there is no Japanese counterpart of "payment should be made- by". The English sentence has a modal, passive voice, the verb make, and its object, payment, while the Japanese sentence has no such correspondences. This translation cannot be made in a compositional way from the target words which ale selected from a normal dictionary. It is difficult to formulate rules for the translation and to explain how the translation is made. The conditions (Ca) and (Co) in section 3.1 hold true. Conventional approaches lead to the understanding of"da" sentences using contextual and exwa-linguistic information. However, many translations exist that are the result of human translators' understanding. Translation can be made by mimicking such similar examples. Example (e) is special, i.e., idiomatic. The condition (Co) in section 3.1 holds. (a) NI be N= watashi[I] kochira[this] denwa-bango[tel-no.] sanka-hi[fee] (b) N, cost N= yokoushuu[proc.] N, jonson[Johnson] jim ukyoku[secretariat] 06-951-0866106-951-0866] 85,000-en[85,000 yen] 30,000-en[30,000 yen] (c) for N,, the fee is N= kigyou[companies] 85,000-en[85,000 yen] (d) payment should be made by N= hiyou[fee] ginnkou-furikomi [bank-transfer] (e) the conference will end on N= saishuu-bi[final day] 10qatsu12nichi[Oct. 
12th] Figure 1 0 Examples of "N1 wa N2da" The distribution of N] and N2 in the examples 190 of our corpus vary for each case. Attention should be given to 2-tuples of nouns, (N1, N2). N2s of (a-4), (13) and (c) are similar, i.e., both mean "prices". However N~s are not similar to each other. Nls of (a-4) and (d) ~e similar, i.e., both mean "fee". However, the N2s ~e not similar to each other. Thus, EBMT is applicable. 5.2 "~ o-negaishimasu" Figure 11 exemplifies the conespondences between sentences of the form "~ o-negaishimasu" and the English equivalents. (a) may I speak to N (b) please give me N (c) please pay by N (d) yes, please (e) thank You Figure 11 jim ukyoku[secretariat] o o-negaishlmasu go-ju usyo[add ress] o... genkin[cash] de... hal... voroshiku... Examples of "~ o-negaishimasu" Translations in examples (b) and (c) are possible by finding substitutes in Japanese for give me and pay by, respectively. The conditions (Ca) and (Cc) in section 3.1 hold. Usually, this kind of supplement is done by contextual analysis. However, the connection between the missing elements and the noun in the examples is strong enough to reuse, because it is the product of a combination of translator expertise and domain specific restriction. Examples (a), (d) and (e) are idiomatic expressions. The condition (Cb) holds. The distribution of the noun and the particle in the examples of our corpus varies for each case in the same way as in the "da" sentence. EBMT is applicable. 6 CONCLUDING REMARKS Example-Based Machine Translation (EBMT) has been proposed. EBMT retrieves similar examples (pairs of source and target expressions), adapting them to translate a new source text. The feasibility of EBMT has been shown by implementing a system which translates Japanese noun phrases of the form '~1 no N2" into English noun phrases. The result of the experiment was encouraging. Bnaed applicability of EBMT was shown by studying the d~m from the text corpus. The advantages of integrating EBMT with RBMT were also discussed. The system has been written in Common Lisp, and is running on a Genera 7.2 Symbolics Lisp Machine at ATR. (1) IMPROVEMENT The more elaborate the RBMT becomes, the less expandable it is. Considerably complex rules concerning semantics, context, and the real world, are required in machine translation. This is the notorious AI bottleneck: not only is it difficult to add a new rule to the database of rules that are mutually dependent, but it is also difficult to build such a rule database itself. Moreover, computation using this huge and complex rule database is so slow that it forces a developer to abandon efforts to improve the system. RBMT is not easily upgraded. However, EBMT has no rules, and the use of examples is relatively localized. Improvement is effected simply by inputting appropriate examples into the database. EBMT is easily upgraded, which the experiment in section 4.3.2 has shown: the more examples we have, the better the quality. (2) RELIABILITY FACTOR One of the main reasons users dislike RBMT systems is the so-called "poisoned cookie" problem. RBMT has no device to compute the reliability of the result. In other words, users of RBMT cannot trust any RBMT translation, because it may be wrong without any such indication from system. Consider the case where all translation processes have been completed successfully, yet, the result is incorrect. 
In EBMT, a reliability factor is assigned to the translation result according to the distance between the input and the similar examples found [see the experiment in section 4.3.3]. In addition to this, retrieved examples that are similar to the input convince users that the translation is accurate. (3) TRANSLATION SPEED RBMT translates slowly in general because it is really a large-scale rule-based system, which consists of analysis, transfer, and generation modules using syntactic rules, semantic restrictions, structural transfer rules, word selections, generation rules, and so on. For example, the Mu system has about 2,000 rewriting and word selection rules for about 70,000 lexical items (Nagao et al. 1986). As recently pointed out (Furuse et al. 1990), conventional RBMT systems have been biased toward syntactic, semantic, and contextual analysis, which consumes considerable computing time. However, such deep analysis is not always necessary or useful for translation. In contrast with this, deep semantic analysis is avoided in EBMT because it is assumed that translations appropriate for given domain can be obtained using domain-specific examples (pairs of source and target expressions). EBMT directly returns a translation without reasoning through a long chain of rules [see 191 sections 2 and 4]. There is fear that retrieval from a large-scale example database will prove too slow. However, it can be accelerated effectively by both indexing (Sumita and Tsutsumi 1988) and parallel computing (Sumita and Iida 1991). These processes multiply acceleration. Consequently, the computation of EBMT is acceptably efficient. (4) ROBUSTNESS RBMT works on exact-match reasoning. It fails to translate when it has no knowledge that matches the input exactly. However, EBMT works on best-match reasoning. It intrinsically translates in a fail-safe way [see sections 2 and 4]. (5) TRANSLATORS EXPERTISE Formulating linguistic rules for RBMT is a difficult job and requires a linguistically trained staff. Moreover, linguistics does not deal with all phenomena occurring in real text (Nagao 1988). However, examples necessary for EBMT ~Ee easy to obtain because a large number of texts and their translations are available. These are realization of translator expertise, which deals with all real phenomena. Moreover, as electronic publishing increases, more and more texts will be machine-readable (Sadler 1989b). EBMT is intrinsically biased toward a sublanguage: strictly speaking, toward an example database. This is a good feature because it provides a way of automatically tuning itself to a sublanguage. REFERENCES Furuse~ O., Sumita, E. and Iida, H. 1990 "A Method for Realizing Transfer-Driven Machine Translation", Reprint of W(~L 80-8, IPSJ, (in Japanese). Hirei, M. and Kitahashi, T. 1986, "A Semantic Classification of Noun Modifications in Japanese Sentences and Their Analysis", Reprint of WGNL 58-1, IPSJ, (in Japanese). Kolodner, J. and Riesbeek, C. 1989 "Case-Based Reasoning", Tutorial Textbook of 11 th UCAI. Nagao, M. 1984 "A Framework of a Mechanical Translation Between Japanese and English by Analogy Principle", in A. Elithom and R. Banerji (ed.), Artificial and Human Intelligence, North-Holland, 173-180. Nagao, M. ,Tsujii, J. , Nakamura, J. 1986 "Machine Translation from Japanese into English", Proceedings of the IFI~.F., 74, 7. Nagao, M.(chair) 1988 "Language Engineering : The Real Bottleneck of Natural Language Processing", Proceedings of the 12th International Conference on Computational Linguistics. 
Nirenburg, S. 1987 Machine Translation, Cambridge University Press, 350. Nitta, Y. 1986 'Idiosyncratic Gap: A Tough Problem to Structure-bound Machine Translation", Proceedings of the 11th International Conference on Computational Linguistics, 107-111. Ogura, K., Hashimoto, K., and Morimoto, T. 1989 "Object-Oriented User Interface for Linguistic Database", Proceedings of Working Conference on Data and Knowledge Base Integration, University of Keele, England. Ohno, S. and Hamanishi, M. 1984 Ruigo-Shin-Jiten, Kadokawa, 93 2, (in Japanese). Sadler, V. 1989a ''Translating with a Simulated Bilingual Knowledge Bank(BKB)", BSO/Research. Sadler. V. 1989b Working with Analogical Semantics, Foris Publications, 25 6. Sato, S. and Nagao, M. 1989 "Memory-Based Translation", Reprint of WGNL 70-9, IPSJ, (in Japanese). Sato, S. and Nagao, M. 1990 "Toward Memory-Based Translation", Proceedings of the 13th International Conference o n Computational Linguistics. Shimazu, A. , Naito, S. , and Nomura, H. 1987 "Semantic Structure Analysis of Japanese Noun Phrases with Adnominal Particles", Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, 123-130. Stanf'dl, C. and Waltz, D. 1986 'Toward Memory-Based Reasoning", CACM, 29-12, 1213-1228. Sumita, E. and Tsutsumi, Y. 1988 "A Translation Aid System Using Flexible Text Retrieval Based on Syntax-Matching", Proceedings of The Second International Conference on Theoretical and Methodological Issues in Machine Translation of NaturalLanguages, CMU, Pittsburgh. Sumita, E., Iida, H. and Kohyama, H. 1990a '~l'ranslating with Examples: A New Approach to Machine Translation", Proceedings of The Third International Conference on Theoretical and Methodological Issues in Machine Translation of NaturalLanguages, Texas, 203-212. Sumita, E. /ida, H. and Kohyama, H. 1990b "Example-based Approach in Machine Translation", Proceedings of lnfoJapan "90, Part 2: 65-72. Sumita, E. and Iida, H. 1991 "Acceleration of Example-Based Machine Translation", (manuscript). 192
Resolving Translation Mismatches With Information Flow Megumi Kameyama, Ryo Ochitani, Stanley Peters The Center for the Study of Language and Information Ventura Hall, Stanford University, Stanford, CA 94305 ABSTRACT Languages differ in the concepts and real-world en- tities for which they have words and grammatical constructs. Therefore translation must sometimes be a matter of approximating the meaning of a source language text rather than finding an exact counterpart in the target language. We propose a translation framework based on Situation Theory. The basic ingredients are an information lattice, a representation scheme for utterances embedded in contexts, and a mismatch resolution scheme defined in terms of information flow. We motivate our ap- proach with examples of translation between En- glish and Japanese. 1 Introduction The focus of machine translation (MT) technol- ogy has been on the translation of sentence struc- tures out of context. This is doomed to limited quality and generality since the grammars of un- like languages often require different kinds of con- textual information. Translation between English and Japanese is a dramatic one. The definiteness and number information required in English gram- mar is mostly lacking in Japanese, whereas the hon- orificity and speaker's perspectivity information re- quired in Japanese grammar is mostly lacking in English. There are fundamental discrepancies in the extent and types of information that the gram- mars of these languages choose to encode. An MT system needs to reason about the context of utterance. It should make adequate assumptions when the information required by the target lan- guage grammar is only implicit in the source lan- guage. It should recognize a particular discrepancy between the two grammars, and systematically re- act to the needs of the target language. We propose a general reasoning-based model for handling translation mismatches. Implicit informa- tion is assumed only when required by the target language grammar, and only when the source lan- guage text allows it in the given context. Transla- tion is thus viewed as a chain of reactive reasoning *Linguistic Systems, Fujitsu Laboratories Ltd. between the source and target languages. 1 An MT system under this view needs: (a) a uni- form representation of the context and content of utterances in discourse across languages, (b) a set of well-defined reasoning processes within and across languages based on the above uniform representa- tion, and (c) a general treatment of translation mis- matches. In this paper, we propose a framework based on Situation Theory (Barwise and Perry 1983). First we will define the problem of translation mismatches, the key translation problem in our view. Second we will define the situated representation of an utter- mace. Third we will define our treatment of transla- tion mismatches as a flow of information (Barwise and Etchemendy 1990). At the end, we will discuss a translation example. 2 What is a translation mismatch? Consider a simple bilingual text: EXAMPLE I: BLOCKS (an AI problem) EWGLISH: Consider the blocks world wiCh three blocks, A, B, and C. The blocks A and B are on the table. C is on A. Which blocks are clear? 
JAPAIIESE: mlttu no tumaki A to B to C g~ 6ru tumild no sekLi wo ~ngaete three of block A and B and C NOM exist block of world ACC consider m/ru try A to ta no tun~ki ha tnkue no ue A and B of block TOPIC t&ble of &bore LOC riding C hl A mo ue n| notteiru C TOPIC A of .bore LOC riding n&nimo ue as nottelnai tam/hi hl dote h nothin& above LOC riding block TOPIC which ? Note the translation pair C is on A and C t~ A ~9 _h~j~-~w~ (C ha A no .e ni nofteirn). In En- 1 Such a reasoning-based MT system is one kind of "negotiation"- based system, as proposed by Martin Kay. We thank him for stimulating our thinking. 193 glish, the fact that C is on top of A is expressed using the preposition on and verb is. In Japanese, the noun _1= (ue) alone can mean either "on top of" or "above", and there is no word meaning just "on top of". Thus the Japanese translation narrows the relationship to the one that is needed by bringing in the verb j~-~ 77 w ~ (notteirn) 'riding'. This phe- nomenon of the same information being attached to different morphological or syntactic forms in differ- ent languages is a well-recognized problem in trans- lation. TRANSLATION DIVERGENCES 2 of this kind mani- fest themselves at a particular representation level. They can be handled by (i) STRUCTURE-TO-STRUCTURE TRANSFERS, e.g., structural transformations of Na- gao (1987), the sublanguage approach of Kosaka et al (1988), or by (ii) TRANSFER VIA A "DEEPER" COMMON GROUND, e.g., the entity-level of Carbonell and Tomita (1987), the lexical-conceptual structure of Dorr (1990). A solution of these types is not gen- eral enough to handle divergences at all levels, how- ever. More general approaches to divergences allow (iii) MULTI-LEVEL MAPPINGS, i.e., direct transfer rules for mapping between different representation levels, e.g., structural correspondences of Kaplan et al. (1989), typed feature structure rewriting sys- tem of Zajac (1989), and abduction-based system of Hobbs and Kameyama (1990). We want to call special attention to a less widely recognized problem, that of TRANSLATION MISMATCHES. They are found when the grammar of one language does not make a distinction required by the gram- mar of the other language. For instance, English noun phrases with COUNT type head nouns must specify information about definiteness and number (e.g. a town, the town, towns, and the towns are well-formed English noun phrases, but not town). Whereas in Japanese, neither definiteness nor num- ber information is obligatory. Note the translation pair Which blocks are clear? and f~ %_h~77 W~ W~]~Cg~ ~°~ ( Nanimo ne ni notteinai tnmiki ha dore ka) above. Blocks is plural, but tnmiki has no number information. A mismatch has a predictable effect in each trans- lation direction. From English into Japanese, the plurality information gets lost. From Japanese into English, on the other hand, the plurality informa- tion must be explicitly added. Consider another example, a portion of step-by- step instructions for copying a file from a remote system to a local system: EXAMPLE 2: FTP ~Thls term was taken from Dorr (1990) where the prob- lem of divergences in verb predicate-argument structures was treated. Our use of the term extends the notion to cover a much more general phenomenon. ENGLISH: 2. Type 'open', a space, and the name of the remote systems and press [return]. The system displays system connection messages and prompts for a user name. 3. Type the user name for your account on the remote system and press [return]. 
The system displays a message about passwords and prompts for a password if one is required. JAPANESE: 2. open ~1~ ~ ~-- b'~':~-.a,~:~-'l' 7"b~ ~--y ~o 'open' kuuhaku rimooto sisutemu met wo taipu si [RETURN] 'open' space remote system name ACC type and [RETURN] slsntemn setnsokn messeesi to ynnsaa reel wo ton puronputo system connection message and user name ACC ash prompt ga hyousi s~reru NOM display PASSIVE rimooto slsutemu deno sihun no ak~unto no yuusa met remote system LOC SELF of account of user name wo t~ipu s| [RETURN] wo osu ACC type and [RETURN] ACC push pasuwaado ni ksnsurn messeess to, moshi pasuwaado Sa p~ssword about messaKe And, if password NOM hltuyon nara po~suwaado wo tou pronputo ga hyoujl sarern required then password ACC ask prompt NOM dlsplay PASSIVE The notable mismatches here are the definiteness and number of the noun phrases for "space," "user name," "remote system," and "name" of the remote system in instruction step 2, and those for "mes- sage," "password," and "user name" in step 3. This information must be made explicit for each of these references in translating from Japanese into English whether or not it is decidable. It gets lost (at least on the surface) in the reverse direction. Two important consequences for translation fol- low from the existence of major mismatches be- tween languages. First in translating a source lan- guage sentence, mismatches can force one to draw upon information not expressed in the sentence information only inferrable from its context at best. Secondly, mismatches may necessitate making in- formation explicit which is only implicit in the source sentence or its context. For instance, the alterna- tion of viewpoint between user and system in the FTP example is implicit in the English text, de- tectable only from the definiteness of noun phrases like "a/the user name" and "a password," but Japanese grammar requires an explicit choice of the user's viewpoint to use the reflexive pronoun zibsn. When we analyze what we called translation di- vergences above more closely, it becomes clear that divergences are instances of lexical mismatches. In the blocks example above, for instance, there is a mismatch between the spatial relations expressed with English on, which implies contact, and Japanese 194 ue, which implies nothing about contact. It so hap- pens that the verb "notteiru" can naturally resolve the mismatch within the sentence by adding the in- formation "on top of". Divergences are thus lexical mismatches resolved within a sentence by coocur- ring lexemes. This is probably the preferred method of mismatch resolution, but it is not always possi- ble. The mismatch problem is more dramatic when the linguistic resources of the target language offer no natural way to match up with the information content expressed in the source language, as in the above example of definiteness and number. This problem has not received adequate attention to our knowledge, and no general solutions have been pro- posed in the literature. Translation mismatches are thus a key transla- tion problem that any MT system must face. What are the requirements for an MT system from this perspective? First, mismatches must be made rec- ognizable. Second, the system must allow relevant information from the discourse context be drawn upon as needed. Third, it must allow implicit facts be made explicit as needed. Are there any system- atic ways to resolve mismatches at all levels? What are the relevant parameters in the "context"? 
How can we control contextual parameters in the transla- tion process? Two crucial factors in an MT system are then REPRESENTATION and REASONING. We will first describe our representation. 3 Representing the translation con- tent and context Translation should preserve the information con- tent of the source text. This information has at least three major sources: Content, Context, Language. From the content, we obtain a piece of information about the relevant world. From the context, we obtain discourse-specific and utterance-specific in- formation such as information about the speaker, the addressee, and what is salient for them. From the linguistic forms (i.e., the particular words and structures), through shared cooperative strategies as well as linguistic conventions, we get information about how the speaker intends the utterance to he interpreted. DISTRIBUTIVE LATTICE OF INFONS. In this approach, pieces of information, whether • they come from linguistic or non-linguistic sources, are represented as infons (Devlin 1990). For an n- place relation P, ((P, Zl, ...,z, ;1)) denotes the in- formational item, or infon, that zl, ..., xn stand in the relation P, and ((P, Zl,...,zn ;0)) denotes the infon that they do not stand in the relation. Given a situation s, and an infon or, s ~ ~ indicates that the infon a is made factual by the situation s, read s supports ~r . Infons are assumed to form a distributive lattice with least element 0, greatest element 1, set I of infons, and "involves" relation :~ satisfying: 3 for infons cr and r, if s ~ cr and cr ~ r then s ~ 1- This distributive lattice (I, =~), together with a nonempty set Sit of situations and a relation ~ on Sit x I constitute an infon algebra (see Barwise and Etchemendy 1990). THE LINGUISTIC INFON LATTICE. We propose to use infons to uniformly represent infor- mation that come from multiple "levels" of linguis- tic abstraction, e.g., morphology, syntax, semantics, and pragmatics. Linguistic knowledge as a whole then forms a distributive lattice of infons. For instance, the English words painting, draw- ing, and picture are associated with properties; call them P1, P2, and P3, respectively. In the following sublattice, a string in English (EN) or Japanese(JA) is linked to a property with the SIGNIFIES relation (written ==),4 and properties themselves are inter- linked with the INVOLVES relation (=~): EN: "picture" ~-= Pl((picture, x; 1)) EN: "painting" == P2((painting, x; 1)) EN: "drawing" == P3((drawing, x; 1)) EN: "oil painting" =----- P4((oil painting, x; 1~ EN: "water-color" == Ph((water-color, x; 1)) P2 ¢> P1, P3 ~ P1, P4 =P P2, PS =P P2 So far the use of lattice appears no different from familiar semantic networks. Two additional factors bring us to the basis for a general translation frame- work. One is multi-linguality. The knowledge of any new language can be added to the given lattice by inserting new infons in appropriate places and adding more instances of the "signifies" relations. The other factor is grammatical and discourse-functional notions. Infons can be formed from any theoretical notions whether universal or language-specific, and placed in the same lattice. Let us illustrate how the above "picture" sublat- tice for English would be extended to cover Japanese words for pictures. In Japanese, ~ (e) includes both paintings and drawings, but not photographs. It is thus more specific than picture but more general than painting or drawing. 
No Japanese words co- signify with painting or drawing, but more specific concepts have words-- ~ (aburae) for P4, (suisaiga) for P5, and the rarely used word ~ (senbyou) for artists' line drawings. Note that syn- onyms co- signify the same property. (See Figure 1 for the extended sublattice.) 3We assmne that the relation =~ on infons is transitive, reflexive, and anti-symmetric after Barwise and Etchemendy. 4This is our addition to the infon lattice. The SIGNIFIES relation links the SIGNIFIER and SIGNIFIED to forrn a SIGN (de Saussure 1959). Our notation abbreviates standard infons, e.g., ((signifies, "picture", EN, P1; 1)) . 195 EN:"picture n m---- P1 ((picture,x; I)) n EN:-p~intins~ JA:'e m m--.-- P6 ((e,z; 1)) ((p~tnting,z; 1))P2 P3 ((drawlns,x;l))----~ EN:Sdrawing" R> P7 ((line dr&wing,z;1)) ({oil p&intins,~c; 1))P4 P5 ((water.colorjc;1)) %% JA:asenbyou ~ EN:~oil p~nting j EN:aw&ter.color j JA:U&burae" JA: =suis~lga = Figure 1: The "Picture" Sublattice ((give, x, y, .;i)) ^ ((pov, x;l)) ^({look-up, s, s; 0)) ^((look-down, s, m;0)) ^((speaker, s, 1)) ((give, z, y, s;1)) ^((pov, s;l)) A((looLup, s, x;0)) ^((look-down, a, x;O)) ^((spe6ker, s;l)) ({give, x, y, s;l)) ((give, z, y, s;l)) ^((po., ffi;l)) ^((pov, x;z)) ^((look.up, s, s;1)) ^((look-up, s, s;0)) ^((look-down, s, I;O)) ^((look.down, s, s;1)) ^((.p.~ker, .~11) __~--_ JA-.Ukudas~ru~'"~ ~..~ JA:~yokosu N ((~.~, •, y, .;1)) ((gi.~, =, ~, .;1)) ^((pov, s~s)) ^((~o., J;s)) ^((look-up, s, x;1)) ^((look.up, s, x;0)) ^((look.down, s, x; 0)) ^((look-down, s, x;1)) ^((speaker s;l)) ^((speaker 8;1)) Figure 2: Verbs of giving JA: "Jr(e)" == PO((e, x; 1)) JA: "~l~(aburae)" == P4({oil painting, x; I}) JA: "f#L~iU(muisaiga)" ----= PS((water-color, x; 1)) JA: "W/~/l(senbyou)" ----= P7{(senbyou, x; I}) P2 =~ P6, P3 =P PS, PS =~ PI, P7 =#P P3 Lexical differences often involve more complex prag- matic notions. For instance, corresponding to the English verb give, Japanese has six basic verbs of giving, whose distinctions hinge on the speaker's perspectivity and honorificity. For "X gave Y to Z" with neutral honorificity, ageru has the viewpoint on X, and burets, the viewpoint on Z. Sasiageru honors Z with the viewpoint on X, and l~udasaru honors X with the viewpoint on Z, and so on. See Figure 2. As an example of grammatical notions in the lat- tice, take the syntactic features of noun phrases. English distinguishes six types according to the pa- rameters of count/mass, number, and definiteness, whereas Japanese noun phrases make no such syn- tactic distinctions. See Figure 3. Grammatical no- tions often draw on complex contextual properties such as "definiteness", whose precise definition is a research problem on its own. THE SITUATED UTTERANCE REPRE- SENTATION. A translation should preserve as far as practical the information carried by the source text or discourse. Each utterance to be translated gives information about a situation being described-- precisely what information depends on the context in which the utterance is embedded. We will utilize what we call a SITUATED UTTERANCE REPRESEN- TATION (SUR) to integrate the form, content, and ~ N ~ UN~:JA =;0)) Figure 3: The "NP" Sublattice context of an utterance. 5 In translating, contextual information plays two key roles. One is to reduce the number of possible translations into the target language. The other is to support reasoning to deal with translation mismatches. 
Four situation types combine to define what an utterance is: Described Situation The way a certain piece of reality is, according to the utterance Phrasal Situation The surface form of the utter- ance Discourse Situation The current state of the on- going discourse when the utterance is produced Utterance Situation The specific situation where the utterance is produced The content of each utterance in a discourse like the Blocks and FTP examples is that some situa- tion is described as being of a certain type. This is the information that the utterance carries about the DESCRIBED SITUATION. The PHRASAL SITUATION represents the surface form of an utterance. The orthographic or phonetic, phonological, morphological, and syntactic aspects of an utterance are characterized here. The DISCOURSE SITUATION is expanded here in situation theory to characterize the dynamic as- pect of discourse progression drawing on theories in computational discourse analysis. It captures the linguistically significant parameters in the cur- rent state of the on-going discourse, s and is espe- cially useful for finding functionally equivalent re- ferring expressions between the source and target languages. ¢ • reference time = the time pivot of the linguistic SOur characterization of the context of utterance draws on a number of existing approaches to discourse representa- tion and discourse processing, most notably those of Grosz and Sidner (1986), Discourse Representation Theory (Kamp 1981, Helm 1982), Situation Semantics (Barwise and Perry 1983, Gawron and Peters 1990), and Linguistic Discourse Model (Scha and Polanyi 1988). °Lewis (1979) discussed a number of such parameters in a logical framework. 7Different forms of referring expressions (e.g. pronouns, demonstratives) and surface structures (i.e. syntactic and 196 description ("then") s • point of view = the individual from whose view- point a situation is described ~ • attentional state -- the entities currently in the focus and center of attention ~° • discourse structural context = where the utter- ance is in the structure of the current discourse I z The specific UTTERANCE SITUATION contains in- formation about those parameters whose values sup- port indexical references and deixes: e.g., informa- tion about the speaker, hearer(s), the time and loca- tion of the utterance, the perceptually salient con- text, etc. The FTP example text above describes a situation in which a person is typing commands to a com- puter and it is displaying various things. Specif- ically, it describes the initial steps in copying a file from a remote system to a local system with ftp. Consider the first utterance in instruction step ~uttering, x, u, t; 1 ~ ^ ~addressing, ~, y, t; 1 Note that the parameter y of DeS for the user (to whom the discourse is addressed) has its value constrained in US; the same is true of the param- eter t for utterance time. Similarly, the parameter r of DeS for the definite remote system under dis- cussion is assigned a definite value only by virtue of the information in DiS that it is the unique remote system that is salient at this point in the discourse. This cross-referencing of parameters between types constitutes further support for combining all four situation types in a unified SUR. In order for the analysis and generation of an utterance to be as- sociated with an SUIt, the grammar of a language should be a set of constraints on mappings among the values assigned to these parameters. 
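As a rough illustration of how a SUR might be stored, the following Python sketch groups infons (encoded as plain tuples) under the four situation types. The field names and the schematic parameter values for the FTP utterance are our own simplification, not the authors' representation.

# An illustrative encoding of a Situated Utterance Representation: four
# situation types, each a set of infon tuples (relation, args..., polarity).

from dataclasses import dataclass, field

@dataclass
class SUR:
    described: set = field(default_factory=set)   # DeS: what the utterance says
    phrasal:   set = field(default_factory=set)   # PS: surface form
    discourse: set = field(default_factory=set)   # DiS: reference time, POV, focus, ...
    utterance: set = field(default_factory=set)   # US: speaker, addressee, time, place

# A fragment of the SUR for "Type the user name for your account on the
# remote system and press [return]." (parameter names are schematic).
sur = SUR(
    described={("type", "y", "n", "t1", 1), ("press", "y", "k", "t1", 1),
               ("named", "k", "[return]", 1)},
    phrasal={("language", "u", "English", 1),
             ("np", "the user name", 1), ("definite", "the user name", 1)},
    discourse={("focus", "r", "remote system", 1), ("pov", "addressee", 1)},
    utterance={("uttering", "x", "u", "t", 1), ("addressing", "x", "y", "t", 1)},
)

def mentions(param, sur):
    """Cross-reference a parameter: every infon, in any of the four situations,
    that mentions it (e.g. the addressee parameter y)."""
    return [i for s in (sur.described, sur.phrasal, sur.discourse, sur.utterance)
            for i in s if param in i]

print(mentions("y", sur))

The cross-referencing function mirrors the observation above that parameters of one situation type are constrained by information in another, which is the motivation for bundling all four types into one SUR.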
4 Translation as information flow 3 repeated here: Type the user name for your . . . . d , - ~ Translation must often be a matter of approxi- accoun~ on ~ne remo~e system an press Lre~urnj . . . . . . . . . . It occurs in a type of DISCOURSE SITUATION where mating the meaning oI a source mnguage ~ex~ ramer than finding an exact counterpart in the target lan- there has previously been mention of a remote sys- tem and where a pattern has been established of alternating the point of view between the addressee and another agent (the local computer system). We enumerate below some of the information in the SUl~ associated with this utterance. The Described Situation (DES) of the utterance is ~type, y,n,t~;1 ~ A ~press, y,k,tl~;1 ~ where n satisfies n = n I ~=~ ~named, a, n~; 1 ~ a satisfies ~account, a, y,r; 1 ~ r satisfies ~system, r; 1 A ~'~remotefrom, r,y;1 ~tlsatisfies~later, t~,t;1 ~'n , k satisfies ~named,k,[return];l~ t satisfies ~later, t , t ; 1 The Phrasal Situation (PS) of the utterance is ~language, u,English; 1 ~ ^ ~written, u, "Type the user name for your account on the remote system and press [return]."; 1 ~ ^ ~syntax, u,{...~written, e, "the user name"; 1 ~ ^ ~np, e; 1 ~ ^ ~deflnite, e; 1 ~, A ~singular, e; 1 ~ ^ ...}; 1 The Discourse Situation (DIS) is r = r ~ ~ ~focus, el,remote system; 1 ~, Finally, the Utterance Situation (US) is phonetic) often carry equivalent discourse functions, so ex- plicit discourse representation is needed in translating these forms. See also Tsujil (1988) for this point. s Reichenb~.h (1947) pointed out the significance of refer- ence time, which in the FTP example accounts for why the addressee is to press [return] after typing the user name of his/her remote a~count. 9 Katagiri (to appear) describes how this parameter inter- acts with Japanese grammar to constrain use of the reflexive pronoun zibu~. 10 See Grosz (1977), Grosz et al. (1983), Kameyama (1986), Brennan et al. (1987) for discussions of this parameter. llThis parameter may be tied to the "intentional" aspect of discourse as proposed by Grosz and Sidner (1986). See, e.g., Scha and Polanyi (1988) and Hobbs (1990) for discourse structure models. guage since languages differ in the concepts and real-world entities for which they have words and grammatical constructs. In the cases where no translation with exactly the same meaning exists, translators seek a target lan- guage text that accurately describes the same real world situations as the source language text. 12 The situation described by a text normally includes ad- ditional facts besides those the text explicitly states. Human readers or listeners recognize these addi- tional facts by knowing about constraints that hold in the real world, and by getting collateral informa- tion about a situation from the context in which a description is given of it. For a translation to be a good approximation to a source text, its "fleshed out" set of facts--the facts its sentences explicitly state plus the additional facts that these entail by known real-world constraints--should be a maximal subset of the "fleshed out" source text facts. Finding a translation with the desired property can be simplified by considering not sets of facts (infons) but infon lattices ordered by involvement relations including known real-world constraints. If a given infon is a fact holding in some situation, all infons in such a lattice higher than the given one (i.e., all further infons it involves) must also be facts in the situation. 
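The "fleshing out" step just described can be read as a closure computation over the involves relation, and the approximation criterion as a comparison of closed fact sets. The Python fragment below is a minimal sketch under that reading; the toy infons and the selection rule (keep candidates whose fleshed-out facts stay within the fleshed-out source facts, then maximize coverage) paraphrase the preceding paragraph rather than reproduce any actual system.

# Illustrative sketch: close a set of facts under an involves relation, then
# rank candidate target texts by how much of the source content they preserve.

def flesh_out(facts, involves):
    """If i is a fact and i involves j, then j is a fact too (transitive closure)."""
    out, frontier = set(facts), list(facts)
    while frontier:
        for j in involves.get(frontier.pop(), ()):
            if j not in out:
                out.add(j)
                frontier.append(j)
    return out

def best_translation(source_facts, candidates, involves):
    """`candidates` maps a target-language string to the facts it explicitly
    states; prefer candidates whose fleshed-out facts are a subset of the
    fleshed-out source facts and cover as many of them as possible."""
    src = flesh_out(source_facts, involves)
    closed = {t: flesh_out(f, involves) for t, f in candidates.items()}
    admissible = {t: f for t, f in closed.items() if f <= src}
    return max(admissible, key=lambda t: len(admissible[t]), default=None)

# Toy data: stating "oil painting" involves stating "painting" and "picture".
involves = {"oil_painting(x)": {"painting(x)"}, "painting(x)": {"picture(x)"}}
source = {"oil_painting(x)"}
candidates = {"This is a picture": {"picture(x)"},
              "This is a painting": {"painting(x)"}}
print(best_translation(source, candidates, involves))  # -> 'This is a painting'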
Thus a good translation can be found by looking for the lowest infons in the lattice that the source text either explicitly or im- plicitly requires to hold in the described situation, and finding a target language text that either ex- plicitly or implicitly requires the maximal number 12In some special cases, translation requires mapping be- tween different hut equivalent real world situations, e.g., cars drive on different sides of the street in Japan and in the US. 197 of them to hold. 13 THE INFORMATION FLOW GRAPH. Trans- lation can be viewed as a flow of information that re- sults from the interaction between the grammatical constraints of the source language (SL) and those of the target language (TL). This process can be best modelled with information flow graphs (IFG) defined in Barwise and Etchemendy 1990. An IFG is a semantic formalization of valid reasoning, and is applicable to information that comes from a variety of sources, not only linguistic but also visual and other sensory input (see Barwise and Etchemendy 1990b). By modelling a treatment of translation mismatches with IFGs, we aim at a semantically correct definition that is open to various implemen- tations. IFGs represent five basic principles of information flow: Given Information present in the initial assump- tions, i.e., an initial "open case." Assume Given some open case, assume something extra, creating an open subcase of the given case. Subsume Disregard some open case if it is sub- sumed by other open cases, any situation that supports the infons of the subsumed case sup- ports those of one of the subsuming cases. Merge Take the information common to a number of open cases, and call it a new open case. Recognize as Possible Given some open case, rec- ognize it as representing a genuine possibility, provided the information present holds in some situation. RESOLVING MISMATCHES. First~ a trans- lation mismatch is recognized when the generation of a TL string is impossible from a given set of in- fons. More specifically, given a Situated Utterance Representation (SUIt), when no phrasal situations of TL support SUR because no string of TL sig- nifies infon a in SUR, The TL grammar cannot generate a string from SUR, and there is a TRANSLATION MISMATCH on 0 r. A translation mismatch on ~, above is resolved in one of two directions: Mismatch Resolution by Specification: Assume a specific case r such that r =:~ and there is a Phrasal Situation of TL that supports v. A new open case SUR' is then generated, adding r to SUR. 13As more sophisticated translation is required, We could make use of the multiple situation types to give more impor- tance to some aspects of translation than others depending on the purpose of the text (see Hauenschild (1988) for such translaion needs). This is the case when the Japanese word ~ (e) is translated into either painting or drawing in English. The choice is constrained by what is known in the given context. Mismatch Resolution by Generaliza- tion: Assume a general case r such that a =~ r and there is a Phrasal Situation of TL that supports r. A new open case SUR' is then generated, adding 7- to SUR. This is the case when the Japanese word ~ (e) is translated into picture in English, or English words ppainting and drawing are both translated into (e) in Japanese. That is, two different utterances in English, I like this painting and I like this draw- ing, would both be translated into ~J~l'~ ~ Otl~ff ~'~ (watasi wa kono e ga suki desn) in Japanese according to this scheme. 
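The two resolution directions can be illustrated with a short sketch. The code below is schematic and ours alone: the lattice neighbours are hard-coded, the Japanese items are romanized, and the contrast check is a simple stand-in for the blocking condition discussed in the next paragraph.

# A schematic sketch of the two mismatch-resolution moves.  `expressible` is
# the set of properties that some target-language string signifies;
# `more_specific`/`more_general` enumerate neighbours in the involves lattice.

def resolve(prop, contrast_set, expressible, more_specific, more_general):
    """Return (strategy, properties) for a property with no TL expression.
    Generalization is blocked if it would collapse a contrast that the source
    text draws (cf. the painting/drawing vs. 'e' example)."""
    if prop in expressible:
        return ("exact", [prop])
    general = [g for g in more_general.get(prop, ()) if g in expressible]
    collapses = any(g in more_general.get(c, ()) and c != prop
                    for g in general for c in contrast_set)
    if general and not collapses:
        return ("generalization", general)      # one broader open case
    specific = [s for s in more_specific.get(prop, ()) if s in expressible]
    return ("specification", specific)          # several narrower open cases

more_general  = {"painting": ["e"], "drawing": ["e"]}
more_specific = {"painting": ["aburae", "suisaiga"], "drawing": ["senbyou"]}
expressible_ja = {"e", "aburae", "suisaiga", "senbyou"}

# "I like this painting": no contrast with 'drawing' in the utterance.
print(resolve("painting", set(), expressible_ja, more_specific, more_general))
# -> ('generalization', ['e'])

# "I like Matisse's drawings better than paintings": the contrast matters.
print(resolve("painting", {"drawing"}, expressible_ja, more_specific, more_general))
# -> ('specification', ['aburae', 'suisaiga'])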
Resolution by generalization is ordinarily less con- strained than resolution by specification, even though it can lose information. It should be blocked, how- ever, when generalizing erases a key contrast from the content. For example, given an English utter- ance, I like Matisse's drawings better than paintings, the translation into Japanese should not generalize both drawings and paintings into ~ (e) since that would lose the point of this utterance completely. The mismatches must be resolved by specification in this case, resulting in, for instance, $J~1"~'¢~" 4 gO ~tt~e~A~ ]: 9 ~ ~ t ~ ~'t?'J" ( watasi wa Ma- tisse no abnrae ya snisaiga yorimo senbyou ga suki dest 0 'I like Matisse's line_drawings(P7) better than oil_paintings(P4) or water-colors(P5)'. There are IFGs for the two types of mismatch resolution. Using o for an open (unsubsumed) node and • for a subsumed node, we have the following: Mismatch Resolution by Specification: (given r :~ a) Given: o{a} Assume: ?{a} / 6{¢, ~} Mismatch Resolution by Generalization: (given o" :¢, ¢) Given: o{a} Assume: l{a} Subsume: l{a} 6{¢,¢} ~{q,T} Both resolution methods add more infons to the given SUR by ASSUMPTION, but there is a differ- ence. In resolution by specification, subsequent sub- surnption does not always follow. That is, only by contradicting other given facts, can some or all of the newly assumed SUR's later be subsumed, and only by exhaustively generating all its subcases, the original SUR can be subsumed. In resolution by generalization, however, the newly assumed general case immediately subsumes the original SUR. 14 14Resolution by specification models a form of abductive inference, and generalization, a form of deductive inference 198 Source Language Target L~ngu@ge Discourse Situations DiS 1 .. DiS m Utterance Situations US 1 .. USI Phrasal Situations PS 1 .. PS k Discourse Situations ~is i .. Dis~, Utterance Situations ~s i .. ~s i, Phrasal Situations Psi .. Psi,, Figure 4: Situated Translation THE TRANSLATION MODEL. Here is our characterization of a TRANSLATION: Given a SUR ( DeT, PS, DiS, US ) of the nth source text sentence and a dis- course situation DiS" characterizing the target language text following translation of the (n-1)st source sentence, find a SUR ( DeT', PS ~, DiS ~, US ~) allowed by the tar- get language grammar such that DiS" _C DiS ~ and ( DeT, PS, DiS, US ) ,~ ( DeT s, PS s, DiS ~, US'). (N is the approximates relation we have discussed, which constrains the flow of in- formation in translation.) Our approach to translation combines SURs and IFGs (see Figure 4). Each SUR for a possible inter- pretation of the source utterance undergoes a FLOW OF TRANSLATION as follows: A set of infons is ini- tially GIVEN in an SUR. It then grows by mismatch resolution processes that occur at multiple sites un- til a generation of a TL string is RECOGNIZED AS POSSIBLE. Each mismatch resolution involves AS- SUMING new SUR's and SUBSUMING inconsistent or superfluous SUR's. ~s Our focus here is the epistemologicai aspect of translation, but there is a heuristically desirable property as well. It is that the proposed mismatch resolution method uses only so much additional in- formation as required to fill the particular distance between the given pair of linguistic systems. That is, the more similar two languages, leas computa- tion. This basic model should be combined with various control strategies such as default reasoning in a sltuation-theoretic context. 
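For concreteness, the five IFG operations can be recorded with very simple bookkeeping over sets of infons. The class below is a sketch of the data structure only, with invented infon strings; it is neither the authors' system nor Barwise and Etchemendy's formal definition.

# Illustrative information-flow-graph style bookkeeping of "open cases".

class IFG:
    def __init__(self, given):
        self.cases = [set(given)]        # Given: the initial open case
        self.closed = []                 # subsumed cases

    def assume(self, case_index, extra):
        """Assume: open a subcase of an existing case with extra infons."""
        self.cases.append(set(self.cases[case_index]) | set(extra))
        return len(self.cases) - 1

    def subsume(self, case_index):
        """Subsume: close a case that other open cases render superfluous
        or that contradicts given information."""
        self.closed.append(self.cases.pop(case_index))

    def merge(self, indices):
        """Merge: new open case holding the information common to several cases."""
        self.cases.append(set.intersection(*(self.cases[i] for i in indices)))
        return len(self.cases) - 1

    def recognize_possible(self, supports):
        """Recognize as Possible: open cases whose infons some situation supports
        (`supports` is a predicate standing in for TL generation succeeding)."""
        return [c for c in self.cases if supports(c)]

g = IFG(given={"account(a,y,r)", "remote(r,y)"})
g.assume(0, {"definite(r)", "singular(r)"})      # one subcase per NP type
g.assume(0, {"indefinite(r)", "singular(r)"})
g.subsume(2)                                     # inconsistent with focus information
print(g.recognize_possible(lambda case: "definite(r)" in case))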
One way to implement these methods is in the abduction-based system proposed by Hobbs and Kameyama (1990). ~SA possible use of MERGE in this application is that two different SUit's may be merged when an identical TL string would be generated from them. ((count,.x.zx)) ~ p 5 ((de~,x;o)) Uthe user name N athe user n&mes u ua user name ~ ~user nlLmesn Figure 5: The IFG for NP Translation in an actual implementation. 5 A translation example We will now illustrate the proposed approach with a Japanese-to-English translation example: the first sentence of instruction step 3 in the FTP text. INPUT STRING: "3. ~ -~'-- ]- ":/.~ ~'J-~'C'~'J ~'ff)7" ~ / ~ = - - - ~ ~ " 7"L~ ~--y~9-o " 1. In the initial SUR are infons for 9 -~-- b ":I ~ ~" (rimoofo sisutemu) 'remote system', 7' ~ :I i. (akaunfo) 'account', and :'---'~ (yu~zaa mei) 'user name'. All of thesewords signify properties that are signified by English COUNT nouns but the Japanese SUR lacks definiteness and number information. 2. Generation of English from the SUR fails be- cause, among other things, English grammar requires NPs with COUNT head nouns to be of the type, Sg-Def, Sg-Indef, PI-Def, or Pl-Indef. (translation mismatch) 3. This mismatch cannot be resolved by general- ization. It is resolved by assuming four sub- cases for each nominal, and subsuming those that are inconsistent with other given informa- tion. The "remote system" is a singular entity in focus, so it is Sg-Def, and the other three subcases are subsumed. The "user name" is an entity in center, so Definite. The "account" is Definite despite its first mention because its possesser (addressee) is definite. Both "user name" and "account" can be either Singular or Plural at this point. Let's assume that a form of default reasoning comes into play here and concludes that a user has only one user name and one account name in each computer. 4. The remaining open case permits generation of English noun phrases, so the translation of this utterance is done. OUTPUT STRING: "Type the user name for your account on the remote system and ..." 6 Conclusions In order to achieve high-quality translation, we need a system that can reason about the context of utterances to solve the general problem of transla- 199 tion mismatches. We have proposed a translation framework based on Situation Theory that has this desired property. The situated utterance represen- tation of the source string embodies the contextual information required for adequate mismatch reso- lution. The translation process has been modelled as a flow of information that responds to the needs of the target language grammar. Reasoning across and beyond the linguistic levels, this approach to translation respects and adapts to differences be- tween languages. 7 Future implications We plan to design our future implementation of an MT system in light of this work. Computational studies of distributive lattices constrained by multi- ple situation types are needed. Especially useful lin- guistic work would be on grammaticized contextual information. More studies of the nature of transla- tion mismatches are also extremely desirable. The basic approach to translation proposed here can be combined with a variety of natural language processing frameworks, e.g., constraint logic, ab- duction, and connectionism. Translation systems for multi-modal communication and those of multi- ple languages are among natural extensions of the present approach. 8 Acknowledgements We would like to express our special thanks to Hidetoshi Sirai. 
Without his enthusiasm and en- couragement at the initial stage of writing, this pa- per would not even have existed. This work has evolved through helpful discussions with a lot of people, most notably, Jerry Hobb8, Yasuyoshi Ina- gaki, Michio Isoda, Martin Kay, Hideo Miyoshi, Hi- roshi Nakagawa, Hideyuki Nakashima, Livia Polanyi, and Yoshihiro Ueda. We also thank John Etchemendy, David Israel, Ray Perrault, and anonymous review- ers for useful comments on an earlier version. References [1] Barwise, Jon and John Etchemendy. 1990. Information, In- fons, and Inference. In Cooper et ai. (eds), 33-78. [2] Barwise, Jon and John Etchemendy. 1990b. Visual Informa- tion and Valid Reasoning. In W. Zimmerman (ed.) Visualiza- tion in Mathematics. Washington DC: Mathematical Associ- ation of America. [3] Barwise, Jon, and John Perry. 1983. Situations and Atti- tudes. Cambridge, MA: MIT Press. [4] Brennan, Susan, Lyn Friedman, and Carl Pollard. 1987. A Centering Approach to Pronouns. In Proceedings of the 25th Annual Meeting of the Association for Computational Lin- guistics, Cambridge, MA: ACL, 155-162. [5] Carbonell, Jaime G. and Masaru Tomita. 1987. Knowledge- based Machine Translation, the CMU Approach. In Nirenburg (ed.), 68-89. [6] Cooper, Robin, Kuniaki Mukai, and John Perry (eds) 1990. Situation Theory and Its Applications, Volume 1, CSLI Lec- ture Notes Number 22. Stanford: CSLI Publications. [7] Devlin, Keith. 1990. lnfons and Types in an Information° Based Logic. In Cooper et el. (eds), 79-96. [8] Dorr, Bonnie. 1990. Solving Thematic Divergences in Ma- chine Translation. In Proceedings of the £8th Annual Meet- ing of the Association for Computational Linguistics, Pitts- burgh, PA, 127-134. [9] Gawron, J. Mark and Stanley Peters. 1990. Anaphora and Quantification in Situation Semantics, CSLI Lecture Notes Number 19. Stanford: CSLI Publications. [10] Grosz, Barbara. 1977. The Representation and Use of Fo- cus in Dialogue Understanding. Technical Report 151, SPA International, Menlo Park, CA. [11] Grosz, Barbara J., Aravind K. Joshi, and Scott Weinstein. 1983. Providing a Unified Account of Definite Noun Phrases in Discourse. In Proceedings of the £1st Annual Meeting of the Association for Computational Linguistics, Cambridge, MA, 44-50. [12] Grosz, Barbara J. and Candace L. Sidner. 1986. Atten- tion, Intention, and the Structure of Discourse. Computa- tional Linguistics, 12(3), 175-204. [13] Hauenschild, Christa. 1988. Discourse Structure - Some Im- plications for Machine Translation. In Maxwell et el. (eds), 145o156. [14] Heim, Irene R. 1982. The Semantics of Definite and In- definite Noun Phrases. PhD dissertation, University of Mas- sachusetts at Amherst. [151 Hobbs, Jerry. 1990. Literature and Cognition. CSLI Lec- ture Note Number 21. Stanford: CSLI Publications. [16] H0bbs, Jerry and Megumi Karneyama. 1990. Translation by Abduction. In Proceedings of the 13th International Confer- ence on Computational Linguistics, Helsinki, Finland. [17] Kameyama, Megumi. 1986. A Property-sharing Constraints in Centering. In Proceedings of the £4th Annual Meeting of the Association for Computational Linguistics, Cambridge, MA: ACL, 200-206. [18] Kemp, Hans. 1981. A Theory of Truth and Semantic Rep- resentation. In 3. Groenendijk, T. Jansaen, and M. Stokhof (eds), Formal Methods in the Study of Language. Amster- dam: Mathematical Center. [19] Kaplan, Ronald M., Klaus Netter, Jiirgen Wedekind, and Annie Zaenen. 1989. Translation by Structural Correspon- dences. 
In Proceedings of the 4th Conference of the European Chapter of the Association for Computational Linguistics, Manchester, United Kingdom, 272-281. [20] Katagiri, Yasuhiro. To appear. Structure of Perspectivity: A Case of Japanese Reflexive Pronoun "zibun". Paper pre- sented at STASS-90, Scotland. [21] Kosaka, Michiko, Virginia Teller, and Ralph Grishman. 1988. A Sublanguage Approach to Japanese-English Machine Trans- lation. In Maxwell et ai. (eda), 109-122. [22] Lewis, David K. 1979. Scorekeeping in Language Game. In B~uerle, R., U. Egli and A. yon Stechow (eds) Semanticsyrom Different Points of View. Berlin: Springer Verlag. [23] Maxwell, Dan, Klaus Schubert, and Toon Witkam (eds). 1988. New Directions in Machine 7PanMation. Dordrecht, Holland: Foris. [24] Nagan, Makoto. 1987. The Role of Structural Transforma- tion in a Machine Translation System. In Nirenburg (ed.), 262-277. [25] Nirenburg, Sergei (ed.) 1987. Machine Translation. Cam- bridge: Cambridge University Press. [26] Reichenbach, Hans. 1947. Elements of Symbolic Logic. New York: Dover. [27] de Saussure, Ferdinand. 1959. Course in General Linguis- tics. Edited by Charles Belly and Albert Sechehaye in collab- oration with Albert Riedlinger. Translated by Wade Baskin. New York: McGraw-Hill. [28] Scha, Remko and Livia Polanyi. 1988. An Augmented Context- free Grammar of Discourse. In Proceedings of the l~th In- ternational Conference on Computational Linguistics, Bu- dapest, Hungary. [29] Tsujii, Junoichi. 1988. What is a Croas-linguisticaily Valid Interpretation of Discourse? In Maxwell et el. (eds), 157-166. [30] Zajac, Remi. 1989. A Transfer Model Using a Typed Fea- ture Structure Rewriting System with Inheritance. In Pro- ceedings of the ~Tth Annual Meeting of the Association for Computational Linguistics, Vancouver, Canada. 200
1991
25
AUTOMATIC NOUN CLASSIFICATION BY USING JAPANESE-ENGLISH WORD PAIRS* Naomi Inoue KDD R & D Laboratories 2-1-50hara, Kamifukuoka-shi Saitama 356, Japan [email protected] ABSTRACT This paper describes a method of classifying semantically similar nouns. The approach is based on the "distributional hypothesis". Our approach is characterized by distinguishing among senses of the same word in order to resolve the "polysemy" issue. The classification result demonstrates that our approach is successful. 1. INTRODUCTION Sets of semantically similar words are very useful in natural language processing. The general approach toward classifying words is to use semantic categories, for example the thesaurus. The "is-a" relation is connected between words and categories. However, it is not easy to acquire the "is-a" connection by hand, and it becomes expensive. Approaches toward automatically classifying words using existing dictionaries were therefore attempted[Chodorow] [Tsurumaru] [Nakamura]. These approaches are partially successful. However, there is a fatal problem in these approaches, namely, existing dictionaries, particularly Japanese dictionaries, are not assembled on the basis of semantic hierarchy. On the other hand, approaches toward automatically classifying words by using a large-scale corpus have also been attempted[Shirai][Hindle]. They seem to be based on the idea that semantically similar words appear in similar environments. This idea is derived from Harris's "distributional hypothesis"[Harris] in linguistics. Focusing on nouns, the idea claims that each noun is characterized by Verbs with which it occurs, and also that nouns are similar to the extent that they share verbs. These automatic classification approaches are also partially successful. However, Hindle says that there is a number of issues to be confronted. The most important issue is that of "polysemy". In Hindle's experiment, two senses of"table", that is to say "table under which one can hide" and "table which can be commuted or memorized", are conflated in the set of words similar to "table". His result shows that senses of the word must be distinguished before classification. (1)I sit on the table. (2)I sit on the chair. (3)I fill in the table. (4)I fill in the list. For example, the above sentences may appear in the corpus. In sentences (1) and (2), "table" and "chair" share the same verb "sit on". In sentences (3) and (4), "table" and "list" share the same verb "fill in". However, "table" is used in two different senses. Unless they are distinguished before classification, "table", "chair" and "list" may be put into the same category because "chair" and "list" share the same verbs which are associated with "table". It is thus necessary to distinguish the senses of "table" before automatic classification. Moreover, when the corpus is not sufficiently large, this must be performed for verbs as well as nouns. In the following Japanese sentences, the Japanese verb "r~ < "is used in different senses. One is * This study was done during the author's stay at ATR Interpreting Telephony Research Laboratories. 201 ' l-' '1 t" space at object El:Please ~ in the reply :form ahd su~rmiE the summary to you. A. . Figure 1 An example of deep semantic relations and the correspondence "to request information from someone". The other is "to give attention in hearing". Japanese words " ~ ~l-~ (name)" and " ~ "~ (music)" share the same verb" ~ < ". 
Using the small corpus, " ~ Hl~ (name)" and" ~ (music)" may be classified into the same category because they share the same verb, though not the same sense, on relatively frequent. (5):~ ~ t" M < (6):~ ~" ~J This paper describes an approach to automatically classify the Japanese nouns. Our approach is characterized by distinguishing among senses of the same word by using Japanese-English word pairs extracted from a bilingual database. We suppose here that some senses of Japanese words are distinguished when Japanese sentences are translated into another language. For example, The following Japanese sentences (7),(8) are translated into English sentences (9),(10), respectively. (7)~ ~J~ ~: ~ (8)~ ~ ~ ~ ~ ~- (9)He sends a letter. (t0)He publishes a book. The Japanese word " ~ T" has at least two senses. One is "to cause to go or be taken to a place" and the other is "to have printed and put on sale". In the above example, the Japanese word" ~ ~-" corresponds to "send" from sentences (7) and (9). The Japanese word " ~ -~" also corresponds to "publish" from sentences (8) and (10). That is to say, the Japanese word" ~ T" is translated into 202 different English words according to the sense. This example shows that it may be possible to distinguish among senses of the same word by using words from another language. We used Japanese-English word pairs, for example," ~ ~-send" and" ~ ~- publish", as senses of Japanese words. In this paper, these word pairs are acquired from ATR's large scale database. 2. CONTENT OF THE DATABASE ATR has constructed a large-scale database which is collected from simulated telephone and keyboard conversations [Ehara]. The sentences collected in Japanese are manually translated into English. We obtain a bilingual database. The database is called the ATR Dialogue Database(ADD). ATR aims to build ADD to one million words covering two tasks. One task is dialogues between secretaries and participants of international conferences. The other is dialogues between travel agents and customers. Collected Japanese and English sentences are morphologically analyzed. Japanese sentences are also dependency analyzed and given deep semantic relations. We use 63 deep semantic cases[Inoue]. Correspondences of Japanese and English are made by several linguistic units, for example words, sentences and so on. Figure 1 shows an example of deep semantic relations and correspondences of Japanese and English words. The sentence is already morphologically analyzed. The solid line shows deep semantic relations. The Japanese nouns" ') 7" ~ 4 7 ~r -- ~" and "~ ~'~" modify the Japanese verbs "~ v~" and "~ ", respectively. The semantic relations are "space at" and "object", which are almost equal to "locative" and "objective" of Fillmore's deep case[Fillmore]. The dotted line shows the word correspondence between Japanese and English. The Japanese words "V 7"~ 4 7 ~- --./~","~","~,~)"and"~ L" correspond to the English words "reply form", "fill out", "summary" and "submit", respectively. Here, " ~ v," and " ~i [~" are conjugations of" ~ < " and " ~ -¢", respectively. However, it is possible to extract semantic relations and word correspondence in dictionary form, because ADD includes the dictionary forms. 3. CLASSIFICATION OF NOUNS 3.1 Using Data We automatically extracted from ADD not only deep semantic relations between Japanese nouns and verbs but also the English word which corresponds to the Japanese word. 
We used telephone dialogues between secretaries and participants because the scale of analyzed words was largest. Table 1 shows the current number of analyzed words.

Table 1: Analyzed word counts of ADD

Media      Task        Words
Telephone  Conference  139,774
           Travel       11,709
Keyboard   Conference   64,059
           Travel            0

Figure 2 shows an example of the data extracted from ADD. Each field is delimited by the delimiter "|". The first field is the dialogue identification number in which the semantic relation appears. The second and the third fields are Japanese nouns and their corresponding English words. The next 2 fields are Japanese verbs and their corresponding English words. The last is the semantic relation between nouns and verbs. Moreover, we automatically acquired word pairs from the data shown in Figure 2. Different senses of nouns appear far less frequently than those of verbs because the database is restricted to a specific task. In this experiment, only word pairs of verbs are used. Figure 3 shows deep semantic relations between nouns and word pairs of verbs. The last field is the raw frequency of co-occurrence. We used the data shown in Figure 3 for noun classification.

[Figure 2: An example of data extracted from ADD -- records of the form <dialogue ID> | <Japanese noun> | <English noun> | <Japanese verb> | <English verb> | <semantic relation>, e.g. entries pairing "registration fee" with "pay" (object), "summary" with "send" (object), and "bus" with "take" (object); the Japanese text is not reproduced here.]

The experiment is done for a sample of 138 nouns which are included in the 500 most frequent words. The 500 most frequent words cover 90% of the words accumulated in the telephone dialogues. Those nouns appear more than 9 times in ADD.

[Figure 3: An example of semantic relations of nouns and word pairs -- records of the form <Japanese noun> | <Japanese verb>-<English verb> pair | <semantic relation> | <frequency>; the Japanese text is not reproduced here.]

3.2 Semantic Distance of Nouns

Our classification approach is based on the "distributional hypothesis". Based on this semantic theory, nouns are similar to the extent that they share verb senses. The aim of this paper is to show the efficiency of using the word pair as the word sense. We therefore used the following expression (1), which was already defined by Shirai [Shirai], as the distance between two words.

d(a,b) = 1 - ( Σ_{v∈V, r∈R} φ(M(a,v,r), M(b,v,r)) ) / ( Σ_{v∈V, r∈R} (M(a,v,r) + M(b,v,r)) )    (1)

Here,
a, b : nouns (a, b ∈ N)
r : semantic relation
v : verb sense
N : the set of nouns
V : the set of verb senses
R : the set of semantic relations
M(a,v,r) : the frequency of the semantic relation r between a and v
φ(x,y) = x + y  (x > 0, y > 0)
       = 0      (x = 0 or y = 0)

The second term of the expression can show the semantic similarity between two nouns, because it is the ratio of the verb senses with which both nouns (a and b) occur to all the verb senses with which each noun (a or b) occurs. The distance is normalized from 0.0 to 1.0. If one noun (a) shares all verb senses with the other noun (b) and the frequencies are also the same, the distance is 0.0. If one noun (a) shares no verb senses with the other noun (b), the distance is 1.0.

3.3 Classification Method

For the classification, we adopted cluster analysis, which is one of the approaches in multivariate analysis. Cluster analysis is generally used in various fields, for example biology, psychology, etc.
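Expression (1) can be computed directly from Figure 3-style records. The Python sketch below uses invented counts and romanized verb pairs, and, for brevity, replaces the centroid method used in the experiments reported below with a plain average-linkage merge; it is meant only to make the procedure concrete.

from itertools import combinations

# Figure-3 style records: (noun, (Japanese verb, English verb), relation, freq).
records = [
    ("summary",    ("okuru", "send"),   "object", 2),
    ("proceeding", ("okuru", "send"),   "object", 1),
    ("proceeding", ("dasu",  "issue"),  "object", 2),
    ("name",       ("kiku",  "ask"),    "object", 3),
    ("music",      ("kiku",  "listen"), "object", 3),
]

M = {}                                    # M[(noun, verb-sense pair, relation)] = frequency
for noun, pair, rel, freq in records:
    M[(noun, pair, rel)] = M.get((noun, pair, rel), 0) + freq

def d(a, b):
    """The distance of expression (1): 1 minus shared-sense mass over total mass."""
    keys = {(v, r) for (n, v, r) in M if n in (a, b)}
    m = lambda x, v, r: M.get((x, v, r), 0)
    num = sum(m(a, v, r) + m(b, v, r) for v, r in keys
              if m(a, v, r) > 0 and m(b, v, r) > 0)
    den = sum(m(a, v, r) + m(b, v, r) for v, r in keys)
    return 1.0 if den == 0 else 1.0 - num / den

# Bottom-up clustering; average linkage stands in for the centroid method here.
clusters = [[n] for n in sorted({n for (n, v, r) in M})]
while len(clusters) > 3:
    i, j = min(combinations(range(len(clusters)), 2),
               key=lambda ij: sum(d(a, b) for a in clusters[ij[0]]
                                  for b in clusters[ij[1]])
                              / (len(clusters[ij[0]]) * len(clusters[ij[1]])))
    clusters[i] += clusters.pop(j)

print(round(d("summary", "proceeding"), 2))  # 0.4: they share the send(object) sense
print(round(d("name", "music"), 2))          # 1.0: the kiku-ask / kiku-listen pairs differ
print(clusters)                              # [['music'], ['name'], ['proceeding', 'summary']]

Note how the word pairs keep "name" and "music" at distance 1.0 even though both co-occur with the same surface verb, which is exactly the effect of distinguishing verb senses by their English translations.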
Some hierarchical clustering methods, for example the nearest neighbor method, the centroid method, etc., have been studied. It has been proved that the centroid method can avoid the chain effect. The chain effect is an undesirable phenomenon in which the nearest unit is not always classified into a cluster and more distant units are chained into a cluster. The centroid method is a method in which the cluster is characterized by the centroid of categorized units. In the following section, the result obtained by the centroid method is shown. 4.EXPERIMENT 4.1 Clustering Result All 138 nouns are hierarchically classified. However, only some subsets of the whole hierarchy are shown, as space is limited. In Figure 4, we can see that semantically similar nouns, which may be defined as "things made from paper", are grouped together. The X-axis is the semantic distance defined before. Figure 5 shows another subset. All nouns in Figure 5, "~ ~_ (decision)", "~ ~(presentation)", ";~ ~" - ~" (speech)" and " ~(talk)", have an active concept like verbs. Subsets of nouns shown in Figures 4 and 5 are fairly coherent. However, all subsets of nouns are not coherent. In Figure 6, " ~ ~ 4 b ° (slide)", "~, ~ (draft)", " ~" ~ (conference site)", "8 E (8th)" and" ~R (station)" are grouped together. The semantic distances are 0.67, 0.6, 0.7 and 0.8. The distance is upset when "~ ~(conference site)" is attached to the cluster containing ":~ ~ 4 b'(slide)" and "~ ~(draft)". This is one characteristic of the centroid method. However, this seems to result in a semantically less similar cluster. The word pairs of verbs, the deep semantic relations and the frequency are shown in Table 2. After "~ ~ 4 b ~ (slide)" and "~ ~(draft)" are grouped into a cluster, the cluster and " ~ (conference site)" share two word pairs, " fE ") -use" and "~ ~ -be". "~ Yo -be" contributes more largely to attach " ~ ~(conference site)" to the cluster than "tE ~) -use" because the frequency of co-occurrence is greater. In this sample, " ~ ~-be" occurs with more nouns than "f~ ") -use". It shows that "~J~ Yo - be" is less important in characterizing nouns 204 though the raw frequency of co-occurrence is greater. It is therefore necessary to develop a means of not relying on the raw frequency of co-occurrence, in order to make the clustering result more accurate. This is left to further study. 4.2 Estimation of the Result All nouns are hierarchically classified, but some semantically separated clusters are acquired if the threshold is used. It is possible to compare clusters derived from this experiment with semantic categories which are used in our automatic interpreting telephony system. We used expression (2), which was defined by Goodman and Kruskal[Goodman], in order to objectively compare them. 0.0 I ~J :~ b (list) .... 
~(form) ~=~(material) ' ~T ~_-~ (hope) ~(document) 7" 7.~ b ~ ~ ~ (abstract) 7" ~ ~ ~ ~ (program) Figure 4 0.2 0.4 0.6 0.8 1.0 I I I I I t-- i An example of the classification of nouns 0.0 0.2 0.4 0.6 0.8 1.0 I I I I I I ii~(decision) ~ (presentation) ~" - ~- (speech) ~8(talk) Figure 5 0.0 ~ -1" b" (slide) ~, ~ (draft) ~ (conference site) 8 E (Sth) ~(station) Figure 6 Another example of the classification of nouns 0.2 0.4 0.6 I J J 0.8 J Another example of the classification of nouns 1.0 l 205 Table 2 noun ~ d" b" (slide) /~,~ (draft) ~J~ (conference site) 8 [3 (8th) ~(station) A subset of semantically similar nouns word pairs of verb deep case frequency T ~-make goal 1 {~ 7~ -make object 1 5 -use object 1 f~ & -make object 1 • -be object 1 o_look forward to object 1 ~J~ • -take condition 1 ~ ") -get space to 1 ") -use object 1 ~ 7o -can space at 1 "~ -say space at 1 /~ & -be object 2 ~. 7~ -end time 2 /~ 7o -be object 1 ~] < -guess content 1 ~ 7~ -take condition 1 1~ ~ ~ ~-there be space from 1 p -- Here, P1 "P2 (2) Pl P1 - 1- f-m p P2 = .ri.(1 - fi. Jfi.) ill f.= = max(f.1, f.2, "", f,~} farn a -- max{fal, fa2, "", faq} % = n /n f.j = nln A : a set of clusters which are automatically obtained. B : a set ofclusters which are used in our interpreting telephony system. p • the number of clusters of a set A q : the number of clusters of a set B nij : the number of nouns which are included in both the ith cluster of A and the jth cluster of B n.j : the number ofnouns which are included in the jth cluster of B n : all nouns which are included in A or B 206 They proposed that one set of clusters, called 'A', can be estimated to the extent that 'A' associates with the other set of clusters, called 'B'. In figure 7, two results are shown. One (solid line) is the result of using the word pair to distinguish among senses of the same verb. The other (dotted line} is the result of using the verb form itself. The X-axis is the number of classified nouns and the Y-axis is the value derived from the above expression.Figure 7 shows that it is better to use word pairs of verbs than not use them, when fewer than about 30 nouns are classified. However, both are almost the same, when more than about 30 nouns are classified. The result proves that the distinction of senses of verbs is successful when only a few nouns are classified. I Word Pain of Verbs B .......... Verb Form 0.3 0.2 | h" 0.1 ' = ' ; ~J 0.0 z ...:. L.'! / • ~~./, . ~ , in 50 100 Number of Nou,us Figure 7 Estimation result 5. CONCLUSION Using word pairs of Japanese and English to distinguish among senses of the same verb, we have shown that using word pairs to classify nouns is better than not using word pairs, when only a few nouns are classified. However, this experiment did not succeed for a sufficient number of nouns for two reasons. One is that the raw co-occurrent frequency is used to calculate the semantic distance. The other is that the sample size is too small. It is thus necessary to resolve the following issues to make the classification result more accurate. (1)to develop a means of using the frequency normalized by expected word pairs. (2)to estimate an adequate sample size. In this experiment, we acquired word pairs and semantic relations from our database. However, they are made by hand. It is also preferable to develop a method of automatically acquiring them from the bilingual text database. Moreover, we want to apply the hierarchically classified result to the translated word selection problem in Machine translation. 
ACKNOWLEDGEMENTS The author is deeply grateful to Dr. Akira Kurematsu, President of ATR Interpreting Telephony Research Laboratories, Dr. Toshiyuki Takezawa and other members of the Knowledge & Data Base Department for their encouragement, during the author's StaY at ATR Interpreting Telephony Research Laboratories. REFERENCES [Chodorow] Chodorow, M. S., et al. "Extracting Semantic Hierarchies from a Large On-line Dictionary.", Proceedings of the 23rd Annual Meeting of the ACL, 1985. [Ehara] Ehara, T., et al. "ATR Dialogue Database", Proceedings of ICSLP, 1990. [Fillmore] Fillmore, C. J. "The case for case", in E. Bach & Harms (Eds.) Universals in linguistic theory, 1968. [Goodman] Goodman, L. A., and Kruskal W.H. "Measures of Association for Cross Classifications", J. Amer. Statist. Assoc. 49, 1954. [Harris] Harris, Z. S. "Mathematical Structures of Language", a Wiley- Interscience Publication. 207 [Hindle] Hindle, D. "Noun Classification from Predicate-Argument Structures", Proceedings of 28th Annual Meeting of the ACL, 1990. [Inoue]. Inoue, N., et al. "Semantic Relations in ATR Linguistic Database" (in Japanese), ATR Technical Report TR-I-0029, 1988. [Nakamura] Nakamura, J., et al. "Automatic Analysis of Semantic Relation between English Nouns by an Ordinal English Dictionary" (in Japanese), the Institute of Electronics, Information and Communication Engineers, Technical Report, NLC-86, 1986. [Shirai] Shirai K., et al. "Database Formulation and Learning Procedure for Kakariuke Dependency Analysis" (in Japanese), Transactions of Information Processing Society of Japan, Vol.26, No.4, 1985. [Tsurumaru] Tsurumaru H., et al. "Automatic Extraction of Hierarchical Structure of Words from Definition Sentences" (in Japanese), the Information Processing Society of Japan, Sig. Notes, 87- NL-64, 1987. 208
1991
26
AUTOMATIC ACQUISITION OF SUBCATEGORIZATION FRAMES FROM UNTAGGED TEXT Michael R. Brent MIT AI Lab 545 Technology Square Cambridge, Massachusetts 02139 [email protected] ABSTRACT This paper describes an implemented program that takes a raw, untagged text corpus as its only input (no open-class dictionary) and gener- ates a partial list of verbs occurring in the text and the subcategorization frames (SFs) in which they occur. Verbs are detected by a novel tech- nique based on the Case Filter of Rouvret and Vergnaud (1980). The completeness of the output list increases monotonically with the total number of occurrences of each verb in the corpus. False positive rates are one to three percent of observa- tions. Five SFs are currently detected and more are planned. Ultimately, I expect to provide a large SF dictionary to the NLP community and to train dictionaries for specific corpora. 1 INTRODUCTION This paper describes an implemented program that takes an untagged text corpus and generates a partial list of verbs occurring in it and the sub- categorization frames (SFs) in which they occur. So far, it detects the five SFs shown in Table 1. SF Good Example Bad Example Description direct object direct object & clause direct object & infinitive clause infinitive greet them tell him he's a fool want him to attend know I'll attend hope to attend *arrive them *hope him he's a fool *hope him to attend *want I'll attend *greet to attend Table 1: The five subcategorization frames (SFs) detected so far The SF acquisition program has been tested on a corpus of 2.6 million words of the Wall Street Journal (kindly provided by the Penn Tree Bank project). On this corpus, it makes 5101 observa- tions about 2258 orthographically distinct verbs. False positive rates vary from one to three percent of observations, depending on the SF. 1.1 WHY IT MATTERS Accurate parsing requires knowing the sub- categorization frames of verbs, as shown by (1). (1) a. I expected [nv the man who smoked NP] to eat ice-cream h. I doubted [NP the man who liked to eat ice-cream NP] Current high-coverage parsers tend to use either custom, hand-generated lists of subcategorization frames (e.g., Hindle, 1983), or published, hand- generated lists like the Ozford Advanced Learner's Dictionary of Contemporary English, Hornby and Covey (1973) (e.g., DeMarcken, 1990). In either case, such lists are expensive to build and to main- tain in the face of evolving usage. In addition, they tend not to include rare usages or specialized vocabularies like financial or military jargon. Fur- ther, they are often incomplete in arbitrary ways. For example, Webster's Ninth New Collegiate Dic- tionary lists the sense of strike meaning 'go occur to", as in "it struck him that... ", but it does not list that same sense of hit. (My program discov- ered both.) 1.2 WHY IT'S HARD The initial priorities in this research were: . Generality (e.g., minimal assumptions about the text) . Accuracy in identifying SF occurrences • Simplicity of design and speed Efficient use of the available text was not a high priority, since it was felt that plenty of text was available even for an inefficient learner, assuming sufficient speed to make use of it. These priorities 209 had a substantial influence on the approach taken. They are evaluated in retrospect in Section 4. The first step in finding a subcategorization frame is finding a verb. 
Because of widespread and productive noun/verb ambiguity, dictionaries are not much use -- they do not reliably exclude the possibility oflexical ambiguity. Even if they did, a program that could only learn SFs for unambigu- ous verbs would be of limited value. Statistical disambiguators make dictionaries more useful, but they have a fairly high error rate, and degrade in the presence of many unfamiliar words. Further, it is often difficult to understand where the error is coming from or how to correct it. So finding verbs poses a serious challenge for the design of an accu- rate, general-purpose algorithm for detecting SFs. In fact, finding main verbs is more difficult than it might seem. One problem is distinguishing participles from adjectives and nouns, as shown below. (2) a. John has [~p rented furniture] (comp.: John has often rented apart- ments) b. John was smashed (drunk) last night (comp.: John was kissed last night) c. John's favorite activity is watching TV (comp.: John's favorite child is watching TV) In each case the main verb is have or be in a con- text where most parsers (and statistical disam- biguators) would mistake it for an auxiliary and mistake the following word for a participial main verb. A second challenge to accuracy is determin- ing which verb to associate a given complement with. Paradoxically, example (1) shows that in general it isn't possible to do this without already knowing the SF. One obvious strategy would be to wait for sentences where there is only one can- didate verb; unfortunately, it is very difficult to know for certain how many verbs occur in a sen- tence. Finding some of the verbs in a text reliably is hard enough; finding all of them reliably is well beyond the scope of this work. Finally, any system applied to real input, no matter how carefully designed, will occasionally make errors in finding the verb and determining its subcategorizatiou frame. The more times a given verb appears in the corpus, the more likely it is that one of those occurrences will cause an erroneous judgment. For that reason any learn- ing system that gets only positive examples and makes a permanent judgment on a single example will always degrade as the number of occurrences increases. In fact, making a judgment based on any fixed number of examples with any finite error rate will always lead to degradation with corpus- size. A better approach is to require a fixed per- centage of the total occurrences of any given verb to appear with a given SF before concluding that random error is not responsible for these observa- tions. Unfortunately, determining the cutoff per- centage requires human intervention and sampling error makes classification unstable for verbs with few occurrences in the input. The sampling er- ror can be dealt with (Brent, 1991) but predeter- mined cutoff percentages stir require eye-bailing the data. Thus robust, unsupervised judgments in the face of error pose the third challenge to de- veloping an accurate learning system. 1.3 HOW IT'S DONE The architecture of the system, and that of this pa- per, directly reflects the three challenges described above. The system consists of three modules: 1. Verb detection: Finds some occurrences of verbs using the Case Filter (Rouvret and Vergnaud, 1980), a proposed rule of gram- mar. 2. SF detection: Finds some occurrences of five subcategorization frames using a simple, finite-state grammar for a fragment of En- glish. 3. 
SF decision: Determines whether a verb is genuinely associated with a given SF, or whether instead its apparent occurrences in that SF are due to error. This is done using statistical models of the frequency distribu- tions. The following two sections describe and eval- uate the verb detection module and the SF de- tection module, respectively; the decision module, which is still being refined, will be described in a subsequent paper. The final two sections pro- vide a brief comparison to related work and draw conclusions. 2 VERB DETECTION The technique I developed for finding verbs is based on the Case Filter of Rouvret and Verguaud (1980). The Case Filter is a proposed rule of gram- mar which, as it applies to English, says that ev- ery noun-phrase must appear either immediately to the left of a tensed verb, immediately to the right of a preposition, or immediately to the right of a main verb. Adverbs and adverbial phrases (including days and dates) are ignored for the pur- poses of case adjacency. A noun-phrase that sat- isfies the Case Filter is said to "get case" or "have case", while one that violates it is said to "lack case". The program judges an open-class word to be a main verb if it is adjacent to a pronoun or proper name that would otherwise lack case. Such a pronoun or proper name is either the subject or 210 the direct object of the verb. Other noun phrases are not used because it is too difficult to determine their right boundaries accurately. The two criteria for evaluating the perfor- mance of the main-verb detection technique are efficiency and accuracy. Both were measured us- ing a 2.6 million word corpus for which the Penn Treebank project provides hand-verified tags. Efficiency of verb detection was assessed by running the SF detection module in the normal mode, where verbs were detected using the Case Filter technique, and then running it again with the Penn Tags substituted for the verb detection module. The results are shown in Table 2. Note SF direct object direct object &: clause direct object & infinitive clause infinitive Occurrences Found 3,591 94 310 739 367 Control 8,606 381 3,597 14,144 11,880 Efficiency 40% 25% 8% 5% 3% Table 2: Efficiency of verb detection for each of the five SFs, as tested on 2.6 million words of the Wall Street Journal and controlled by the Penn Treehank's hand-verified tagging the substantial variation among the SFs: for the SFs "direct object" and "direct object & clause" efficiency is roughly 40% and 25%, respectively; for "direct object & infinitive" it drops to about 8%; and for the intransitive SFs it is under 5%. The reason that the transitive SFs fare better is that the direct object gets case from the preced- ing verb and hence reveals its presence -- intran- sitive verbs are harder to find. Likewise, clauses fare better than infinitives because their subjects get case from the main verb and hence reveal it, whereas infinitives lack overt subjects. Another obvious factor is that, for every SF listed above except "direct object" two verbs need to be found -- the matrix verb and the complement verb -- if either one is not detected then no observation is recorded. Accuracy was measured by looking at the Penn tag for every word that the system judged to be a verb. Of approximately 5000 verb tokens found by the Case Filter technique, there were 28 disagreements with the hand-verified tags. My program was right in 8 of these cases and wrong in 20, for a 0.24% error-rate beyond the rate us- ing hand-verified tags. 
Typical disagreements in which my system was right involved verbs that are ambiguous with much more frequent nouns, like mold in "The Soviet Communist Party has the power to shape corporate development and mold it into a body dependent upon it ." There were several systematic constructions in which the Penn tags were right and my system was wrong, includ- ing constructions like "We consumers are..." and pseudo-clefts like '~vhat you then do is you make them think .... (These examples are actual text from the Penn corpus.) The extraordinary accuracy of verb detection -- within a tiny fraction of the rate achieved by trained human taggers -- and it's relatively low efficiency are consistent with the priorities laid out in Section 1.2. 2.1 SF DETECTION The obvious approach to finding SFs like "V NP to V" and "V to V" is to look for occurrences of just those patterns in the training corpus; but the obvious approach fails to address the attachment problem illustrated by example (1) above. The solution is based on the following insights: • Some examples are clear and unambiguous. • Observations made in clear cases generalize to all cases. • It is possible to distinguish the clear cases from the ambiguous ones with reasonable ac- curacy. • With enough examples, it pays to wait for the clear cases. Rather than take the obvious approach of looking for "V NP to V', my approach is to wait for clear cases like "V PRONOUN to V'. The advantages can be seen by contrasting (3) with (1). (3) a. OK I expected him to eat ice-cream b. * I doubted him to eat ice-cream More generally, the system recognizes linguistic structure using a small finite-state grammar that describes only that fragment of English that is most useful for recognizing SFs. The grammar relies exclusively on closed-class lexical items such as pronouns, prepositions, determiners, and aux- iliary verbs. The grammar for detecting SFs needs to distinguish three types of complements: direct objects, infinitives, and clauses. The gram- mars for each of these are presented in Fig- ure 1. Any open-class word judged to he a verb (see Section 2) and followed immediately by matches for <DO>, <clause>, <infinitives, <DO><clanse>, or <DO><inf> is assigned the corresponding SF. Any word ending in "ly" or 211 <clause> := that? (<subj-pron> I <subj-obj-pron> <tensed-verb> <subj-pron> := I J he [ she [ I [ they <subj-obj-pron> := you, it, yours, hers, ours, theirs <DO> := <obj-pron> <obj-pron> := me [ him [ us [ them <infinitive> := to <previously-noted-uninflected-verb> I his I <proper-name>) Figure 1: A non-recursive (finite-state) grammar for detecting certain verbal complements. "?" indicates an optional element. Any verb followed immediately expressions matching <DO>, <clause>, <infinitive>, <DO> <clause>, or <DO> <infinitive> is assigned the corresponding SF. belonging to a list of 25 irregular adverbs is ig- nored for purposes of adjacency. The notation "T' follows optional expressions. The category previously-noted-uninflected-verb is special in that it is not fixed in advance -- open-class non- adverbs are added to it when they occur following an unambiguous modal. I This is the only case in which the program makes use of earlier decisions -- literally bootstrapping. Note, however, that ambiguity is possible between mass nouns and un- inflected verbs, as in to fish. Like the verb detection algorithm, the SF de- tection algorithm is evaluated in terms of efficiency and accuracy. 
The most useful estimate of effi- ciency is simply the density of observations in the corpus, shown in the first column of Table 3. The SF direct object direct object & clause direct object & infinitive clause infinitive occurrences found 3,591 94 310 739 367 % error 1.5% 2.0% 1.5% 0.5% 3.0% Table 3: SF detector error rates as tested on 2.6 million words of the Wall Street Journal accuracy of SF detection is shown in the second 1If there were room to store an unlimited number of uninflected verbs for later reference then the gram- mar formalism would not be finite-state. In fact, a fixed amount of storage, sufficient to store all the verbs in the language, is allocated. This question is purely academic, however -- a hash-table gives constant-time average performance. column of Table 3. 2 The most common source of error was purpose adjuncts, as in "John quit to pursue a career in finance," which comes from omitting the in order from "John quit in order to pursue a career in finance." These purpose ad- juncts were mistaken for infinitival complements. The other errors were more sporadic in nature, many coming from unusual extrapositions or other relatively rare phenomena. Once again, the high accuracy and low ef- ficiency are consistent with the priorities of Sec- tion 1.2. The throughput rate is currently about ten-thousand words per second on a Sparcsta- tion 2, which is also consistent with the initial pri- orities. Furthermore, at ten-thousand words per second the current density of observations is not problematic. 3 RELATED WORK Interest in extracting lexical and especially collocational information from text has risen dra- matically in the last two years, as sufficiently large corpora and sufficiently cheap computation have become available. Three recent papers in this area are Church and Hanks (1990), Hindle (1990), and Smadja and McKeown (1990). The latter two are concerned exclusively with collocation relations between open-class words and not with grammat- ical properties. Church is also interested primar- ily in open-class collocations, but he does discuss verbs that tend to be followed by infinitives within his mutual information framework. Mutual information, as applied by Church, is a measure of the tendency of two items to ap- pear near one-another -- their observed frequency in nearby positions is divided by the expectation of that frequency if their positions were random and independent. To measure the tendency of a verb to be followed within a few words by an in- finitive, Church uses his statistical disambiguator 2Error rates computed by hand verification of 200 examples for each SF using the tagged mode. These are estimated independently of the error rates for verb detection. 212 (Church, 1988) to distinguish between to as an infinitive marker and to as a preposition. Then he measures the mutual information between oc- currences of the verb and occurrences of infinitives following within a certain number of words. Unlike our system, Church's approach does not aim to de- cide whether or not a verb occurs with an infiniti- val complement -- example (1) showed that being followed by an infinitive is not the same as taking an infinitival complement. It might be interesting to try building a verb categorization scheme based on Church's mutual information measure, but to the best of our knowledge no such work has been reported. 
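The association measure described above can be illustrated in a few lines. The sketch below is a toy rendering of the idea -- observed co-occurrence within a small window divided by the frequency expected under independence, on a log scale -- with an invented corpus and window size; it is not Church's implementation.

import math
from collections import Counter

corpus = ("i want to go . i want to eat . i see him . "
          "i try to go . i see it .").split()
WINDOW = 2                         # look this many tokens ahead of the verb

unigrams = Counter(corpus)
N = len(corpus)

def mutual_information(x, y):
    """log2( P(x followed by y within WINDOW) / (P(x) * P(y)) )."""
    pair = sum(1 for i, w in enumerate(corpus)
               if w == x and y in corpus[i + 1:i + 1 + WINDOW])
    if pair == 0:
        return float("-inf")
    p_xy = pair / N
    p_x, p_y = unigrams[x] / N, unigrams[y] / N
    return math.log2(p_xy / (p_x * p_y))

print(round(mutual_information("want", "to"), 2))   # strongly associated
print(round(mutual_information("see",  "to"), 2))   # not associated (-inf)

As the surrounding discussion notes, a high score on such a measure shows only that a verb tends to be followed by an infinitive, not that it subcategorizes for one.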
4 CONCLUSIONS The ultimate goal of this work is to provide the NLP community with a substantially com- plete, automatically updated dictionary of subcat- egorization frames. The methods described above solve several important problems that had stood in the way of that goal. Moreover, the results ob- tained with those methods are quite encouraging. Nonetheless, two obvious barriers still stand on the path to a fully automated SF dictionary: a deci- sion algorithm that can handle random error, and techniques for detecting many more types of SFs. Algorithms are currently being developed to resolve raw SF observations into genuine lexical properties and random error. The idea is to auto- matically generate statistical models of the sources of error. For example, purpose adjuncts like "John quit to pursue a career in finance" are quite rare, accounting for only two percent of the apparent infinitival complements. Furthermore, they are distributed across a much larger set of matrix verbs than the true infinitival complements, so any given verb should occur with a purpose adjunct extremely rarely. In a histogram sorting verbs by their apparent frequency of occurrence with in- finitival complements, those that in fact have ap- peared with purpose adjuncts and not true sub- categorized infinitives will be clustered at the low frequencies. The distributions of such clusters can be modeled automatically and the models used for identifying false positives. The second requirement for automatically generating a full-scale dictionary is the ability to detect many more types of SFs. SFs involving certain prepositional phrases are particularly chal: lenging. For example, while purpose adjuncts (mistaken for infinitival complements) are rela- tively rare, instrumental adjuncts as in "John hit the nail with a hammer" are more common. The problem, of course, is how to distinguish them from genuine, subcategorized PPs headed by with, as in "John sprayed the lawn with distilled wa- ter". The hope is that a frequency analysis like the one planned for purpose adjuncts will work here as well, but how successful it will be, and if successful how large a sample size it will require, remain to be seen. The question of sample size leads back to an evaluation of the initial priorities, which favored simplicity, speed, and accuracy, over efficient use of the corpus. There are various ways in which the high-priority criteria can be traded off against efficiency. For example, consider (2c): one might expect that the overwhelming majority of occur- rences of "is V-ing" are genuine progressives, while a tiny minority are cases copula. One might also expect that the occasional copula constructions are not concentrated around any one present par- ticiple but rather distributed randomly among a large population. If those expectations are true then a frequency-modeling mechanism like the one being developed for adjuncts ought to prevent the mistaken copula from doing any harm. In that case it might be worthwhile to admit "is V-ing', where V is known to be a (possibly ambiguous) verb root, as a verb, independent of the Case Fil- ter mechanism. ACKNOWLEDGMENTS Thanks to Don Hindle, Lila Gleitman, and Jane Grimshaw for useful and encouraging conversa- tions. Thanks also to Mark Liberman, Mitch Marcus and the Penn Treebank project at the University of Pennsylvania for supplying tagged text. 
This work was supported in part by National Science Foundation grant DCR-85552543 under a Presidential Young Investigator Award to Professor Robert C. Berwick.

References

[Brent, 1991] M. Brent. Semantic Classification of Verbs from their Syntactic Contexts: An Implemented Classifier for Stativity. In Proceedings of the 5th European ACL Conference. Association for Computational Linguistics, 1991.

[Church and Hanks, 1990] K. Church and P. Hanks. Word association norms, mutual information, and lexicography. Computational Linguistics, 16, 1990.

[Church, 1988] K. Church. A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text. In Proceedings of the 2nd ACL Conference on Applied NLP. ACL, 1988.

[DeMarcken, 1990] C. DeMarcken. Parsing the LOB Corpus. In Proceedings of the ACL. Association for Computational Linguistics, 1990.

[Gleitman, 1990] L. Gleitman. The structural sources of verb meanings. Language Acquisition, 1(1):3-56, 1990.

[Hindle, 1983] D. Hindle. User Manual for Fidditch, a Deterministic Parser. Technical Report 7590-142, Naval Research Laboratory, 1983.

[Hindle, 1990] D. Hindle. Noun classification from predicate argument structures. In Proceedings of the 28th Annual Meeting of the ACL, pages 268-275. ACL, 1990.

[Hornby and Cowie, 1973] A. Hornby and A. Cowie. Oxford Advanced Learner's Dictionary of Contemporary English. Oxford University Press, Oxford, 1973.

[Levin, 1989] B. Levin. English Verbal Diathesis. Lexicon Project Working Papers no. 32, MIT Center for Cognitive Science, MIT, Cambridge, MA, 1989.

[Pinker, 1989] S. Pinker. Learnability and Cognition: The Acquisition of Argument Structure. MIT Press, Cambridge, MA, 1989.

[Rouvret and Vergnaud, 1980] A. Rouvret and J.-R. Vergnaud. Specifying Reference to the Subject. Linguistic Inquiry, 11(1), 1980.

[Smadja and McKeown, 1990] F. Smadja and K. McKeown. Automatically extracting and representing collocations for language generation. In 28th Annual Meeting of the Association for Computational Linguistics, pages 252-259. ACL, 1990.

[Zwicky, 1970] A. Zwicky. In a Manner of Speaking. Linguistic Inquiry, 2:223-233, 1970.
Multiple Default Inheritance in a Unification-Based Graham Russell John Carroll* Susan Warwick-Armstrong ISSCO, 54 route des Acacias, 1227 Geneva, Switzerland [email protected] Lexicon Abstract A formalism is presented for lexical specification in unification-based grammars which exploits defeasi- ble multiple inheritance to express regularity, sub- regularity, and exceptions in classifying the prop- erties of words. Such systems are in the general case intractable; the present proposal represents an attempt to reduce complexity while retaining suf- ficient expressive power for the task at hand. Illus- trative examples are given of morphological analy- ses from English and German. 1 Introduction The primary task of a computational lexicon is to associate character strings representing word forms with information able to constrain the distribution of those word forms within a sentence. 1 The or- ganization of a lexicon requires the ability, on the one hand, to make general statements about classes of words, and, on the other, to express excep- tions to such statements affecting individual words and subclasses of words. These considerations have provoked interest in applying to the lexicon AI knowledge representation techniques involving the notions of inheritance and default. 2 The sys- *current address: Cambridge University Computer Lab- oratory, New Museums Site, Pembroke Street, Cambridge CB2 3QG, UK. OWe are indebted to Af-.al Ballim, Mark Johnson, and anonymous referees for valuable comments on this paper. tin the general case, the relation between forms and in- formation is many-to-many (rather than one-to-many as of- ten assumed) and this observation has influenced the choice of facilities incorporated within the system. See 3.2 below for an example of how distinct forms share identical mor- phosyntactic specifications. 2See e.g. Daelemaus and Gazdar eds. (1990), and the references in Gazdar (1990). The work of Hudson (1984) extends this general approach to sentence syntax. tem described here is part of the ELU s unification grammar development environment intended for research in machine translation, comprising parser, generator, transfer mechanism and lexical compo- nents. The user language resembles that of PATR- II (Shieber, 1986), but provides a larger range of data types and more powerful means of stating re- lations between them. Among the requirements imposed by the context within which this system is used are (i) the ability to both analyse and gen- erate complex word forms, (ii) full integration with existing parts of the ELU environment, and (iii) the ability to accommodate a relatively large number of words. 2 Classes and Inheritance An ELU lexicon consists of a number of 'classes', each of which is a structured collection of con- straint equations and macro calls encoding infor- mation common to a set of words, together with links to other more general 'superc]asses'. For ex- ample, if an 'intransitive' class is used to express the common syntactic properties shared by all in- transitive verbs, then particular instances of in- transitive verbs can be made to inherit this infor- mation by specifying the 'intransitive' class as one of their superclasses - it then becomes unneces- saw to specify the relevant properties individually for each such verb. 
The lexicon may be thought of as a tangled hierarchy of classes linked by in- heritance paths, with, at the most specific level, lexicai classes and, at the most general, classes for which no superclasses have been defined, and which therefore inherit no information from elsewhere. S "Environnement Linguistique d'Unlfication" - see Esti- val (1990), and, for a description of the earlier UD system on which E~u is based, Johnson and Rosner (1989). 215 Lexical entries are themselves classes, 4 and any in- formation they contain is standardly specific to an individual word; lexical and non-lexical classes dif- fer in that analysis and generation take only the former as entry points to the lexicon. Inheritance of a feature value from a superclass may be overridden by a conflicting value for that feature in a more specific class. This means, for ex- ample, that it is possible to place in the class which expresses general properties of verbs an equation such as '<* aux> = no' (i.e. "typical verbs are not auxiliaries"), while placing the contradictory spec- ification '<* aux> = yes' in s subclass from which only anTiliaries inherit. The ability to encode ex- ceptional properties of lexical items is extremely attractive from the linguistic point of view; the lower the position in the hierarchy at which the property appears, the more exceptional it may be considered. A class definition consists of the compiler direc- tive '#Class' (for a non-lexicai class) or '#Word' (for a lexical class), followed by the name of that class, a possibly empty list of direct superclasses, a possible empty 'main' or default equation set, and sere or more 'variant' equation sets. The su- perclass declaration states from which classes the current class inherits; if more than one such super- class is specified, their order is significant, more specific classes appearing to the left of more gen- eral ones. If the current class is one of the most general in the lexicon, it inherits no information, and its superclass list is empty. Following the superclass declaration are sere or more equations representing default information, which we refer to as the 'main' equation set. These may be overridden by eontlleting information in a more specific class. Each equation in a main set functions as an independent constraint, in a msnner which will be clarified below. Variant equation sets, loosely speaking, corre- spend to alternatives at the same conceptual level in the hiersrchy, and in msny cases reflect the tra- ditional ides of 'paradigm'. Equations within a variant set are absolute constraints, in contrast to those in the main set; if they conflict with informs- tion in a more specific class, failure of unification occurs in the normal way. Also, unlike the main set, each variant set functions as a single, possibly complex, constraint (see section 2.2). A feature 4Thus no distinction is made between classes and 'in- stances', as in e.g. KL-ONE (Schmolse and Lipkis, 1983) structure is created for each variant set that suc- cessfully unifies with the single structure arising from the main set. Each variant set is preceded by the vertical bar ' ['. The order of variant sets within a class is not significant, although, if a main set is employed, it must precede any variant sets. The following simplified example illustrates the form and interaction of class definitions. In equs. tions, unification variables have initial capitals, and negation of constants is indicated by ' '. 
'kk' is the string concatenation operator - an equation of the form X = Y kk Z unifies X nondeterministi- cally with the result of concatenating ¥ and Z. #Word walk (Intransitive Verb) <stem>= walk #Class Intransitive () <sub©at> = [SubJ] <$nbJ cat> =np #Class Verb () <aOX> m no <cat> m V I <tense> = past <~onO = <stem> kk ed I =presont <form>= <steuO kk s i <aSr> = "s83 <tense> - present <form> = <stem> The lexiesl class walk is declared as having two direct superclasses, Intransitive and Verb; its main set contains just one equation, which sets the value of the feature stem to be walk. Intransitive has no direct superclasses, and its main equation set assigns to the value of subcat a list with one element, a feature structure in which the value of cat is rip. Neither walk nor Intransitive has sny variant equation sets. Verb, by contrast, has three, in addition to two main set equations. The latter assign, by default, the values of cat and aux. The three variants ac- counted for by this example are the past tense verb, in which the value of form unifies with the result of concatenatin 8 the value of stem with the string 'ed', the third person singular form, in which the suffix string is 's', and the form representing other combinations of person and number in the present tense; in the last case, the form value is simply identical to the stem value. 5 5We ignore for the moment the question of mor- phogrsphemic effects in sufllxstion - see section 3.3 below. 216 2.1 Class Precedence In an ELU lexicon, a class may inherit directly from more than one superclass, thus permitting 'multi- ple inheritance' (Touretsky, 1986: 7ft.), in contrast to 'simple inheritance' in which direct inheritance is allowed from only one superclass at a time. The main advantage that multiple inheritance offers over simple inheritance is the ability to inherit sev- eral (orthogonal or complementary) sets of proper- ties from classes in more than one path through the hierarchy. In the lexical context, it has often been observed that morphological and syntactic proper- ties are essentially disjoint; the subeategorisation class of a verb is not predictable from its conjuga- tion class, and vice versa, for example. Multiple inheritance permits the two types of information to be separated by isolating them in distinct sub- hierarchies. The question to be resolved in systems em- ploying multiple inheritance is that of precedence: which of several superclasses with conflicting prop- erties is to be inherited from? ELU employs the class precedence algorithm of the Common Lisp Object System (CLOS) to compute a total order- ing on the superclasses of a lexicsl class, s The resulting 'class precedence list' (CPL) contains the class itself and all of its superclasses, from most specific to most general, and forms the basis for the defaulting behaviour of the lexicon. As an ex- ample, consider the following lexicon: #Word It (B D) #Class B (C) ZClass C (Y) #Class D (E) #Class E (P) #Class F () Here, the superclass declarations embody the or- derin 8 constraints A < B, A < D, B < D, B < C, C < F, D < E, and E < F; from these are derived a to- tal order assigning to the lexical class A the CPL (A,B,C,D,E,F). 2.2 Inheritance of Properties A lexical class such as walk in the example above corresponds to a family offeature structures. Here, as in most analyses, members of this family rep- resent morphosyntactically distinct realizations of a single basic lexeme. 
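Before turning to how these feature structures are built, the precedence computation of section 2.1 can be illustrated in code. The sketch below is not the CLOS algorithm itself, only a simplified linearization assumed for illustration: it enforces the two kinds of ordering constraint and breaks ties in favour of a direct superclass of the most recently placed class, which is enough to reproduce the CPL (A,B,C,D,E,F) for the small hierarchy given above. The dictionary encoding of the hierarchy is likewise an assumption of the sketch, not ELU notation.

    def class_precedence_list(cls, directs):
        # Simplified CPL construction (illustrative; not the full CLOS algorithm).
        # `directs` maps each class name to its ordered list of direct superclasses.
        classes, stack = [], [cls]
        while stack:                              # collect every reachable class
            c = stack.pop()
            if c not in classes:
                classes.append(c)
                stack.extend(directs.get(c, []))

        # Each class must precede its direct superclasses, and direct superclasses
        # keep their declared left-to-right order.
        before = {c: set() for c in classes}      # before[x]: classes that must precede x
        for c in classes:
            supers = directs.get(c, [])
            for s in supers:
                before[s].add(c)
            for left, right in zip(supers, supers[1:]):
                before[right].add(left)

        cpl = []
        while len(cpl) < len(classes):
            candidates = [c for c in classes
                          if c not in cpl and before[c] <= set(cpl)]
            # Tie-break: prefer a direct superclass of the most recently placed class.
            for placed in reversed(cpl):
                preferred = [c for c in candidates if c in directs.get(placed, [])]
                if preferred:
                    candidates = preferred
                    break
            # Fails (IndexError) if the ordering constraints cannot be satisfied,
            # the case in which the real system reports an error.
            cpl.append(candidates[0])
        return cpl

    # The hierarchy from the example above:
    hierarchy = {"A": ["B", "D"], "B": ["C"], "C": ["F"],
                 "D": ["E"], "E": ["F"], "F": []}
    # class_precedence_list("A", hierarchy) returns ["A", "B", "C", "D", "E", "F"]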
Consulting the lexicon in- volves determining membership of the set of fea- ture structures associated with a given lexical class; s See Steele (1990: 782ff.) for details of the aIgorithm, and Keene (1989:118ff.) for discussion. In circumstances where no such total ordering is possible, the system reports an error. the precedence relation encoded in the CPL con- trols the order in which defeasible information is considered, each class in the CPL adding first de- fault and then non-default information to each FS produced by the previous class. More formally, we define default eztension, su- perclass eztension, and global ez~e~sion as follows: 7 (1) The default eztension of a FS ~ with respect to a set of FSs • is if U ({~b} U ~) :f: _1_, and .1_ otherwise. (2) The superclass ez~ension of a FS ~b with re- spect to a class c having a main equation set M and variant sets Vl,...v, is the set I ~be J.}, where M s is the smallest set of FSs such that each m E M describes some m ~ E M s, ¢~s is the default extension of~b with respect to M e, and v~ is the feature structure described by vl. We refer to this set as E(~b, c). (3) The global eztensio~, of a lexlcvd class having the CPL (cl,...c,) is F~, where Fo = {T}, and r,>0= U{~ IVY, ~ r,_l, • = E(~, c,)}. With regard to (I), each of the FSs in W that can unify with ~b does so - those that cannot, because they conflict with information already present, are ignored. The condition requiring ~ to be unifiable with the result of unifying the elements of • takes account of the potential order-sensitivity of the de- faulting operation - only those main sets having this property can be applied without regard to or- def. If this condition is met then the application of defaults always succeeds, producing a feature structure which, if no member of the default set is applicable, is identical to ~b. This interpretation of default unification is essentially that of Bouma (1990). The superclass extension E(~, c) is formed by applying to ~ any default equations in the main set of c, and then applying to the result each variant set in c; for variant sets Vl,... v,,, the result of this 7'A U B' here denotes the unification of A and B, 'T' denotes the most general, 'empty' FS, which unifies with all others, and '_L' denotes the inconsistent FS, equated with failure of unification. 217 second stage is the set of FSs {@1,...@~}, where each ~ is the result of successfully unifying ~b with some different vj. To speak in procedural terms, the global exten- sion of a lexicai class L with the CPL C is com- puted as follows: T is the empty FS which is input to C; each c~ in C yields as its superelass extension a set of FSs, each member of which is input to the remainder of C, (c~+l,...c,). The global exten- sion of L is then the yield of the most general class in its CPL - expressed in a slightly different way, the global extension of L is the result of applying to T the CPL of L. It is possible to exert quite fine control over in- heritance; one property may override another when assigned in a main equation set, but cause failure when assigned in a variant set. Normally, variant sets are defined so as to be mutually exclusive; a FS that unifies with more than one of the variant sets is in effect multiplied, s The inheritance systems of Calder (1989) and Flickinger (1987) make use of lexical rules - the ELU lexicon does not provide such devices, although some of their functionality may be reproduced by the variant set mechanism. 
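Definitions (1)-(3) above also have a direct procedural reading, spelled out in the sketch below. It is only an illustration under strong simplifying assumptions: feature structures are flattened to Python dictionaries, unification to consistent merging, and a class body to a "main" list plus a "variants" list of such dictionaries, so re-entrancy, negation, disjunction and string concatenation are all ignored.

    BOTTOM = None                                 # stands in for the inconsistent FS

    def unify(fs1, fs2):
        # Toy unification over flat dictionaries: merge, failing on conflicting values.
        if fs1 is BOTTOM or fs2 is BOTTOM:
            return BOTTOM
        merged = dict(fs1)
        for key, value in fs2.items():
            if key in merged and merged[key] != value:
                return BOTTOM
            merged[key] = value
        return merged

    def default_extend(fs, defaults):
        # Definition (1): fold in every default that is compatible; ignore the rest.
        # (The text restricts main sets so that the order of application is immaterial.)
        for default in defaults:
            candidate = unify(fs, default)
            if candidate is not BOTTOM:
                fs = candidate
        return fs

    def superclass_extend(fs, cls):
        # Definition (2): apply the main (default) set, then treat each variant set
        # as an absolute constraint, keeping one output FS per variant that unifies.
        fs = default_extend(fs, cls.get("main", []))
        variants = cls.get("variants", [{}])      # a class with no variants
        results = [unify(fs, variant) for variant in variants]
        return [result for result in results if result is not BOTTOM]

    def global_extend(cpl, classes):
        # Definition (3): thread every FS produced so far through each class of the
        # CPL, from most specific to most general, starting from the empty FS.
        fss = [{}]
        for name in cpl:
            fss = [out for fs in fss for out in superclass_extend(fs, classes[name])]
        return fss

With class bodies written this way, global_extend returns one feature structure for every combination of compatible variant sets along the CPL, mirroring the family of structures that a lexical class is said to describe.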
The approach described here differs from some previous proposals for default inheritance in unification-based lexicons in that the process of building FSs is monotonic - classes may add infor- mation to a FS, but are unable to remove or alter it. Thus, given a CPL (ci,...c.), any FS F admit- ted by a class c~ subsumes every FS that can be cre- ated by applying to F the classes (c~ + I,... c,~), m n. Karttunen (1986) and Shieber (1986) describe systems in which FSs may be modified by default statements in such a way that this property does not automatically hold. These schemes permit default statements to override the effect of ear- lier statements, whereas default information in the ELU lexicon may itself be overridden. We now turn to some examples illustrating the r61e of defeasible inheritance in the lexicon. 3 Example Analyses 3.1 German Separable Verbs Two large classes of German verbs are the sep- arable and inseparable prefixed compound verbs. The former are of interest syntactically because, as their name suggests, the prefix is a bound SSee 3.2 below for a case where such multiple matches are desirable. morpheme only in certain syntactic environments, namely when the verb is untensed or head of a verb-final clause. 9 Members of both classes share morphological, but not necessarily syntactic, prop- erties of the verb which corresponds in form to their stem. The separable-prefix verb weglau/en ('run away') and inseparable verlau/en ('elapse') are two such verbs, which the lexicon should be able to relate to their apparent stem lau/en ('run'). Since word definitions are classes, they can be inherited from like any non-lexical class. Thus the lexical classes verlaufen and weglaufen may in- herit from lanfen, itself a lexical class: x° # Word woglau~on (we s lau~on) <s~ = weglaufen # Word vorlaufsn (vet laufsn) <S~ i vorla~en # Class we s (separable) <morph prolix> = wog # Class vet (non_sopLTabls) <morph prefix> = vet # Word lau~en (verb) Base_stun= lauf <smu> = laufon # Class non_separable () Proflx = <morphprefix> # Class sspazablo O l Prefix = <morphprsfix> <lyn 4~v> = no <sya in, l> = "tn,f I Proflx = '' <syn Inv> =yos .<synia~l> = "la.f I # Class Prefix = <moxphprofix> <synin~l> =~ verb O <cat> m v Prefix = '' <morph pref~x> = Prefix && <syn 4.e1> = inf <form> = P_be && on I <form> = P_bs k& • <syn infl> = prss_Indic_s8_l 9Within the syntactic analysis assumed here, the distri- bution of verbs is controlled by a binary feature inv, whose value in these contexts is no. lea number of simplifications have been made here; ]aufen is in reality a member of a subclass of the strong verbs, and the verb class itself has been truncated, so that it accounts for only bare infinitive and first person singu- lar present tense indicative forms. Past participle formation also interacts with the presence of separable and inseparable prefixes. 218 The lexical classes weglaufen and verlaufen each have two immediate superclasses, containing in- formation connected with the prefix and stem. The classes weg and vet set the value of the morph:prefix path of the verb (overriding the value given in the main set of verb), and specify in- heritance from the separable and non.separable classes respectively. The former of these unifies the variable Prefix with either the empty string (in the case of tensed 'inverted' verbs) or the value of morph : prefix (for other variants), while the lat- ter sets the value uniquely for all forms of the verb in question. 
As the value of sere is fixed in the main equation set ofweglaufen and verlaufen, the cor- responding equation in laufen is overridden, but Base.stem unifies with lauf. Finally, in verb, the main set supplies default values for Prefix and morph : prefix (which in the cases under consid- eration will not be applicable), unifies P_bs with the result of concatenating the strings Prefix and Base_stem, and for each value of syn infl assigns to form the concatenation of P_bs with the appro- priate sufftx string. Values for sere (antics) are provided in main set equations; those in weglaufen and verlaufen are thus correctly able to override that in laufen. 3.2 English Irregular Verbs In most cases, lexical items that realize certain morphosyntactic properties in irregular forms do not also have regular realizations of those proper- ties; thus *sinked is not a well-formed alternative to sank or sunk, on the analogy of e.g. walked. This phenomenon has frequently been discussed in both theoretical and computational morphol- ogy, under the title of 'blocking', and it appears to provide clear motivation for a default-based hier- archical approach to lexical organization. 11 There are exceptions to this general rule, however, and inheritance mechanisms must be sufficiently flexi- ble to permit deviation from the strict behaviour illustrated above. Consider the small class of English verbs includ- ing dream, lean, learn and burn; these have, for many speakers, alternate past finite and past par- ticiple forms: e.g. dreamed and dreamt. The fol- lowing fragment produces the correct pairings of strings and feature structures, the written form of the word being encoded as the value of the form llSee e.g. Calder (1989). feature: 12 #Word walk (verb) <bass> = walk #Word sink (verb) <bass> = sink P_Fin_Form = silk PSP_Form = sunk #Word dream (dual-past verb) <base> = dream #Class dual-past 0 I PSP_Form = <base> k& t P_Fin_Form = <bass> &k t ~morph> = pasttinlts/pastnon~inits I #Class verb () <oat> = v PSP_Porm = <bass> It& sd P_Fin_Form = <bass> &k od J <morplO = present_nones3 <~orm~ = <bass> <morph> = prsssnt_ss3 <~orm> = <bass> &k s ~rph~ - ptstnon:einito <form> = PSP_Fozm <nOXl~lO . ptstflnlts <fo~O = p_F4e_Fo~n The main set equations in s/nk override those in its superclass verb, so that the variants in the latter class which give rise to past participle and past tensed forms associate the appropriate information with the strings sunk and sank, respectively. The class walk, on the other hand, contains nothing to pre-empt the equations in verb, and so its past forms are constructed from its value for base and the suffix string ed. The lex/cai class dream differs from these in hay- ing as one of its direct superclasses dual-past, which contains two variant sets, the second of which is empty (recall that variant sets are pre- ceded by the vertical bar 'I'). Moreover, this class is more specific than the other superclass verb, and so its equations assigning to PaP_Form and P_Fin_Form the string formed by concatenating the value of base and t have precedence over the contradictory statements in the main set of verb. Note that this set also includes a disjunctive con- straint to the effect that the value of morph in this FS must be either pastfinite or pastnonfinite. The dual_past class thus describes two feature IZAgain, the analysis sketched here is simplified; several variants within the verb class have been omitted, and all in- fleetional information is embodied as the value of the single feature morph. 
219 structures, but adds no information to the sec- ond. The absence of contradictory specifications permits the equations in the main set of verb to apply, in addition to those in the first variant set of dual-past. The second, empty, variant set in dual-past permits this class also to inherit all the properties of its superclass, i.e. those of regular verbs like walk; among these is that of forming the two past forms by suffixing ed to the stem, which produces the regular dreamed past forms. 3.3 Word-Form Manipulation The string concatenation operator '&&' allows the lexicon writer to manipulate word forms with ELU equations and macros. In particular, &t can be used to add or remove prefixes and suE3xes, and also to effect internal modifications, such as Ger- man Umlaut, by removing a string of characters from one end, changing the remainder, and then replacing the first string. In this section we show briefly how unification, string concatenation, and defensible inheritance combine to permit the anal- ysis of some of the numerous orthographic changes that accompany English inflectional sufftxation. The inflectional paradigms of English nouns, verbs, and adjectives are complicated by a num- ber of orthographic effects; big, hop, etc. undergo a doubling of the final stem character in e.g. big- ger, hopped, stems such as/oz, bush, and arch take an epenthetic • before the plural or third singu- lar present suiflx s, stem-final ie becomes y before the present participle suifL~ ing, and so on. Pe- ripheral alternations of this kind are accomplished quite straightforwardly by macros like those in the following lexicon fragment (in which invocations of user-defined macros are introduced by ': ,):is Final_Sibilant(Strin s) $trin$= _ I~eh/c~/e/x/s Ftnal_Y(Striag,Prefiz) String = ~reftx I~ y Prefix= &k b/c/4/~/g/h/j/k/i/m/n/p/r/s/t/v/w/x/z # Word try (verb_spe11~) <base> = try # Word watch (verb_spe].I/a 8) <base> = watch 13As before, this is s somewhat sbbre~sted version of s full descrip~on; the verb and vo~bJpolliag classes require additional variant sets to account for other morphosyntsc~c prope~|es. Other st~ng-predicste macros, in particular OK, must be defined in order to ester for the ~ range of spelling changes observed in verbal inflee~on. # Class verb_spelling (verb) I !Final_T(<base>,P) Base_P_PSP = P && i Base_3SG = P &k ie J !F~al_Sibilant(<baee>) Base_3SG = <base> k& • I !OK(<base>) #Class verb () <cat> = v Base_3SG = <base> Baso_P_PSP = <bass> PSP_Form- Baso_P_PSP k& od SG3_Fozmffi Base_3SG k& s J ! Sing3 <form> = SG3_Form I ; PastNonFin <form> = PSP_Form Two macros definitions are shown here; Final_¥ is true of a pair of strings String and Prefix iff String consists of Prefix followed by y and the final character of Prefix is one of the set denoted by the disjunction b/c.., z, while Final_Sibilant is true of a given string iff that string terminates in sh, ch, s, z, or z. OK is a macro which is true of only those strings to which neither Final.Sibilant nor Final_Y apply. The class verbJpellJ.ng contains three variant equation sets, the first two of which assign values to variables according to the form of the string which is the value of the base feature. If Final_¥ is appli- cable, Base.P-PSP is unified with the concatenation of the second argument to the macro (e.g. tr) and is, while Base_3SG is unified with e.g. tr and i. If FJ.na1.Slbilant is applicable, then Base.3SG is unified with the concatenation of the value of base (e.g. watch) and e. 
If neither of these is applica- ble (because the base string does not match the patterns in the macro definitions), the variables are bound not within this class, but in the main equation set of its superc]ass verb. Here, their val- ues are unified directly with that of base, and the eventual values of the form feature result from con- catenation of the appropriate suiflx strings, giving values of watched, watches, tried, and tries. 4 Summary The lexicon system presented above is fully inte- grated into the ELU environment; in particular, the result of analysis and the starting point for generation is the same type of feature structure as that produced by ELU grammars, and the equa- 220 tions within classes are of the same sort as those used elsewhere in a linguistic description, being able to exploit re-entrancy, negation, disjunction, direct manipulation of lists, etc. For the purpose of experimenting with the struc- ture of the class hierarchy and the distribution of information within individual classes, the lexicon is held in memory, and is accessed directly by means of an exhaustive search. Once a stable descrip- tion is achieved, and the coverage of the lexicon in- creases, a more efficient access mechanism exists, in which possible word-forms are pre-computed, and used to index into a disk file of class definitions. We have presented an implemented treatment of a framework for lexical description which is both practical from the perspective of efficiency and at- tractive in its reflection of the natural organiza- tion of a lexicon into nested and intersecting gen- eralizations and exceptions. The system extends traditional unification with a multiple default in- heritance mechanism, for which a declarative se- mantics is provided. References Boums, G. (1990) "Defaults in Unification Gram- mar," Proceedings of the ~Sth Annual Meeting of the Association for Computational Linguis- tics, Pittsburgh, June 6th-9th. 165-172. Calder, J. (1989) "Paradigmatic Morphology," Proceedings of the Fourth Conference of the European Chapter of the Association for Computational Linguistics, Manchester, April 10th-12th. $8-65. Daelemans, W. and G. Gazdar, eds. (1990) Inher- itance in Natural Language Processing: Work- shop Proceedings. ITK, Tilbut8 University. Estival, D. (1990) "ELU User Manual". Technical Report 1, ISSCO, Geneva. Flickinger, D. P. (1987) "Lexical Rules in the Hier- archical Lexicon," PhD Thesis, Stanford Uni- versity. Gasdar, G. (1990) "An Introduction to DATR," in R. Evans and G. Gasdar (eds.) The DATR Papers: February I990. Cognitive Science Re- search Paper CSRP 139, School of Cognitive and Computing Sciences, University of Sussex. 1-14. Hudson, R. A. (1984) Word Grammar. Oxford: Blackwell. Johnson, R. and M. Rosner (1989) "A Rich Envi- ronment for Experimentation with Unification Grammars," Proceedings of the Fourth Confer- ence of the European Chapter of the Associ- ation .for Computational Linguistics, Manch- ester, April 10th-12th. 182-189. Karttunen, L. (1986) "D-PATR: A Development Environment for Unification-Based Gram- mars," Proceedings of the llth lnterna. tional Conference on Computational Linguis. tics, Bonn, August 25th-29th. 74-80. Keene, S. (1989) Object-Oriented Programming in Common Lisp. Reading, Massachussetts: Addison-Wesley. Schmolse, J. G. and T. A. Lipkis (1983) "Classifi- cation in the KL-ONE Knowledge Representa- tion System," Proceedings of the Eighth Inter- national Joint Conference on Artificial Intelli- gence, Karlsruhe, West Germany. 330-332. 
Shieber, S. M. (1986) An Introduction to Unification-Based Approaches to Grammar. CSLI Lecture Notes no. 4, Stanford University.

Steele, G. L. (1990) Common Lisp: The Language (second edition). Bedford, Massachusetts: Digital Press.

Touretsky, D. S. (1986) The Mathematics of Inheritance Systems. London: Pitman Publishing.
METAPHORIC GENERALIZATION THROUGH SORT COERCION Ellen Hays 10 Pine Avenue Arlington, MA 02174 [email protected] Samuel Bayer The MITRE Corporation, A040 Burlington Rd. Bedford, MA 01730 [email protected] Abstract This paper presents a method for interpret- ing metaphoric language in the context of a portable natural language interface. The method licenses metaphoric uses via coercions between incompatible ontological sorts. The machinery allows both previously-known and unexpected metaphoric uses to be correctly interpreted and evaluated with respect to the backend expert sys- tem. 1 Introduction One of the central issues in AI systems has been how to model the domain: what are the primitives of the ontological language, how are the ontolog- ical sorts organized, and so on. AI researchers have explored a wide range of object-centered and relation-centered representations (for exam- ple, Brachman and Schmolze (1985) and Minsky (1975)). When setting up the domain model for a natural language interface, though, one must also keep the lexicon in mind, so that words can be defined and processed efficiently; if possible, the hierarchical organization of the domain model should minimize sense ambiguity, by allowing lex- ical items to point to classes that dominate the objects that reflect each item's range of meanings. However, a growing body of literature argues that the generalizations about the world im- plied by the lexicon do not correspond exactly to standard computational notions of fine-grained ontological structure. Rather, the mapping is mediated by pervasive low-level metaphoric and metonymic processes (as pointed out by Lakoff (1987) and others) that make for a mismatch be- tween the desired world model and the lexicon. At the MITRE Corporation, we are developing an interface architecture to support King Kong, our portable natural language interface for ex- pert systems, and AIMI, our multimedia interface for the same class of systems) Portable inter- faces provide an additional set of problems be- yond simple domain modeling. In particular, in our case, the structure the knowledge represen- tation imposes on the backend domain model is hierarchical and relation-based, and its form must be consistent across system ports; thus the knowl- edge representation may structure domain-specific information in a way that is fundamentally differ- ent from the way it is organized in the backend. In this context, one needs to develop a computational account of the low-level metaphor that creates the mismatch between the domain model and the lex- icon. In this paper, we will discuss a mechanism implemented in King Kong that we call "sort co- ercion" that is intended to address that mismatch. 2 Refinement in the King Kong domain model In the King Kong knowledge representation, both concepts and relations are organized hierarchi- cally. King Kong exploits this hierarchy in a num- ber of ways, of which the most relevant to this discussion occurs in the process of refinement. When King Kong interprets a sentence, it builds an interpretation corresponding to the input. In- terpretations represent a point in the semantic 1 The AIMI system is, in fact, one of the domains to which King Kong has been ported. The current implementation of King Kong has also been ported to two mission planning systems and one transportation planning system. The co- ercion mechanism described here currently supports exam- ples in the mission planning and interface domains. 
222 analysis that is subsequent to some lexical disam- biguation but prior to the determination of scope relationships and reference resolution. They are built in large part out of knowledge representa- tion objects. They have heads, for instance, which are typically filled by relations from the domain model, and argument lists, which are usually map- pings from the arguments of the relation in the head to other interpretations. The heads of these interpretations can be very general relations, and King Kong uses refinement to find relations in the hierarchy that are dom- inated by the head indicated by the input and that are specific enough to be evaluated. Once referents have been resolved, refinement chooses appropriate leaf relations by recursively checking the children of each relation in the subgraph acces- sible from the input relation and eliminating any children whose argument restrictions are disjoint from the sorts of the arguments. Each leaf relation has backend access code stored on it that allows King Kong to communicate with the backend ex- pert system. The code stored on the leaf relations found by this procedure supports the evaluation of the logical expressions generated from the in- put interpretations. 3 Motivations for sort coer- cion The obvious problem for a system using a hier- archy of the kind just described is that in most cases there is no direct, one-to-one mapping be- tween words and concepts. Most lexical items have a number of different meanings, and within those meanings there are often different senses, as well as various selectional restrictions and preferences, whether rigidly defined or merely stylistic. One case in point is the locative prepositions, which have been studied in great detail by a number of linguists, including Herskovits (1986), whose analysis of static locative prepositions such as in, on, and at defines a program of sorts for in- terpreting each, in the presence of particular argu- ments. The scheme consists of an ideal meaning (a very abstract definition) and a number of use types (more concrete senses). The relations so defined, however, require that the system have recourse to a number of "functions" that, in some sense, "co- erce" the objects arguments to the relations from one ontological sort to another. Herskovits calls these geometric description functions; they capture a number of different kinds of conceptualization (or recasting) of objects. For example, for the purposes of the abstract rela- tion at(x,y) ("X [is] at y,,),2 both x and y are taken to be points. 3 Then in the actual instance of the relation at (j olin, airport), according to this model, we have conceptualized both of the (three- dimensional) objects in the relation as points in or- der to express that particular locative relation be- tween them. In the same way, when we use at with a temporal argument ("a meeting at 5 o'clock"), we are in some sense "viewing" a time point as a spatial object, namely a geometric point. 4 Since a geometric description function can ap- ply to any argument of the appropriate ontolog- ical sort (i.e., within the range of the function), regardless of the relation it figures in, what this scheme captures is a generalization about concep- tual "transfer of reference', as Herskovits has more recently called it (Herskovits, 1989). The coercion mechanism described in this pa- per was inspired partly by Herskovits' work and partly by the system's existing domain model. 
It is a response to the need for a one-to-many map- ping from lexical items to ontological items (in this case locative and event relations), and is an at- tempt to capture explicitly some of the ways in which changing the way an object is viewed allows certain metaphoric and metonymic uses. 4 The coercion mechanism The central information source in our account of metaphor and metonymy is a set of coercion rules. Coercion rules declare different ways of viewing particular classes of objects. So if we wish to view temporal intervals as one-dimensional spatial ob- jects (lines), we would declare: (I) (defCoerce temporal-interval line) These coercion rules can be chained; if we wish to view events as temporal intervals (that is, the intervals over which they occur), we could ulti- mately view them as lines as well simply by adding another declaration: 2Herskovlts follows Talmy (1983) and others in seeing locative prepositions as defining a figure/ground relation- ship between a located object and a reference object. 3The ideal meaning of at is for two points to coincide (1986, p.128). 4 Jackendoffproposes a similar response to the problem, with respect to temporal use of spatial expressions. See (Jackendoff, 1983, ch.10). 223 (2) (defCoerce durative- event t emporal-int erval) King Kong uses these coercion rules in two re- lated ways. The first is to license what we call shadow relations. These are relations that have no parent but are connected to the domain model by means of a shadow link. This link requires that the value restrictions on the arguments of the shadowing relation be connected to the value re- strictions on the shadowed relation by a chain of coercion rules. These shadow links are required because the normal subsumption relationship does not permit the shadowed relations to be connected to their shadows; the endpoints of coercion links will typically be disjoint. Intuitively, these shadow relations represent the metaphoric uses that Lakoff called attention to. When King Kong encounters a relation pointed to by the input that has shad- ows associated with it, it exploits an expanded version of the refinement mechanism described in Section 2 to search through not only children but also shadows for acceptable leaf relations. Let us take a brief example. Imagine that we wish to capture the low-level metaphor in a sen- tence like "The length of the meeting is 5 hours." The ideal meaning of the length-of relation in- volves a line and a one-dimensional (spatial) mea- sure, which are the value restrictions on the two arguments (indicated here as vr): (3) (defRelation length-of (arg object (vr line)) (arg measure (vr ld-measure)) (super measure-of) ) The coercions described in (1) and (2), together with a view of quantities of time as spatial mea- sures (shown in (4)), suffice to license the shadow embodying the temporal metaphoric use of the length-of relation in (3): (4) (defCoerce quant ity-of-tiJne ld-measure) (5) (defRelation length-of-event (axg event (vr durative-event) ) (arg measure (vr quantity-of-time) ) (shadows length-of)) But the mechanisms introduced so far do not address a particular requirement of the King Kong metaphor mechanism that might not be imposed on other such mechanisms: the resulting logical expressions must be evaluable. Since King Kong is an interface, its domain model captures the shape of the data, but it does not itself store any facts; it must consult an external (i.e., the backend sys- tem's) database to reply to any queries. 
So when it recognizes a metaphoric use, it must provide the proper backend argument fillers to the back- end database in order to evaluate the query. But if the metaphoric use of the relation correspond- ing to the input has an argument corresponding to event and the ideal meaning requires an argu- ment corresponding to line, as in the length-of relation given above, how can King Kong provide the proper backend individuals? The answer lies in the way coercion rules inter- act with the domain model. When they license a shadow relation, they instantiate a point in the space of possible coercions, and to this shadow re- lation we can attach backend access code that ex- pects objects corresponding to the classes in the value restrictions of the current (shadowing) rela- tions. In other words, in the example given above, although conceptually we are viewing an instance of event as an instance of line, we need not refer to the ideal class at all in processing; the shadow relation permits us to treat these instances as or- dinary members of the event class. The existence of this shadow implies that there is a conceptual mismatch between the way the backend system records this information and the way language ex- presses it; the backend system considers the in- put classes directly, while the ontology and lexicon view these classes as coercions from other classes. 5 But what if the backend system requires that the input classes be coerced, just as the domain model and lexicon do? This is the second way in which the coercion rules can support metaphoric language. Coercion rules can have fragments of logical expressions attached to them that describe how to convert items of one class to items of an- other. We can use these augmented coercion rules to process novel uses of relations. If a path of co- ercions can be followed dynamically (rather than built at load time, as when shadows are licensed), the novel use can be evaluated, as long as the log- 5This shadow, along with many others, could be auto- matically generated from our set of coercion rules, but since the backend access code that shadows are "repositories" for cannot be automatically generated as well, that would not be productive. Furthermore, we acknowledge the possi- bility that the unconstrained application of these coercion rules would generate shadow relations with no linguistic validity. 224 ical expressions attached to the coercion rules can themselves be evaluated. In that case, the proce- dure that builds logical expressions will fold the logical expressions associated with the coercion rules into the overall logical expression, in order to create an evaluable expression, e For example, consider a backend system that knows about meetings and their start and end times, but doesn't store their duration. Further- more, it knows how to manipulate intervals of time. 
We might amend the coercion rule in (2) above in the following way, and replace the shadow shown in (5): (e) (defCoerce durative-event temporal-interval (lambda x (durative-event-has-interval durative-event x))) (7) (defgelation durative-event-has-iuterval (arg event (vr durative-event)) (arg interval (vr temporal-interval)) (super event-has-property)) (s) (defRela$ion length-of-interval (arg interval (vr temporal-interval)) (arg measure (vr quantity-of-time)) (shadows length-of)) In this situation, the length-of-interval re- lation instantiates a point in the space of possible coercions that represents the system's ability to compare a temporal interval with a time measure- ment. It represents the direct understanding of something like "The length of the coffee break was 10 minutes," where we assume that a coffee break is a kind of temporal interval. Ignoring tense, the logical expression corresponding to this example is: 7 (9) (length-of coffee-break1 lO-minutes) The generalized refinement process will locate the shadow length-of-interval and use the 6If the coercion rules are not all evaluable, we can build an interpretation for the input, but we cannot evaluate it. ? King Kong actually represents measurements as undif- ferentiated pools of individuals, much as it represents "10 planes", for instance. We may ignore that detail here. code associated with it to communicate with the backend system. We can do more, however. Given the existence of the augmented coercion rule, we can understand sentences like our first example "The length of the meeting is 5 hours" by build- ing a chain of coercions that consists of a single link, from events to temporal intervals. In this case, our logical expression will be: (10) (exists y (lambda x (durat ive-event-has-int erval coffee-break1 x) ) (length-of y lO-minutes) ) As long as there is backend access code asso- ciated with the durative-event-has-interval relation, we can process this use of the length-of relation without the shadow in (5) (length-of-event) present. In fact, we can pro- cess any metaphoric reference to an event that appears in an argument position whose filler is re- stricted to intervals of time. Consider the overlap relation, whose ideal meaning is a relation between two planes or two lines. The coercion rules already given will license a shadow that relates two inter- vals: (xx) (defRelat ion overlap (arg obj 1 (vr line)) (arg obj2 (vr line)) (super static-locative) ) (12) (defRelation temporal-overlap (arg objl (vr temporal-interval)) (arg obj2 (vr temporal-interval)) (shadows overlap) ) The shadow in (12) corresponds to an example like "The current calendar year overlaps with the next fiscal year." But given the augmented coer- cion rule, we can understand sentences like "The first meeting overlaps with the second meeting" just as easily: (13) (exists y (lambda x (durat ive-event-has- interval meeting1 x) ) (exists z (lambda x (durat ive-event-has-int erval meeting2 x)) (overlap y z))) 225 This method of supporting metaphorical ex- tension by explicitly defining the space of pos- sible ways of conceptualizing an object allows us considerable flexibility in understanding novel metaphoric use. s The same augmented coercion rules can be used if we wish to license a shadow relation that has no backend access code associated with it. 
We might want to use that strategy in the situation where the metaphoric use can be anticipated but the access code associated with the shadow would have to perform exactly the same computation as the coercion code. 5 Comparison with other ac- counts As in DeJong and Waltz's work (1983), the King Kong coercion mechanism is triggered by viola- tions of sort restrictions on arguments. We do not, however, agree with DeJong and Waltz's contention that "Nouns are far less likely to be metaphorical than verbs." The symbiosis be- tween shadows and coercion rules implies that the metaphor lies not in the functor or its arguments, but rather in the association between them. Fur- thermore, our mechanism also structures the path between metaphoric use and ideal meaning, and provides computational support for argument co- ercion. The mechanism has the same advantage over the work of Jacobs and Martin. 5.1 Jacobs and Martin In a series of papers (Besemer and Jacobs, 1987; Jacobs, 1986; Jacobs, 1987), Paul Jacobs has de- veloped a relationship he calls a view. Views express a relationship between event types that implements metaphoric extension. For example, in order to handle examples like "The command takes three arguments ~, he defines the following view: (VIEW execute-operation causal-doubl e-trans~ er (ROLE-PIAY input object-l) (ROLE-PLAY output object-2) (ROLE-PlAY user source-l) (ROLE-PlAY operation source-2)) SNote that shadows always embody dlsjointness between at least one of their arguments and those in the ideal mean- ing. Thus, no input relation can be simultaneously inter- preted both as a subsumed relation and as a shadow. In Jacobs' system, this view would incorporate the metaphorical mappings from the full range of expressions referring to exchange operations such as giving, buying, and selling. As a result, the mappings in this view may be used to understand expressions such as "This command gives you the file names", and so on. Like the work of Martin (see below), Jacobs' approach has the potential for grouping families of relationships into situations, a capability King Kong does not yet have. Jacobs' views correspond roughly to our shadow relations. However, the view mechanism provides no lim- itations on the correspondences between the ob- jects in the ROLE-PLAY declarations, nor does there seem to be any capability for computing one argu- ment class from another. As a result, it is difficult to see how Jacobs' account would intelligently re- strict the range of novel language use the system will handle, or how it might be used to provide computational support for sort coercion in an in- terface. Martin (1987a, 1987b), working with the same mechanism, takes steps toward addressing the first concern. His work involves learning new metaphoric uses in light of already recognized metaphors. So Martin's heuristics allow the sys- tem to learn what "getting out of Lisp" means if it knows what "getting into Lisp" means. His sys- tem knows about entering and exiting, enabling and disabling Lisp processes, and that there is a map between entering and enabling Lisp. Be- cause entering and exiting are closely connected (they are related by the frame semantic relation reversible-state-change), Martin's system can build the metaphoric link from exiting to disabling Lisp. Techniques such as this one constrain the in- terpretation of novel language use, since the sys- tem can only generalize from the existing library of metaphoric uses. 
However, they provide no com- putational support for evaluating novel uses. 5.2 Gentner et al. Gentner's structure-mapping techniques (Gen- tner, 1983; Gentner et al. 1987) are applicable mostly to explicit analogies such as "An electric battery is like a reservoir." Her approach, imple- mented by Falkenhainer and Forbus (1986), maps the structure of the source of the metaphor to the structure of the target by creating match hypothe- ses between relational representations of the base and target using a set of match construction rules. But the central example of a match construction 226 rule seems to require that the names of the predi- cates in the facts being matched be identical. Un- der this sort of construction rule, it is possible to derive a metaphoric mapping only if the names of the predicates have been set up to encode the metaphor ahead of time. Under this system, it is not possible to deduce new metaphors; in fact, one can only recognize them if the metaphoric link has been made but not recorded. 5.3 Boguraev and Pustejovsky Boguraev and Pustejovsky (1990) argue that the normal conceptions of the structure of the lexicon are impoverished for two major reasons. First, a great number of distinctions beyond those usually made are necessary to capture the essential as- pects of lexical semantics. Second, the common technique for representing ambiguity in the lexi- con (enumeration) falls short because enumeration of word senses neither organizes the senses intelli- gently nor provides for creative use of words. For instance, under the enumeration method, the following uses of "fast" require that at least these three senses he listed in the lexicon: :fast(l): able to move quickly (a fast car) fast(2): able to perform some act quickly (a fast typist) faat(3): taking little time (a fast oil change) However, these three senses are not enough to ac- count for the creative use of "fast" in a phrase such as "a fast highway". Pustejovsky's solution to this problem (outlined also in (Pustejovsky, 1990)) is a "generative lex- icon", which organizes lexical items with respect to one or more of: (1) argument structure, (2) event structure, (3) qualia structure, and (4) lexi- cal inheritance structure. These lexical structures are intended to address the different ways in which words are understood; the differing interpretations of "fast" shown above are taken to be a function of the differing qualia structures of "car", "typist", "oil change", and "highway". While Pustejovsky's proposal for a variety of lexical structures is far richer than anything cur- rently implemented in King Kong, one problem with his account is that the links are links be- tween lexical items and not between objects in a domain model. Simple cases of anaphoric refer- ence demonstrate that in many cases the coercions that he conceives of are properties not of lexical items but rather of the objects referred to: John bought a Porsche, and it's fast. John hired a typist, and he's fast. I drove down 1-90 yesterday, and it's fast. John bought a new car, but Bill's is faster. John hired a good typist, but Bill's is faster. America is supposed to have good high- ways, but Italy's are faster. The lexical items whose qualia structures are in- tended to account for the different interpretations of "fast" are not present in the second clause of each of the preceding examples, but the correct in- terpretations are still available. 
This implies that it is the language user's conception of the object in question (that is, the user's world model) that determines the precise sense of "fast". In our account, in contrast, the links that sup- port the range of metaphoric extensions Puste- jovsky deals with reside in the domain model. This account also supports generalization of these ex- tensions to hierarchies of semantic classes: John bought a new car, and it's fast. John bought a new vehicle, and it's fast. and preserves these extensions under synonymy: John bought a new car, and it's fast. John bought a new automobile, and it's fast. 6 Conclusion One insight missed in most relation-based ac- counts of metaphor 9 is the wide space of possibil- ities for conceptualizing the argument types: how these possibilities are constrained, how the trans- formations can be computed. The coercion mecha- nism in King Kong supports metaphoric processes both statically and dynamically, by defining how metaphoric links between relations are established and supporting computational tools for compre- hending and processing novel metaphoric uses. Acknowledgments This research was supported by the MITRE Cor- poration under MSR project 91340. 9 With the exception of Boguraev and Pustejovsky's, of COUlee. 227 References [Besemer and Jacobs 1987] David J. Besemer and Paul S. Jacobs. FLUSH: A flexible lexicon design. In Proceed- ings of the 25th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 186-192. [Boguraev and Pustejovsky 1990] Branimir Boguraev and James Pustejovsky. Lexical ambiguity and the role of knowl- edge representation in lexicon design. In COLING-gO: Proceedings of the 13th Inter- national Conference on Computational Lin- guistics, volume 2, pages 36-41. [Brachman and Schmolze 1985] R.J. Braehman and J.G. Schmolze. An overview of the KL-ONE knowledge represen- tation system. Cognitive Science, 9(2):171- 216. [DeJong and Waltz 1983] Gerald F. DeJong and David L. Waltz. Un- derstanding novel language. Computers and Mathematics with Applications, 9(1):131-147. [Falkenhainer et at. 1986] B. Falkenhainer, K.D. Forbus, and D. Gen- tner. The structure-mapping engine. In AAAI-86: Proceedings of the Fifth National Conference on Artificial Intelligence, pages 272-277. [Gentner 1983] Dedre Gentner. Structure-mapping: A theo- retical framework for analogy. Cognitive Sci- ence, 7:155-170. [Gentner et al. 1987] Dedre Gentner, Brian Falkenhainer, and Jan- ice Skorstad. Metaphor: the good, the bad and the ugly. In Yorick Wilks, edi- tor, TINLAP-3: Theoretical lssues in Natural Language Processing-$, pages 155-159, New Mexico State University, Las Cruces. [Herskovits 1986] Annette Herskovits. Language and spatial cognition: an interdisciplinary study of the prepositions in English. Cambridge Univer- sity Press, New York. [Herskovits 1989] Annette Herskovits. The linguistic expression of spatial knowledge. L.A.U.D. Paper A 248, Linguistic Agency University of Duisburg. [Jackendoff 1983] Ray Jackendoff. Semantics and Cognition. MIT Press, Cambridge, MA. [Jacobs 1986] Paul S. Jacobs. Language analysis in not-so- limited domains. In Proceedings of the IEEE Fall Joint Computer Conference. [Jacobs 1987] Paul S. Jacobs. A knowledge framework for natural language analysis. In IJCAI-87: Pro- ceedings of the lOth International Joint Con- ference on Artificial Intelligence, pages 675- 678. [Lakoff 1987] George Lakoff. Women, Fire, and Dangerous Things. University of Chicago Press, Chicago. [Martin 1987a] James H. Martin. 
The acquisition of polysemy. In Proceedings of the Fourth International Workshop on Machine Learning, pages 198-204.
[Martin 1987b] James H. Martin. Understanding new metaphors. In IJCAI-87: Proceedings of the 10th International Joint Conference on Artificial Intelligence, pages 137-139.
[Minsky 1975] Marvin Minsky. A framework for representing knowledge. In Patrick Henry Winston, editor, The Psychology of Computer Vision, chapter 6, pages 211-277. McGraw-Hill, New York.
[Pustejovsky 1990] James Pustejovsky. Lexical ambiguity and the role of inheritance. Talk given at BBN, Cambridge, MA, 6 November 1990.
[Talmy 1983] Leonard Talmy. How language structures space. In Herbert Pick and Linda Acredolo, editors, Spatial Orientation: Theory, Research, and Application. Plenum Press, New York.
1991
29
Event-building through Role-filling and Anaphora Resolution Greg Whittemore Electronic Data Systems Corp. 5951 Jefferson Street N.E. Albuquerque, NM 87109-3432 [email protected] Melissa Macpherson Electronic Data Systems Corp. 5951 Jefferson Street N.E. Albuquerque, NM 87109-3432 [email protected] Greg Carlson Linguistics Program, University of Rochester Rochester, NY grca~uorvm.bitnet ABSTRACT In this study we map out a way to build event representations incrementally, using information which may be widely distributed across a dis- course. An enhanced Discourse Representation (Kamp, 1981) provides the vehicle both for car- rying open event roles through the discourse until they can be instantiated by NPs, and for resolving the reference of these otherwise problematic NPs by binding them to the event roles. INTRODUCTION The computational linguistics literature includes a wide variety of ideas about how to represent events in as much detail as is required for reason- ing about their implications. Less has been writ- ten about how to use information in text to incre- mentally build those event representations as dis- course progresses, especially when the identifica- tion of event participants and other details is dis- persed across a number of structures. We will be concerned here with providing a representational framework for this incremental event-building, and with using that representation to examine the ways in which reference to the internal structure of events contributes to discourse cohesion. That is, we will be interested both in the process of gleaning fully-specified event descriptions from continuous text, and in showing how individual elements of an event's internal structure can behave anaphorically. Examples of the kinds of linkages that must be dealt with in building representations of events from text follow: la) He was believed Co be a liar. b) We promised him to be truthful. c) He tried to keep his mouth shut. 2a) Joe gave Pete a book to read. b) Joe gave Pete a book to impress him. c) Joe asked Pete for a book to read. d) I asked Joe for a book to impress Sam. e) Joe gave Pete the message to save his skin. 3a) Joe told Pete that to err is human. b) He told us that to quit eould be silly. 4a) GM will broaden collaboration with Lotus to make a new car. b) Mary thought that an argument with herself would be entertaining. c) Mary thought that a conference with himself would make John look silly. The examples in (1) are familiar cases of syntac- tically obligatory control; we will consider their be- havior to be straightforwardly and locally resolved. The sentences of (2) show infinitival relatives, pur- pose, and 'in-order-to' clauses in which control of the infinitive (and hence of its implicit subject) is sometimes clear, sometimes ambiguous. In (3), a subject infinitival phrase receives an unavoidably generic reading in one case and a non-generic but ambiguous reading in the other. Finally, the exam- ples of (4) indicate that nominalizations of events also have roles whose reference must be determined, and whose existence and identity has consequences for subsequent discourse. Aside from the sentences in (1), in which control is unambiguously sorted out within the sentence on the basis of verb type, all the examples above can 17 be paraphrased with equivalent multi-sentence con- structions in which the facts of referent-assignment are identical. 
Even more extended discourses, in- cluding dialogues such as that in (5), show the in- fluence of an instantiated situation or event over the assignment of referents to entities introduced later in the discourse. 5) A: John has been hobbling around for two weeks with a sprained ankle. B: So what did the nurse say yesterday? A: She said that it would not be smart to run so soon after injuring himself. (adapted from Nishigauchi's 48, cited as a modification of Chao's 28) The distribution of event participants across multi-sentence discourses is sufficient to lay to rest any idea that the linkage is syntactically governed, even though the entities which provide cohesion in these examples are arguments which are typically bound syntactically. That is, it seems that initially unfilled thematic roles play a part in tying one sen- tence to the next. Event roles left unfilled after the operation of local syntactic processing are ap- parently still 'active', in some sense, and they ap- pear to be able to attract participants from exter- nal structures to fill them. Carlson and Tanenhaus (1988) provide psycholinguistic evidence that this is indeed the case; open thematic roles do appear to be effective as cohesion devices. 1 Previous theories about how open roles become filled (mostly intra-sententially) have been based on notions ranging from strictly syntactic to more pragmatic, knowledge-based approaches. Obvi- ously wherever we do have what appears to be invariant and obligatory control, we want to ex- ploit a syntactic explanation. However, these cases 1Whether it is just thematic roles, or those plus certain types of highly predictable adjuncts, or a wide variety of other types of slots which can provide the type of linking we are talking about is still an open question. We do assume that for each event we will encode not only THAT it expects certain arguments to be filled, but HOW it expects them to be filled; for instance it should be perceived that the noun 'salesman' is a highly suitable Agent for a sale event. We may need to know about more than that. In particular, we may require metonymical devices that make discourses like the following possible. I had a hard time shopping. First, the parking lot was all full .... Coherence in this example dearly depends on being able to associate 'the parking lot' with 'store' and 'store' with the Location of the 'shopping' event. This extension is no different in kind, however, from the core of what we are proposing here. do not account for much of the ground that we need to cover. As the examples above show, even the syntactic position PRO often defies straightfor- ward control assignment, and in the case of nominal references to events, Williams' (1985) arguments against a strictly syntactic account of referent- assignment are convincing. Of course, there are no syntactic means for linking arguments with event descriptions intersententially. Appeals to underly- ing thematic role notions and/or more pragmati- cally governed operators then seem to hold more promise for the kinds of situations we are describ- ing. Given their currency above and below the sen- tence level, and the fact that they seem to be sen- sitive to both syntactic and pragmatic constraints, the behavior of unfilled event roles will best be ex- plained at the discourse level. 
Like other discourse anaphoric elements, open roles can not only receive their reference from distant structures, but they also seem to be used productively to create links between linguistic structures and to extend focus in both forward and backward directions. To machine-build representations of events whose essential components are dispersed across multiple structures, two key ingredients are neces- sary. First, the system must have knowledge about events and their expected participants and other characteristics. Given this, one can make predic- tions about the expectancy of arguments and the underlying properties they should hold. The sec- ond ingredient required is a means for assessing the mutual accessibility of discourse entities. As has been pointed out by various researchers, sen- tential structure, thematic relationships, and dis- course configurations all may play a part in deter- mining which entities must, might, and cannot be associated with others, and a discourse framework must make it possible to take all these factors into account in assigning reference and building repre- sentations of events. Our intent in this paper is to provide a prototype model of event building which is effective across clauses, both intra- and inter-sententially. We will incorporate into this representation of events a means for assessing accessibility of events and event participants for anaphoric reference, and we will use the representation to examine the anaphoric behavior of open roles. Event-Building Representation: We have chosen DRT as an overall representation scheme, though we will be modifying it to some extent. DRT has been designed to perform a variety of 18 tasks, including proper placement of individual events in an overall discourse representation and making it possible to indicate which event entities are available for future anaphoric referencing and what constraints hold over those entities. A typi- cal DR for a simple sentence is given in (6). The sentence, 'John gave Bill a dollar' is designated by the variable E1 and has associated with it a pred- icate calculus statement that contains the predi- cate, give, and argument variables V1, V2, and V3. The give event specification and other constraints, again in predicate calculus form, are contained in the lower portion of the DR. In the top half of the DR, any entities, including events, which are avail- able for subsequent anaphoric referencing are listed by their variable names. Vl, V2, V3, E1 (John V1) (Bill V2) (Dolla~V3) El:(give (agent Vl), (goal V2),(theme V3)) 6. A DR for John gave Bill a dollar. Our representation departs in some ways from the way in which the binding of anaphors is usu- ally shown in DRT. In versions of DRT with re- altime processing, whenever an NP is being pro- cessed, two things can happen: i) either the NP can be linked with a previously occurring NP and become anaphorically bound to it, or ii) a new ref- erent can be generated for the NP and posted when no antecedent can be found. For our purposes, it is convenient to include in the DR an extra tier which contains items which have not yet found a referent. ~ To designate the three parts of our DRs, we will use the following tier labels: Available Referents - AR Unbound Referents - UR, and Constraints on Referents - CR. For processing purposes, we will not attempt to immediately bind anaphors as they are encountered in sentences, beyond what we can get for free from syntactic analysis. 
Rather, we will initiate a two- stage process, with the first DR having unbound anaphors and the second attempting representa- tion of binding. In the first representation, we will 2 A buffer of this sort may be implicit in other treatments of anaphora resolution; our extension is just to add it ex- plicitly to the DR representation. Without some such buffer it is not clear how one would handle sentences like 'When he was a kid, John was pretty goofy.' post unbound anaphors in UR. We will also post constraints for unbound items within CR to reflect their type, e.g. (PRO Xl), (DEFINITE X2), and (HE X3). When items in UR become bound (or when their referents are found), their bindings will be represented in AR, they will be crossed off from within UR, and a new DR will be created to reflect the change in status. We will also revise the representation of event descriptions in CR, by including in them implicit arguments for each event as well as ones which are explicitly realized in the sentence. Every event will have its underlying thematic and highly expected adjunctive roles posted in CR, whether the roles have been filled or not. These unfilled or implicit roles are posted as entities requiring binding, in UR. The constraint (IMPLICIT X) will be included for any open role, and for each event variable we will note in CR whether it was a verbal or other- than-verbal description. Example (7) contains an instance of what we intend. The nominalized form of an investigate event, marked with El, has two open slots: Agent and Theme, V1 and V2, respectively. E1 is posted as a possible referent in AR; its two implicit argu- ments V1 and V2 are posted in UR. Similarly, E2, the launch event is posted in AR, while its open agent role, designated by V3, is shown in UR; its explicit Theme is already posted in AR as El. AK: El, E2 UR: V1, V2, V3 CR: El:(investigate (Agent V1)CTheme V2)) E2:(launch (Agent V3) (Theme El)) 7. A DR of the sentence An investigation was launched. We will show that because of the inclusion of open roles in the representation of events and on the UR tier, this framework for discourse repre- sentation makes it possible to link arguments that appear in a variety of structures to their respective events, and thus provides more predictive power for anaphoric resolution processes. Verb-based Event References: We will demonstrate how DRs can be used to build inter- clausal events by providing various examples. We will move from the easiest examples, those that have much syntactic support, to the hardest, those whose resolution is mostly based on pragmatic grounds. ~9 We treat the binding of the PRO subject of em- bedded infinitive as a case of open role filling, and for our purposes, such binding is fundamentally the same in both obligatory and non-obligatory en- vironments, since in every case the result is that open event roles are filled by arguments from ex- ternal sources. That is, even where control is gen- erated entirely within syntax, the links are con- strued as being the result of a cross-clause event- building process. The operational difference is just that wherever control CAN be reliably determined syntactically, as in the case of obligatory control verbs, indices between controllers and PROs will be in place when initial DRs are generated. 3 A typical DR with a controller-controllee relationship would appear as in (8). AR: Xl, El, E2 CR: (John, Xl) El:(try (Agent Xl)(Goal E2)) E2:(leave (Agent Xl)) 8. The DR for John tried to leave. 
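To make the three-tier representation concrete, the following is a minimal sketch, in Python, of how a DR with AR, UR, and CR tiers might be encoded and how DR (8) would be populated once control has been resolved in syntax. The class and field names are our own illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class DR:
    """Three-tier Discourse Representation: AR holds referents available
    for later anaphora, UR holds unbound referents (open roles), and CR
    holds predicate-calculus-style constraints on referents."""
    ar: set = field(default_factory=set)
    ur: set = field(default_factory=set)
    cr: list = field(default_factory=list)

# DR (8), "John tried to leave": control is established in syntax, so the
# Agent of both the try and leave events is already bound to X1, and UR is empty.
dr8 = DR(
    ar={"X1", "E1", "E2"},
    cr=[("John", "X1"),
        ("E1", "try",   {"Agent": "X1", "Goal": "E2"}),
        ("E2", "leave", {"Agent": "X1"})],
)
```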
In the event-building examples that we show in the remainder of the paper, the aim is the con- struction of DRs that ultimately link events and arguments in this same way. What is different about the more complicated cases is just the means of accomplishing the linking. In the case of non- obligatory control of PRO, such results may often require information from several levels of process- ing, and an adequate event-building representation must be able to accommodate the representation of all factors which are shown to be effective in pre- dicting that control. Nishigauchi (1984), for example, demonstrates that choice of controller can often be determined through knowledge of thematic roles (see also Bach, 1982, and Dowty and Ladusaw, 1988, for their ac- counts). In Nishigauchi's account, control of infini- tival purpose clauses and infinitival relative clauses is primarily dependent on the presence of one of three thematic roles from his so-called Primary Lo- cation hierarchy; the idea is that a controller can be assigned if a Goal, Location, or Source is present in the sentence. Where a Goal is present, its refer- 3Dowty and Ladusaw (1988) believe that control is gen- erally established via pragmatic means. They claim that it is pragmatic knowledge of events that enables one to gen- erate links between participants and events. They also be- lieve, however, that there are a large number of situations for which control has become grammaticized, and that there does not need to be any internal analysis in these situations to comprehend argument-to-event links. ent has precedence as controller; where Goal is not present, Location or Source can take control. The examples in (9) are indicative of the kinds of links that can be made via this hierarchy. In ex- ample (9a), the Goal 'Mary' controls the infinitival relative. 4 In (9b), John ends up with the book, so 'John' is Goal, while in (9c), John as the possessor of the book is its Location; in both cases 'John' controls the infinitive. (9) a) John bought Ha.ry a book PRO to read. b) John bought a book PRO to read. c) John has a book PRO to read. To handle examples like (9a-c), we begin with ini- tial DRs that include the kind of information that can be expected from a syntactic/semantic parser that produces initial logical forms. For instance, we know that 'John' is the Agent and 'Mary' the Goal of a buy event, and that the PRO subject of 'read' (the Agent of the read event) has no binding. The object of 'read' is identified in syntax as 'book'. 5 An initial DR for (9a) is illustrated in (10). AR: X1 X2 X3 E1 E2 UR: X4 CR: El:(buy (Agent Xl)(0bjeet X2)(Goal X3)) E2:(read (Agent X4)(Object X2)) (John X1) (book X2) (Mary X3) (PRO X4) (10). The initial DR for John bought Mary a book to read. At this stage, a positive check for Goal in E1 re- sults in the binding of the unbound variable X4 to X3 in AR; X4 is then canceled out of UR. Were there no Goal in El, a Location or Source would have the same effect. In a case where none of these roles is specified explicitly, as in example (11) (from Bach), it must be filled by default and/or from 4 'Mary' is more typically interpreted as Beneficiary in this sentence, but Nishigauchi claims that since Mary ends up with the book, she is the Goal. Bach's (1982) explanation is similar; it is that entity which the matrix verb puts in a position to do the VERBing which controls the infinitive. 
SThis analysis assumes that the infinitive is recognized as an infinitival relative on 'book', so that it does have an Object gap. The infinitive could also of course he an 'in- order-to' clause with intransitive 'read', in which case the controller is the Agent of 'buy'. 20 context before it can bind the infinitive. In this case the default Goal for 'brought' is "present com- pany", and so the PRO subject of 'enjoy' is first person plural inclusive. (11) I brought this miserable Morgon to enjoy with our dinner. Nominal Descriptions of Events: Much discus- sion has focused on the extent to which the internal structure of NPs that have nominalized events as heads, e.g. 'the destruction of the city by the Ro- mans,' carries over the internal structure of the as- sociated verb-headed structure, as in 'the Romans destroyed the city'. The consensus is that such de- verbal noun phrases, while obviously semantically parallel in some ways, are not equivalent to ver- bal descriptions. In particular, semantic arguments associated with the nominalized form are held to be syntactically adjunctive in nature and entirely optional, even where they would be expressed as obligatory complements to the associated verb. We are interested here in cases in which nomi- nals representing events are linked with arguments that are not part of the same immediate syntac- tic environment. Several examples are provided in (12) and (13). As Higgins (1973, cf. Dowty, 1986) has discussed, in sentences like (12a) the subject of the matrix verb 'make' can be associated with the Agent position of an embedded nominal; there- fore we understand 'Romans' to be the Agent of 'attack'. It is apparently the nature of the verb 'make' that permits this association; 'perform' be- haves similarly. The verbs 'suffer' and 'undergo', on the other hand, link their subjects to the Theme or Experiencer of a nominalized event (that is, to what would be the expected object of the associ- ated verb), as shown in (12b). 12a) The Romans made an attack on the Sabines. b) The Romans suffered a crippling defeat. Williams (1985) makes use of the notion that a matrix verb can impose an association between its own arguments and any implicit arguments of a controlled event noun. However as the following examples show, not all verbs impose association of arguments to the degree that 'perform' and 'un- dergo' do. A verb may show some tendency toward association between Agents, as 'schedule' does in (13a), but be open to a realignment of matrix sub- ject with some other more focused role in other environments, as in (13b). Some may have such a slight tendency to associate arguments in a par- ticular way that it can be disrupted by syntactic structure, as in (13c) and (13d). In (13c) Sam may or may not be a party himself to the agreement, but in (13d) he is probably not involved. (13a) John scheduled a takeover/meeting. b) John scheduled a haircut/a checkup. c) Sam negotiated an agreement. d) An agreement was negotiated by Sam. What is necessary in order to sort this out is a working framework within which these tenden- cies can be represented and their interactions with other factors tracked. 
Where the tendency towards association is as strong as it is for 'make', which is considered to be semantically "bleached" in such constructions as make an attempt, make an ar- rangement, make a promise, make an attack (that is, it could be said to have become just a mech- anism for linking matrix subject to object event), our representation will allow for an early linking at the level of syntax. For the general run of cases where an event noun is the object of a matrix verb, as in (13a-d), we must rely on our knowledge of typ- ical interactions between events in order to decide what the linking between matrix subject and em- bedded event might be. The interaction between the AR and the UR tiers of the DR, along with constraints on variables of both types, allows us to manipulate the association as may seem appropri- ate, with as much knowledge as we have at the time of linking. Cross-Sentence Event-building: As we men- tioned earlier, the linking phenomena we are ex- amining hold across, as well as within sentences. Discourse (14) is provided as an example of a dis- course in which an open role is filled in a subsequent sentence. In the first sentence, there are actually several open roles. Left unfilled are (at least) the roles Source and Exchange. With the DR struc- turing we have chosen, an initial DR for the first sentence of (14) would be built as in (15). The main thing to note in (15) is that the open role variables, are Z1 and Q1, the Source and the Exchange, have been posted in UR. (14a) Pete bought a car. b) The salesman was a real jerk. 21 (ls) AR: EI,XI,YI UR: Zl O1 CR: (Pete Xl) (car Y1) El:(buy (Agent Xl), (Theme Y1), (Source ZI), (Exchange Ol)) (implicit Z1) (implicit ql) The initial DR. for the second sentence of (14) is in (16a). The variable X2, representing 'the sales- man', has been posted in the unresolved NP buffer, and X2 will be the first thing to be resolved by way of anaphora operators. The anaphoric processes invoked at this point would be much like what has been promoted else- where. A variety of factors would come into play, including looking at basic semantic characteristics, centering, etc. We would also want to provide a means for ordering available referents as they are placed in AR. in terms of their forward focusing character (Grosz, Joshi, and Weinstein, 1983). For 'the salesman', the previously occurring dis- course entities that are available as referents are El, Xl, and Y1 in the previous AR., and Z1 and Q1 in the previous UR. The possible referent Xl, 'Pete', ranks as a possible candidate but not a very likely one, since if Pete were to be referred to in a subse- quent sentence it would more likely be done via a personal pronoun. The other available referent, Y1, the 'car', is semantically unlikely and is not con- sidered a good choice. A search is then made into the previous UR.. The Source Z1, in this instance, would be a highly likely choice, since any seman- tic qualities that would accompany 'the salesman' would fit those of the Source of a buy event. It has been reported in previous studies that def- inite NPs often have no clear antecedent. For in- stance, 363 out of 649 definite NPs found in a study of corpus of dialogues (Brunner, Ferrara, and Whit- temore, 1990) had no direct linguistic antecedents. 53% of the 363 definite NPs had semantically in- ferrable antecedents, where definite NPs were used to refer to attributes of antecedents and the like, but not to antecedents themselves. 
Apparently, definite NPs function to focus on some partial as- pect of an antecedent or topic and not necessarily to refer directly to it as a whole. 6 Following the 6The other 47% were reported to have no clear an- tecedents, and were only 'topically' tied to the context. It might prove beneficial to re-examine these true orphans and see if any of these refer back to open roles. line of reasoning that one could take from these findings, it could be the case that there is actually a preference for definite NPs to refer back to open roles, since they represent particular points of focus or sub-components of events. 'Salesman', via the variable X2, would then get bound to the buy event and a second DR. with no unresolved anaphora would be returned, as shown in (16b). (16a) AR: E2 UR: X2 CR: (Salesman X2) (definite X2) E2:(IS X2 real-jerk) (16b) AR: X2, E2 UR: CR: (Salesman X2) (definite X2) E2:(IS X2 real-jerk) Similarly, the DR for the first sentence would need modification since now the open Source role, represented as Z1, would need to be bound to X2, 'the salesman' (this updated binding is not shown). Limits on Linking: There are limits on the kinds of linking that can be effected between event descriptions and fillers for open roles. For instance, note that the open slot in the example above does not seem to be available for pronominal reference. If (14b) is replaced with 'He was a real jerk,' the sequence of sentences makes no sense (or at least we would have to say that the same role is not accessed). This restriction appears to be true in general for pronominal reference into event descrip- tions, as the following examples show: • I was attacked. *He was enormous. • We unloaded the car. *They [the suitcases] were very heavy. • This borrowing has got to stop. *They [the borrowed things] get left all over the place. An event description itself, as a whole, nomi- nal or verbal, may function as an antecedent for 22 subsequent anaphoric reference, including pronom- inal reference ('I went swimming. It was horrible.'). It is just pronominal reference INTO an event de- scription, especially a verbal one, which seems to be blocked. The event described in (17a) below cannot typically be elaborated upon by (l?ai). However, (17ai) is fine as a continuation if (17aii), in which the event is nominalized, comes between. (17b), in which the agree event is referred to nominally, can be followed by (17bi), (17bii) or both. (17) a) Bob finally agreed eith Joe. i) *It was to not fight anymore. ii) The agreement ,as negotiated by Sam. b) Bob and Joe finally made an agreement. i) It was to not fight anymore. ii) It/The agreement was negotiated by Sam. c) *It was between Bob and Sam. In our representation the posting of event de- scriptions, verbal and nominal, in AR, accounts for the fact that each can be linked to by a sub- sequent pronominal element. Our intuition is that in order to be completely accessible as a referent, however, an entity must have not only a semantic but also a phonological realization; since open roles are merely implicit until they are bound, it is pre- dictable that there would be a difference in their accessibility. For this reason we post open roles only in UR, not in AR, and in our framework this blocks pronominal access to them. 
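A minimal sketch of the resolution order implied by this discussion, again under our own naming: antecedents in AR are tried first, and only non-pronominal anaphors (such as definite NPs) may go on to bind an open role in UR; a bound role is cancelled from UR and promoted to AR. The fits() test is a placeholder for whatever semantic, centering, and focus criteria the system actually applies.

```python
def resolve(np, is_pronoun, prev_dr, fits):
    """Bind an anaphoric NP against a prior DR (any object with .ar and .ur
    sets, e.g. the DR sketch given earlier).  Returns the chosen referent,
    or None if a fresh referent should be introduced instead."""
    # In a fuller treatment AR would be ranked by forward focus; an unordered
    # set is enough for this sketch.
    for ref in prev_dr.ar:
        if fits(np, ref):
            return ref
    if not is_pronoun:                      # open roles are not accessible to pronouns
        for role in list(prev_dr.ur):
            if fits(np, role):
                prev_dr.ur.discard(role)    # cancel the role from UR...
                prev_dr.ar.add(role)        # ...and make it an available referent
                return role
    return None
```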
As for the fact that nominalizing an event seems to ease the restrictions on referring into it by means of a pronoun (as in the (17ai-ii) examples), our guess is that in these cases the pronominal refer- ence is actually to the event as a thing, and that the apparent elaboration of roles is allowed by the same mechanisms that allow addition of other adjuncts to nominals, as in 'I really enjoyed my vacation. It was in Texas in July.' In any case our tagging of event variables in CR as nominal or verbal allows this distinction to be taken into account. The idea of role slots which are canceled from UR as they are bound explains another restriction on the ways in which events can be elaborated. (17c) above cannot appropriately follow either (171) or (17b), because we already know from either that the agreement was between Bob and Joe. Further, if (17bii) follows (17b), then we know that Sam is not himself a participant in the agreement he negotiated, because we already know from (17b) that the agreenaent was between Bob and Joe. In each of these cases, the open role in question will have been canceled out of UR by binding to other entities before the new anaphoric elements come along, and so there is no possibility of filling a role twice. Hard Cases: Finally, we offer a few comments on a "pretty hard" and a "really hard" example, given in (18) and (19). These are revised versions of the discourse given in (5). The task in both cases is to bind the referent 'John', which appears in the first sentence, to the Agent slot of 'run', which is in the second sentence. (18) John has been hobbling around on a sprained ankle. Today, the nurse said it would be best not to run for teo weeks. (19) John has been hobbling around on a sprained ankle. Today, the nurse told his mother it would be best not to run for two weeks. To resolve these examples, we can employ two tactics. First, we will impose a thematic role asso- ciation between the Addressee of a say event and the Agent of embedded agentless verbs that denote advice. Secondly, we will use the notion of open implicit roles in DtLs to obtain a filler for the open Addressee role in the say/tell event. 7 With these two notions in place, we can easily resolve (18). (18)'s context provides only one pos- sible candidate for the open Addressee role, namely 'John' (that is, if we disregard the speaker of the utterance). Once 'John' is used to fill that role, we can link 'John also, through the default thematic role association, to the Agent slot for 'run'. (19), however shows that the situation can be more complicated. There is no open Addressee role in (19); the explicit Addressee is 'his mother'. By the process above, then, 'his mother' would be linked to the Agent slot of 'run', which of course is incorrect. We do not have a perfect explanation for why (19) is different from (18), other than that John's mother is not the ultimate Addressee. That is, a mechanism is needed that can determine that John's mother transfers the advice on to the per- son who needs it, namely the ailing person, namely John. Even if such a complicated scenario is the ZA more general form of the first step would be a the- matic role reasoning device that permits PROs to be linked with those entities which are most eligible to carry out the action of the subjectless infinitive. This formulation would be in the spirit of Bach, 1982. 
23 correct one, we believe that our combined thematic role/discourse representation would provide a plat- form upon which one could make use of such prag- matic information. Conclusion: Our stated task was to provide a vehicle for constructing event representations which have roles that are not filled by local syntac- tic means. DRT is a natural vehicle for~this kind of exercise, given certain extensions. The major ex- tension is the posting of open event (thematic) roles as potential anchors for subsequent reference. In other words we are treating open roles as a type of anaphor. Where roles integral to an understanding of an event are not immediately filled on the basis of local syntax, we hypothesize that they should be posted nonetheless as not-yet-instantiated slots. We have added a tier to the conventional notion of a DR to accommodate this posting. Our experiments with this representation have shown how information from various levels of pro- cessing can be brought together in event building. This framework also allows us to discover limits on linking phenomena; in particular, it naturally illus- trates the inaccessibility of open roles to pronomi- nal reference, and the tendency for definite NPs to link to substructures within an event. ACKNOWLEDGEMENTS We would like to note that the idea of using DRs as a means for building events across clauses came from a comment by Rich Thomason, cited in Dowty (1986:32): "Rich Thomason (p.c.) has suggested to me that a very natural way to construct a theory of event anaphora would be via Discourse Repre- sentation Theory." Thomason was addressing (we think) the notion of referring to events via nominal- izations. We just extended the idea of using DRT to construct events across clauses to also include those denoted by verbs. [3] Dowty, D. and Ladusaw, W. 1988. Toward a Nongrammatical Account of Thematic Roles, in Volume 21 of SYNTAX AND SEMANTICS, pgs. 61-73. [4] Grosz, B., Joshi, A., and Weinstein, S. 1983. Providing a Unified Account of Definite Noun Phrases in Discourse. SRI Technical note ~292. [5] Kamp, H. 1981. A Theory of Truth and Se- mantic Representation, in J. Groenendijk, T. Jannsen, and M. Stokhof, (eds.). FORMAL METHODS IN THE STUDY OF LANGUAGE. [6] Nishigauchi, T. 1984. Control and the Thematic Domain. LANGUAGE, Volume 60, no. 2, pgs. 215-250. [7] Williams, E. 1980. Predication. LINGUISTIC INQUIRY, Volume 11, no. 1, pgs. 203-238. [8] Williams, E. 1985. PRO and Subject of NP. NATURAL LANGUAGE AND LINGUISTIC THEORY, Volume 3, no. 3, pgs. 297-315. References [1] Carlson, G. and Tanenhaus, M. 1988. Thematic Roles and Language Comprehension. THE- MATIC RELATIONS, VOLUME 21 OF SYN- TAX AND SEMANTICS, pgs. 263-291. [2] Dowty, D. 1986. On the Semantic Content of the Notion "Thematic Role". paper presented at the University of Massachusetts conference on Property Theory, Type Theory, and Semantics, March 13-16, 1986. 24
1991
3
STRUCTURAL AMBIGUITY AND LEXICAL RELATIONS Donald Hindle and Mats Rooth AT&T Bell Laboratories 600 Mountain Avenue Murray Hill, NJ 07974 Abstract We propose that ambiguous prepositional phrase attachment can be resolved on the basis of the relative strength of association of the preposition with noun and verb, estimated on the basis of word distribution in a large corpus. This work suggests that a distributional approach can be effective in resolving parsing problems that apparently call for complex reasoning. Introduction Prepositional phrase attachment is the canonical case of structural ambiguity, as in the time worn example, (1) I saw the man with the telescope The existence of such ambiguity raises problems for understanding and for language models. It looks like it might require extremely complex com- putation to determine what attaches to what. In- deed, one recent proposal suggests that resolving attachment ambiguity requires the construction of a discourse model in which the entities referred to in a text must be reasoned about (Altmann and Steedman 1988). Of course, if attachment am- biguity demands reference to semantics and dis- course models, there is little hope in the near term of building computational models for unrestricted text to resolve the ambiguity. Structure based ambiguity resolution There have been several structure-based proposals about ambiguity resolution in the literature; they are particularly attractive because they are simple and don't demand calculations in the semantic or discourse domains. The two main ones are: • Right Association - a constituent tends to at- tach to another constituent immediately to its right (Kimball 1973). • Minimal Attachment - a constituent tends to attach so as to involve the fewest additional syntactic nodes (Frazier 1978). For the particular case we are concerned with, attachment of a prepositional phrase in a verb + object context as in sentence (1), these two princi- ples - at least in the version of syntax that Frazier assumes - make opposite predictions: Right Asso- ciation predicts noun attachment, while Minimal Attachment predicts verb attachment. Psycholinguistic work on structure-based strate- gies is primarily concerned with modeling the time course of parsing and disambiguation, and propo- nents of this approach explicitly acknowledge that other information enters into determining a final parse. Still, one can ask what information is rel- evant to determining a final parse, and it seems that in this domain structure-based disambigua- tion is not a very good predictor. A recent study of attachment of prepositional phrases in a sam- ple of written responses to a "Wizard of Oz" travel information experiment shows that neither Right Association nor Minimal Attachment account for more than 55% of the cases (Whittemore et al. 1990). And experiments by Taraban and McClel- land (1988) show that the structural models are not in fact good predictors of people's behavior in resolving ambiguity. Resolving ambiguity through lexical associations Whittemore et al. (1990) found lexical preferences to be the key to resolving attachment ambiguity. Similarly, Taraban and McClelland found lexical content was key in explaining people's behavior. Various previous proposals for guiding attachment disambiguation by the lexical content of specific 229 words have appeared (e.g. Ford, Bresnan, and Ka- plan 1982; Marcus 1980). Unfortunately, it is not clear where the necessary information about lexi- cal preferences is to be found. 
In the Whittemore et al. study, the judgement of attachment pref- erences had to be made by hand for exactly the cases that their study covered; no precompiled list of lexical preferences was available. Thus, we are posed with the problem: how can we get a good list of lexical preferences. Our proposal is to use cooccurrence of with prepositions in text as an indicator of lexical pref- erence. Thus, for example, the preposition to oc- curs frequently in the context send NP --, i.e., after the object of the verb send, and this is evi- dence of a lexical association of the verb send with to. Similarly, from occurs frequently in the context withdrawal --, and this is evidence of a lexical as- sociation of the noun withdrawal with the prepo- sition from. Of course, this kind of association is, unlike lexical selection, a symmetric notion. Cooccurrence provides no indication of whether the verb is selecting the preposition or vice versa. We will treat the association as a property of the pair of words. It is a separate matter, which we unfortunately cannot pursue here, to assign the association to a particular linguistic licensing re- lation. The suggestion which we want to explore is that the association revealed by textual distri- bution - whether its source is a complementation relation, a modification relation, or something else - gives us information needed to resolve the prepo- sitional attachment. Discovering Lexical Associa- tion in Text A 13 million word sample of Associated Press new stories from 1989 were automatically parsed by the Fidditch parser (Hindle 1983), using Church's part of speech analyzer as a preprocessor (Church 1988). From the syntactic analysis provided by the parser for each sentence, we extracted a table containing all the heads of all noun phrases. For each noun phrase head, we recorded the follow- ing preposition if any occurred (ignoring whether or not the parser attached the preposition to the noun phrase), and the preceding verb if the noun phrase was the object of that verb. Thus, we gen- erated a table with entries including those shown in Table 1. In Table 1, example (a) represents a passivized instance of the verb blame followed by the prepo- VERB blame control enrage spare grant determine HEAD NOUN PASSIVE money development government military accord radical WHPRO it concession flaw Table h A sample of the Verb-Noun-Preposition table. sition for. Example (b) is an instance of a noun phrase whose head is money; this noun phrase is not an object of any verb, but is followed by the preposition for. Example (c) represents an in- stance of a noun phrase with head noun develop- ment which neither has a following preposition nor is the object of a verb. Example (d) is an instance of a noun phrase with head government, which is the object of the verb control but is followed by no preposition. Example (j) represents an instance of the ambiguity we are concerned with resolving: a noun phrase (head is concession), which is the ob- ject of a verb (grant), followed by a preposition (to). From the 13 million word sample, 2,661,872 noun phrases were identified. Of these, 467,920 were recognized as the object of a verb, and 753,843 were followed by a preposition. Of the noun phrase objects identified, 223,666 were am- biguous verb-noun-preposition triples. Estimating attachment prefer- ences Of course, the table of verbs, nouns and preposi- tions does not directly tell us what the strength lexical associations are. There are three potential sources of noise in the model. 
First, the parser in some cases gives us false analyses. Second, when a preposition follows a noun phrase (or verb), it may or may not be structurally related to that noun phrase (or verb). (In our terms, it may at- tach to that noun phrase or it may attach some- where else). And finally, even if we get accu- rate attachment information, it may be that fre- 230 quency of cooccurrence is not a good indication of strength of attachment. We will proceed to build the model of lexical association strength, aware of these sources of noise. We want to use the verb-noun-preposition table to derive a table of bigrams, where the first term is a noun or verb, and the second term is an associ- ated preposition (or no preposition). To do this we need to try to assign each preposition that occurs either to the noun or to the verb that it occurs with. In some cases it is fairly certain that the preposition attaches to the noun or the verb; in other cases, it is far less certain. Our approach is to assign the clear cases first, then to use these to decide the unclear cases that can be decided, and finally to arbitrarily assign the remaining cases. The procedure for assigning prepositions in our sample to noun or verb is as follows: 1. No Preposition - if there is no preposition, the noun or verb is simply counted with the null preposition. (cases (c-h) in Table 1). 2. Sure Verb Attach 1 - preposition is attached to the verb if the noun phrase head is a pro- noun. (i in Table 1) 3. Sure Verb Attach 2 - preposition is attached to the verb if the verb is passivized (unless the preposition is by. The instances of by fol- lowing a passive verb were left unassigned.) (a in Table 1) 4. Sure Noun Attach - preposition is attached to the noun, if the noun phrase occurs in a con- text where no verb could license the preposi- tional phrase (i.e., the noun phrase is in sub- ject or pre-verbal position.) (b, if pre-verbal) 5. Ambiguous Attach 1 - Using the table of at- tachment so far, if a t-score for the ambiguity (see below) is greater than 2.1 or less than -2.1, then assign the preposition according to the t-score. Iterate through the ambiguous triples until all such attachments are done. (j and k may be assigned) 6. Ambiguous Attach 2 - for the remaining am- biguous triples, split the attachment between the noun and the verb, assigning .5 to the noun and .5 to the verb. (j and k may be assigned) 7. Unsure Attach - for the remaining pairs (all of which are either attached to the preceding noun or to some unknown element), assign them to the noun. (b, if following a verb) This procedure gives us a table of bigrams rep- resenting our guess about what prepositions asso- ciate with what nouns or verbs, made on the basis of the distribution of verbs nouns and prepositions in our corpus. The procedure for guessing attach- ment Given the table of bigrams, derived as described above, we can define a simple procedure for de- termining the attachment for an instance of verb- noun-preposition ambiguity. Consider the exam- ple of sentence (2), where we have to choose the attachment given verb send, noun soldier, and preposition into. (2) Moscow sent more than 100,000 sol- diers into Afganistan ... The idea is to contrast the probability with which into occurs with the noun soldier (P(into [ soldier)) with the probability with which into occurs with the verb send (P(into [ send)). A t- score is an appropriate way to make this contrast (see Church et al. to appear). 
In general, we want to calculate the contrast between the conditional probability of seeing a particular preposition given a noun with the conditional probability of seeing that preposition given a verb. P(prep [ noun) - P(prep [ verb) t= ~/a2(P(prep I noun)) + ~2(e(prep I verb)) We use the "Expected Likelihood Estimate" (Church et al., to appear) to estimate the prob- abilities, in order to adjust for small frequencies; that is, given a noun and verb, we simply add 1/2 to all bigram frequency counts involving a prepo- sition that occurs with either the noun or the verb, and then recompute the unigram frequencies. This method leaves the order of t-scores nearly intact, though their magnitude is inflated by about 30%. To compensate for this, the 1.65 threshold for sig- nificance at the 95% level should be adjusted up to about 2.15. Consider how we determine attachment for sen- tence (2). We use a t-score derived from the ad- justed frequencies in our corpus to decide whether the prepositional phrase into Afganistan is at- tached to the verb (root) send/V or to the noun (root) soldier/N. In our corpus, soldier/N has an adjusted frequency of 1488.5, and send/V has an adjusted frequency of 1706.5; soldier/N occurred in 32 distinct preposition contexts, and send/Via 231 60 distinct preposition contexts; f(send/V into) = 84, f(soidier/N into) = 1.5. From this we calculate the t-score as follows: 1 t- P(wlsoldier/ N ) - P(wlsend/ V) ~/a2(P(wlsoidier/N)) + c~2(P(wlsend/ V)) l(soldier/N into)+ll2 .f(send/V into)+l/2 f(soidierlN)+V/2 /(send/V)+V/2 \//(,oldier/N into)+l/2 /(send/V into)+l[2 (f(soldierlN)+V/2)2 + (/(send/V)+V/2)~ 1.s+1/2 84+1/2 -.. 1488.5+70/2- 1706.5-t-70/2 ~,--8.81 1.5+i/2 84+i/2 1488.5+70/2p -I- 1706.s+70/2)2 This figure of-8.81 represents a significant asso- ciation of the preposition into with the verb send, and on this basis, the procedure would (correctly) decide that into should attach to send rather than to soldier. Of the 84 send/V into bigrams, 10 were assigned by steps 2 and 3 ('sure attachements'). Testing Attachment Prefer- ence To evaluate the performance of this procedure, first the two authors graded a set of verb-noun- preposition triples as follows. From the AP new stories, we randomly selected 1000 test sentences in which the parser identified an ambiguous verb- noun-preposition triple. (These sentences were se- lected from stories included in the 13 million word sample, but the particular sentences were excluded from the calculation of lexical associations.) For every such triple, each author made a judgement of the correct attachment on the basis of the three words alone (forced choice - preposition attaches to noun or verb). This task is in essence the one that we will give the computer - i.e., to judge the attachment without any more information than the preposition and the head of the two possible attachment sites, the noun and the verb. This gave us two sets of judgements to compare the al- gorithm's performance to. a V is the number of distinct preposition contexts for either soldier/N or send/V; in this c~se V = 70. Since 70 bigram frequencies f(soldier/N p) are incremented by 1/2, the unigram frequency for soldier/N is incremented by 70/2. Judging correct attachment We also wanted a standard of correctness for these test sentences. To derive this standard, we to- gether judged the attachment for the 1000 triples a second time, this time using the full sentence context. 
It turned out to be a surprisingly difficult task to assign attachment preferences for the test sam- ple. Of course, many decisions were straightfor- ward; sometimes it is clear that a prepositional phrase is and argument of a noun or verb. But more than 10% of the sentences seemed problem- atic to at least one author. There are several kinds of constructions where the attachment decision is not clear theoretically. These include idioms (3-4), light verb constructions (5), small clauses (6). (3) But over time, misery has given way to mending. (4) The meeting will take place in Quan- rico (5) Bush has said he would not make cuts in Social Security (6) Sides said Francke kept a .38-caliber revolver in his car's glove compartment We chose always to assign light verb construc- tions to noun attachment and small clauses to verb attachment. Another source of difficulty arose from cases where there seemed to be a systematic ambiguity in attachment. (7) ...known to frequent the same bars in one neighborhood. (8) Inaugural officials reportedly were trying to arrange a reunion for Bush and his old submarine buddies ... (9) We have not signed a settlement agreement with them Sentence (7) shows a systematic locative am- biguity: if you frequent a bar and the bar is in a place, the frequenting event is arguably in the same place. Sentence (8) shows a systematic bene- factive ambiguity: if you arrange something for someone, then the thing arranged is also for them. The ambiguity in (9) arises from the fact that if someone is one of the joint agents in the signing of an agreement, that person is likely to be a party to the agreement. In general, we call an attach- ment systematically ambiguous when, given our understanding of the semantics, situations which 232 make the interpretation of one of the attachments true always (or at least usually) also validate the interpretation of the other attachment. It seems to us that this difficulty in assigning attachment decisions is an important fact that de- serves further exploration. If it is difficult to de- cide what licenses a prepositional phrase a signif- icant proportion of the time, then we need to de- velop language models that appropriately capture this vagueness. For our present purpose, we de- cided to force an attachment choice in all cases, in some cases making the choice on the bases of an unanalyzed intuition. In addition to the problematic cases, a sig- nificant number (120) of the 1000 triples identi- fied automatically as instances of the verb-object- preposition configuration turned out in fact to be other constructions. These misidentifications were mostly due to parsing errors, and in part due to our underspecifying for the parser exactly what configuration to identify. Examples of these misidentifications include: identifying the subject of the complement clause of say as its object, as in (10), which was identified as (say minis- ters from); misparsing two constituents as a single object noun phrase, as in (11), which was identi- fied as (make subject to); and counting non-object noun phrases as the object as in (12), identified as (get hell out_oJ). (10) Ortega also said deputy foreign min- isters from the five governments would meet Tuesday in Managua .... (11) Congress made a deliberate choice to make this commission subject to the open meeting requirements ... (12) Student Union, get the hell out of China! Of course these errors are folded into the calcu- lation of associations. 
No doubt our bigram model would be better if we could eliminate these items, but many of them represent parsing errors that cannot readily be identified by the parser, so we proceed with these errors included in the bigrams. After agreeing on the 'correct' attachment for the sample of 1000 triples, we are left with 880 verb-noun-preposition triples (having discarded the 120 parsing errors). Of these, 586 are noun attachments and 294 verb attachments. Evaluating performance First, consider how the simple structural attach- ment preference schemas perform at predicting the Judge 1 I i i i i 4.9 i LA 557 323 85.4 65.9 78.3 Table 2: Performance on the test sentences for 2 human judges and the lexical association proce- dure (LA). outcome in our test set. Right Association, which predicts noun attachment, does better, since in our sample there are more noun attachments, but it still has an error rate of 33%. Minimal Attach. meat, interpreted to mean verb attachment, has the complementary error rate of 67%. Obviously, neither of these procedures is particularly impres- sive. Now consider the performance of our attach- ment procedure for the 880 standard test sen- tences. Table 2 shows the performance for the two human judges and for the lexical association attachment procedure. First, we note that the task of judging attach- ment on the basis of verb, noun and preposition alone is not easy. The human judges had overall error rates of 10-15%. (Of course this is consid- erably better than always choosing noun attach- ment.) The lexical association procedure based on t-scores is somewhat worse than the human judges, with an error rate of 22%, but this also is an improvement over simply choosing the near- est attachment site. If we restrict the lexical association procedure to choose attachment only in cases where its con- fidence is greater than about 95% (i.e., where t is greater than 2.1), we get attachment judgements on 607 of the 880 test sentences, with an overall error rate of 15% (Table 3). On these same sen- tences, the human judges also showed slight im- provement. Underlying Relations Our model takes frequency of cooccurrence as ev- idence of an underlying relationship, but makes no attempt to determine what sort of relationship is involved. It is interesting to see what kinds of relationships the model is identifying. To in- vestigate this we categorized the 880 triples ac- 233 [ choice I % correct ] N V N V total Judge 1 ~ Judge 2 LA Table 3: Performance on the test sentences for 2 human judges and the lexical association proce- dure (LA) for test triples where t > 2.1 cording to the nature of the relationship underly- ing the attachment. In many cases, the decision was difficult. Even the argument/adjunct distinc- tion showed many gray cases between clear partici- pants in an action (arguments) and clear temporal modifiers (adjuncts). We made rough best guesses to partition the cases into the following categories: argument, adjunct, idiom, small clause, locative ambiguity, systematic ambiguity, light verb. With this set of categories, 84 of the 880 cases remained so problematic that we assigned them to category other. Table 4 shows the performance of the lexical at- tachment procedure for these classes of relations. Even granting the roughness of the categorization, some clear patterns emerge. 
Our approach is quite successful at attaching arguments correctly; this represents some confirmation that the associations derived from the AP sample are indeed the kind of associations previous research has suggested are relevant to determining attachment. The procedure does better on arguments than on adjuncts, and in fact performs rather poorly on adjuncts of verbs (chiefly time and manner phrases). The remaining cases are all hard in some way, and the performance tends to be worse on these cases, showing clearly the need for a more elaborated model. Sense Conflations The initial steps of our procedure constructed a table of frequencies with entries f(z,p), where z is a noun or verb root string, and p is a preposition string. These primitives might be too coarse, in that they do not distinguish different senses of a preposition, noun, or verb. For instance, the temporal use of in in the phrase in December is identified with a locative use, as in in Teheran. As a result, the procedure LA necessarily makes the same attachment prediction for in December and in Teheran occurring in the same context. For instance, LA identifies the tuple reopen embassy in as an NP attachment (t-score 5.02). This is certainly incorrect for (13), though not for (14). 2 (13) Britain reopened the embassy in December (14) Britain reopened its embassy in Teheran 2(13) is a phrase from our corpus, while (14) is a constructed example. Similarly, the scalar sense of drop exemplified in (15) sponsors a preposition to, while the sense represented in drop the idea does not. Identifying the two senses may be the reason that LA makes no attachment choice for drop resistance to (derived from (16)), where the score is -0.18. (15) exports are expected to drop a further 1.5 percent to 810,000 (16) persuade Israeli leaders to drop their resistance to talks with the PLO We experimented with the first problem by substituting an abstract preposition in+MONTH for all occurrences of in with a month name as an object. While the tuple reopen embassy in+MONTH was correctly pushed in the direction of a verb attachment (-1.34), in other cases errors were introduced, and there was no compelling general improvement in performance. In tuples of the form drop/grow/increase percent in+MONTH, derived from examples such as (17), the preposition was incorrectly attached to the noun percent. (17) Output at mines and oil wells dropped 1.8 percent in February (18) *1.8 percent was dropped by output at mines and oil wells We suspect that this reveals a problem with our estimation procedure, not for instance a paucity of data. Part of the problem may be the fact that the adverbial noun phrase headed by percent in (17) does not passivize or pronominalize, so that there are no sure verb attachment cases directly corresponding to these uses of scalar motion verbs. Comparison with a Dictionary The idea that lexical preference is a key factor in resolving structural ambiguity leads us naturally to ask whether existing dictionaries can provide useful information for disambiguation. There are reasons to anticipate difficulties in this regard.
Typically, dictionaries have concentrated on the 'interesting' phenomena of English, tending to ignore mundane lexical associations. However, the Collins Cobuild English Language Dictionary (Sinclair et al. 1987) seems particularly appro- priate for comparing with the AP sample for sev- eral reasons: it was compiled on the basis of a large text corpus, and thus may be less subject to idiosyncrasy than more arbitrarily constructed works; and it provides, in a separate field, a di- rect indication of prepositions typically associated with many nouns and verbs. Nevertheless, even for Cobuild, we expect to find more concentration on, for example, idioms and closely bound argu- ments, and less attention to the adjunct relations which play a significant role in determining attach- ment preferences. From a machine-readable version of the dictio- nary, we extracted a list of 1535 nouns associated with a particular preposition, and of 1193 verbs associated with a particular preposition after an object noun phrase. These 2728 associations are many fewer than the number of associations found in the AP sample. (see Table 5.) Of course, most of the preposition association pairs from the AP sample end up being non- significant; of the 88,860 pairs, fewer than half (40,869) occur with a frequency greater than 1, and only 8337 have a t-score greater than 1.65. So our sample gives about three times as many sig- nificant preposition associations as the COBUILD dictionary. Note however, as Table 5 shows, the overlap is remarkably good, considering the large space of possible bigrams. (In our bigram table Source [ COBUILD AP sample AP sample (f > 1) AP sample (t > 1.65) Total I NOUN I VERB 2728 88,860 40,869 8,337 COBUILD n AP 1,931 COBUILD N AP 1,040 (t > 1.65) 1535 1193 64,629 24,231 31,241 9,628 6,307 2,030 1,147 784 656 384 Table 5: Count of noun and verb associations for COBUILD and the AP sample there are over 20,000 nouns, over 5000 verbs, and over 90 prepositions.) On the other hand, the lack of overlap for so many cases - assuming that the dictionary and the significant bigrams actually record important preposition associations - indi- cates that 1) our sample is too small, and 2) the dictionary coverage is widely scattered. First, we note that the dictionary chooses at- tachments in 182 cases of the 880 test sentences. Seven of these are cases where the dictionary finds an association between the preposition and both the noun and the verb. In these cases, of course, the dictionary provides no information to help in choosing the correct attachment. Looking at the 175 cases where the dictionary finds one and only one association for the preposi- tion, we can ask how well it does in predicting the correct attachment. Here the results are no better than our human judges or than our bigram proce- dure. Of the 175 cases, in 25 cases the dictionary finds a verb association when the correct associa- tion is with the noun. In 3 cases, the dictionary finds a noun association when the correct associa- tion is with the verb. Thus, overall, the dictionary is 86% correct. It is somewhat unfair to use a dictionary as a source of disambiguation information; there is no reason to expect that a dictionary to provide in- formation on all significant associations; it may record only associations that are interesting for some reason (perhaps because they are semanti- cally unpredictable.) 
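The dictionary experiment amounts to a simple decision rule: predict an attachment only when exactly one of the two extracted association lists mentions the preposition. The sketch below assumes those associations have been loaded into sets keyed by noun and by verb; the data layout and names are illustrative and not the format of the machine-readable dictionary itself.

    def dictionary_attachment(verb, noun, prep, noun_assoc, verb_assoc):
        # noun_assoc / verb_assoc: word -> set of prepositions the dictionary
        # lists for it (assumed representation of the extracted entries).
        n_hit = prep in noun_assoc.get(noun, set())
        v_hit = prep in verb_assoc.get(verb, set())
        if n_hit and not v_hit:
            return 'N'
        if v_hit and not n_hit:
            return 'V'
        return None    # no association found, or one for both: no decision

    def evaluate(triples, gold, noun_assoc, verb_assoc):
        # Accuracy over just those test triples where the dictionary decides.
        decided = correct = 0
        for (verb, noun, prep), answer in zip(triples, gold):
            guess = dictionary_attachment(verb, noun, prep, noun_assoc, verb_assoc)
            if guess is not None:
                decided += 1
                correct += (guess == answer)
        return decided, (correct / decided if decided else 0.0)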
Table 6 shows a small sample of verb-preposition associations from the AP sam- 235 AP sample COBUILD approach appropriate approve approximate arbitrate argue arm arraign arrange array arrest arrogate ascribe ask assassinate assemble assert assign assist associate about (4.1) with (2.4) for (2.5) with (2.5) as(3.2) in (2.4) on (4.1) through (5.9) after (3.4) along_with (6.1) during (3.1) on (2.8) while (3.9) about (4.3) in (2.4) at (3.8) over (5.8) to (5.1) in (2.4) with (6.4) for to between with with on for in for to to about to in with with Table 6: Verb-(NP)-Preposition associations in AP sample and COBUILD. pie and from Cobuild. The overlap is considerable, but each source of information provides intuitively important associations that are missing from the other. Conclusion Our attempt to use lexical associations derived from distribution of lexical items in text shows promising results. Despite the errors in parsing introduced by automatically analyzing text, we are able to extract a good list of associations with prepositions, overlapping significantly with an ex- isting dictionary. This information could easily be incorporated into an automatic parser, and addi- tional sorts of lexical associations could similarly be derived from text. The particular approach to deciding attachment by t-score gives results nearly as good as human judges given the same infor- mation. Thus, we conclude that it may not be necessary to resort to a complete semantics or to discourse models to resolve many pernicious cases of attachment ambiguity. It is clear however, that the simple model of at- tachment preference that we have proposed, based only on the verb, noun and preposition, is too weak to make correct attachments in many cases. We need to explore ways to enter more complex calculations into the procedure. References Altmman, Gerry, and Mark Steedman. 1988. Interac- tion with context during human sentence process- ing. Cognition, 30, 191-238. Church, Kenneth W. 1988. A stochastic parts program and noun phrase parser for unrestricted text, Proceedings of the Second Conference on Applied Natural Language Processing, Austin, Texas. Church, Kenneth W., William A. Gale, Patrick Hanks, and Donald Hindle. (to appear). Using statistics in lexical analysis, in Zernik (ed.) Lexical acqui- sition: using on-line resources to build a lexicon. Ford, Marilyn, Joan Bresnan and Ronald M. Kaplan. 1982. A competence based theory of syntactic clo- sure, in Bresnan, J. (ed.) The Mental Represen. tation o.f Grammatical Relations. MIT Press. Frazier, L. 1978. On comprehending sentences: Syn- tactic parsing strategies. PhD. dissertation, Uni- versity of Connecticut. Hindle, Donald. 1983. User manual for fidditch, a deterministic parser. Naval Research Laboratory Technical Memorandum 7590-142. Kimball, J. 1973. Seven principles of surface structure parsing in natural language, Cognition, 2, 15-47. Marcus, Mitchell P. 1980. A theory of syntactic recog- nition for natural language. MIT Press. Sinclair, J., P. Hanks, G. Fox, R. Moon, P. Stock, et al. 1987. Collins Cobuild English Language Dic- tionary. Collins, London and Glasgow. Taraban, Roman and James L. McClelland. 1988. Constituent attachment and thematic role as- signment in sentence processing: influences of content-based expectations, Journal of Memory and Language, 27, 597-632. Whittemore, Greg, Kathleen Ferrara and Hans Brun- net. 1990. Empirical study of predictive powers of simple attachment schemes for post-modifier prepositional phrases. 
Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics, 23-30.
1991
30
STRATEGIES FOR ADDING CONTROL INFORMATION TO DECLARATIVE GRAMMARS Hans Uszkoreit University of Saarbrticken and German Research Center for Arlfficial Intelligence (DFKI) W-6600 Saarbriicken 11, FRG [email protected] Abstract Strategies are proposed for combining different kinds of constraints in declarative grammars with a detachable layer of control information. The added control information is the basis for parametrized dynamically controlled linguistic deduction, a form of linguistic processing that permits the implementation of plausible linguistic performance models without giving up the declarative formulation of linguistic competence. The information can be used by the linguistic processor for ordering the sequence in which conjuncts and disjuncts are processed, for mixing depth-first and breadth-first search, for cutting off undesired derivations, and for constraint-relaxation. 1 Introduction Feature term formalisms (FTF) have proven extremely useful for the declarative representation of linguistic knowledge. The family of grammar models that are based on such formalisms include Generalized Phrase Structure Grammar (GPSG) [Gazdar et al. 1985], Lexical Functional Grammar (LFG) [Bresnan 1982], Functional Unification Grammar (bUG) [Kay 1984], Head-Driven Phrase Structure Grammar (I-IPSG) [Pollard and Sag 1988], and Categorial Unification Grammar (CUG) [Karttunen 1986, Uszkoreit 1986, Zeevat et al. 1987]. Research for this paper was carried out in parts at DFKI in the project DIsco which is funded by the German Ministry for Research and Technology under Grant-No.: 1TW 9002. Partial funding was also provided by the German Research Association (DFG) through the Project BiLD in the SFB 314: Artificial Intelligence and Knowledge-Based Systems. For fruitful discussions we would like to thank our colleagues in the projects DISCO, BiLD and LIIX)G as well as members of audiences at Austin, Texas, and Kyoto, Japan, where preliminary versions were presented. Special thanks for valuable comment and suggestions go to Gregor Erbach, Stanley Peters, Jim Talley, and Gertjan van Noord. The expressive means of feature term formalisms have enabled linguists to design schemes for a very uniform encoding of universal and language-particular linguistic principles. The most radical approach of organizing linguistic knowledge in a uniform way that was inspired by proposals of Kay can be found in HPSG. Unification grammar formalisms, or constraint-based grammar formalisms as they are sometimes called currently constitute the preferred paradigm for grammatical processing in computational linguistics. One important reason for the success of unification grammars I in computational linguistics is their purely declarative nature. Since these grammars are not committed to any particular processing model, they can be used in combination with a number of processing strategies and algorithms. The modularity has a number of advantages: • freedom for experimentation with different processing schemes, • compatibility of the grammar with improved system versions, • use of the same grammar for analysis and generation, • reusability of a grammar in different systems. Unification grammars have been used by theoretical linguists for describing linguistic competence. There exist no processing models for unification grammars yet that incorporate at least a few of the most widely accepted observations about human linguistic performance. 
• Robustness: Human listeners can easily parse illformed input and adapt to patterns of ungrammaticality. 1The notion of grammar assumed here is equivalent to the structured collection of linguistic knowledge bases including the lexicon, different types of rule sets, linguistic principles, etc. 237 • Syntactic disambiguation in parsing: Unlikely derivations should be cut off or only tried after more likely ones failed. (attachment ambiguities, garden paths) • Lexical disarnbiguation in parsing: Highly unlikely readings should be suppressed or tried only if no result can be obtained otherwise. • Syntactic choice in generation: In generation one derivation needs to be picked out of a potentially infinite number of paraphrases. • Lexical choice in generation: One item needs to be picked out of a large number of alternatives. • Relationship between active and passive command of a language: The set of actively used constructions and lexical items is a proper subset of the ones mastered passively. The theoretical grammarian has the option to neglect questions of linguistic performance and fully concentrate on the grammar as a correct and complete declarative recursive definition of a language fragment. The psycholinguist, on the other hand, will not accept grammar theory and formalism if no plausible processing models can be shown. Computational linguists-independent of their theoretical interests-have no choice but to worry about the efficiency of processing. Unfortunately, as of this date, no implementations exist that allow efficient processing with the type of powerful unification grammars that are currently preferred by theoretical grammarians or grammar engineers. As soon as the grammar formalism employs disjunction and negation, processing becomes extremely slow. Yet the conclusion should not be to abandon unification grammar but to search for better processing models. Certain effective control strategies for linguistic deduction with unification grammars have been suggested in the recent literature. [Shieber et al. 1990, Gerdemarm and Hinrichs 1990] The strategies do not allow the grammar writer to attach control information to the constraints in the grammar. Neither can they be used for dynamic preference assignments. The model of control proposed in this paper can be used to implement these strategies in combination with others. However, the strategies are not encoded in the program but control information and parametrization of deduction. The claim is that unification grammar is much better suited for the experimental and inductive development of plausible processing models than previous grammar models. The uniformily encoded constraints of the grammar need to be enriched by control information. This information serves the purpose to reduce local indeterminism through reordering and pruning of the search graph during linguistic deduction. This paper discusses several strategies for adding control information to the grammar without sacrificing its declarative nature. One of the central hypotheses of the paper is that-in contrast to the declarative meaning of the grammar-the order in which subterms in conjunctions and disjunctions are processed is of importance for a realistic processing model. In disjunctions, the disjuncts that have the highest probability of success should be processed first, whereas in conjunctions the situation is reversed. 
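The claim about ordering can be motivated by a toy expected-cost calculation. Assuming independent conjuncts of equal unification cost, with invented failure probabilities, the sketch below enumerates all orders and confirms that unifying the most failure-prone conjunct first minimizes the expected number of unifications; the dual argument favors trying the most success-prone disjunct first when only one solution is needed.

    from itertools import permutations

    def expected_unifications(failure_probs):
        # Expected number of conjuncts unified before the first failure
        # (or until all succeed), assuming independent, equally costly conjuncts.
        expected, p_reached = 0.0, 1.0
        for f in failure_probs:
            expected += p_reached          # unified only if all earlier conjuncts succeeded
            p_reached *= (1.0 - f)
        return expected

    probs = {"agreement": 0.4, "case": 0.25, "definiteness": 0.1}   # invented numbers
    best = min(permutations(probs),
               key=lambda order: expected_unifications([probs[c] for c in order]))
    print(best)    # ('agreement', 'case', 'definiteness'): most failure-prone first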
2 Control information in conjunctions 2.1 Ordering conjuncts In this context conjuncts are all feature subterms that are combined explicitly or implicitly by the operation of feature unification. The most basic kind of conjunctive term that can be found in all FFFs is the conjunction of feature-value pairs. t"2" V2 Other types of conjunctive terms in the knowledge base may occur in formalisms that allow template, type or sort names in feature term specifications. Verb [Transitive] |3raSing / |lex : hits / t_sem : hit'-] If these calls are processed (expanded) at compile time, the conjunction will also be processed at compile time and not much can be gained by adding control information. If, however, the type or template calls are processed on demand at run time, as it needs to be the case in FTFs with recursive types, these names can be treated as regular conjuncts. If a conjunction is unified with some other feature term, every conjunct has to be unified. Controlling the order in which operands are processed in conjunctions may save time if conjuncts can be processed first that are most likely to fail. This observation is the basis for a reordering method proposed by Kogure [1990]. If, e.g., in syntactic rule applications, the value of the attribute agreement in the representation of nominal elements 238 leads to clashes more often than the value of the attribute definiteneness, it would in general be more efficient to unify agreement before definiteness. Every unification failure in processing cuts off some unsuccessful branch in the search tree. For every piece of information in a linguistic knowledge base we will call the probability at which it is directly involved in search tree pruning its failure potential. More exactly, the failure potential of a piece of information is the average number of times, copies of this (sub)term turn to _1. during the processing of some input. The failure path from the value that turns to _1_ fh'st up to the root is determined by the logical equivalences _1_ = a : _1_ (for any attribute c0 2_ = [_1. x] (for any term x) x = {.J_ x} (for any term x) ± = {.L} plus the appropriate associative laws. Our experience in grammar development has shown that it is very difficult for the linguist to make good guesses about the relative failure potential of subterms of rules, principles, lexical entries and other feature terms in the grammar. However, relative rankings bases on failure potential can be calculated by counting failures during a training phase. However, the failure potential, as it is defined here, may depend on the processing scheme and on the order of subterms in the grammar. If, e.g., the value of the agreement feature person in the definition of the type Verb leads to failure more often than the value of the feature number, this may simply be due to the order in which the two subterms are processed. Assume the unlikely situation that the value of number would have led to failure-if the order had been reversed-in all the cases in which the value of person did in the oM order. Thus for any automatic counting scheme some constant shuffling and reshuffling of the conjunct order needs to be applied until the order stabilizes (see also [Kogure 1990]). There is a second criterion to consider. Some unifications with conjuncts build a lot of structure whereas others do not. Even if two conjuncts lead to failure the same number of times, it may still make a difference in which order they are processed. 
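A minimal sketch of such a counting scheme, under the assumption that the unifier signals failure by returning None: during a training phase the conjunct order is repeatedly shuffled while failures are tallied, and at run time conjuncts are unified in decreasing order of observed failure potential. The class layout and the unify parameter are illustrative; a real feature-term unifier is of course far more involved.

    import random

    class Conjunct:
        def __init__(self, term):
            self.term = term
            self.trials = 0
            self.failures = 0     # how often this conjunct turned the result to bottom

        def failure_potential(self):
            return self.failures / self.trials if self.trials else 0.0

    def unify_conjunction(other, conjuncts, unify, training=False):
        # unify(a, b) is assumed to return None for bottom (failure).
        if training:
            order = list(conjuncts)
            random.shuffle(order)     # reshuffle so counts are not biased by a fixed order
        else:
            order = sorted(conjuncts, key=Conjunct.failure_potential, reverse=True)
        result = other
        for c in order:
            c.trials += 1
            result = unify(result, c.term)
            if result is None:        # the whole conjunction fails here
                c.failures += 1
                return None
        return result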
Finally there might good reasons to process some conjuncts before others simply because processing them will bring in additional constraints that can reduce the size of the search tree. Good examples of such strategies are the so-called head-driven or functor-driven processing schemes. The model of controlled linguistic deduction allows the marking of conjuncts derived by failure counting, processing effort comparisons, or psyeholinguistic observations. However, the markings do not by themselves cause a different processing order. Only if deduction is parametrized appropriately, the markings will be considered by the type inference engine. 2.2 Relaxation markings Many attempts have been made to achieve more robustness in parsing through more or less intricate schemes of rule relaxation. In FTFs all linguistic knowledge is encoded in feature terms that denote different kinds of constraints on linguistic objects. For the processing of grammatically illformed input, constraint relaxation techniques are needed. Depending on the task, communication type, and many other factors certain constraints will be singled out for possible relaxation. A relaxation marking is added to the control information of any subterm c encoding a constraint that may be relaxed. A relaxation marking consists of a function r c from relaxation levels to relaxed constraints, i.e., a set of ordered pairs <i, ci> where i is an integer greater than 0 denoting a relaxation level and ci is a relaxed constraint, i.e., a term subsuming c. 2 The relaxation level is set as a global parameter for processing. The default level is 0 for working with an unrelaxed constraint base. Level 1 is the first level at which constraints are weakened. More than two relaxation levels are only needed if relaxation is supposed to take place in several steps. If the unification of a subterm bearing some relaxation marking with some other term yields &, unification is stopped without putting .L into the partial result. The branch in the derivation is discontinued just as if a real failure had occurred but a continuation point for backtracking is kept on a backtracking stack. The partial result of the unification that was interrupted is also kept. If no result can be derived using the grammar without relaxation, the relaxation level is increased and backtracking to the continuation points is activated. The 2Implicitely the ordered pair <0, c> is part of the control information for every subterm. Therefore it can be omitted. 239 subterm that is marked for relaxation is replaced by the relaxed equivalent. Unification continues. Whenever a (sub)term c from the grammar is encountered for which re(i) is defined, the relaxed constraint is used. This method also allows processing with an initial relaxation level greater than 0 in applications or discourse situations with a high probability of ungram- matical inpuL For a grammar G let Gi be the grammar G except that every constraint is replaced by rc(i). Let L i stand for the language generated or recognized by a grammar G i. If constraints are always properly relaxed, i.e., if relaxation does not take place inside the scope of negation in FITs that provide negation, L i will always be a subset ofLi+ 1. Note that correctness and completeness of the declarative grammar GO is preserved under the proposed relaxation scheme. All that is provided is an efficient way of jumping from processing with one grammar to processing with another closely related grammar. 
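One way such relaxation markings might be represented and consulted is sketched below. The class layout is an assumption, and the bookkeeping of continuation points and partial unification results described above is reduced here to simply re-running the derivation at the next relaxation level.

    class RelaxableConstraint:
        # A constraint c together with its relaxation marking r_c: level -> relaxed term.
        def __init__(self, term, relaxations=None):
            self.variants = {0: term}            # level 0 is the unrelaxed constraint
            self.variants.update(relaxations or {})

        def at_level(self, level):
            # Use the most specific variant defined at or below the current level.
            best = max(i for i in self.variants if i <= level)
            return self.variants[best]

    def derive(inputs, constraints, unify, max_level=1):
        # Try relaxation level 0 first; move to a higher level only on global failure.
        for level in range(max_level + 1):
            results = []
            for term in inputs:
                for c in constraints:
                    term = unify(term, c.at_level(level)) if term is not None else None
                if term is not None:
                    results.append(term)
            if results:
                return level, results
        return None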
The method is based on the assumption that the relaxed grammars axe properly relaxed and very close to the unrelaxed grammar. Therefore all intermediate results from a derivation on a lower relaxation level can be kept on a higher one. 3 Control information in disjunctions 3.1 Ordering of disjuncts In this section, it will be shown how the processing of feature terms may be controlled through the association of preference weights to disjuncts in disjunctions of constraints. The preference weights determine the order in which the disjuncts are processed. This method is the most relevant part of controlled linguistic deduction. In one model control information is given statically, in a second model it is calculated dynamically. Control information cannot be specified independent from linguistic knowledge. For parsing some readings in lexical entries might be preferred over others. For generation lexical choice might be guided by preference assignments. For both parsing and generation certain syntactic constructions might be preferred over others at choice points. Certain translations might receive higher preference during the transfer phase in machine translation. Computational linguists have experimented with assignments of preferences to syntax and transfer rules, lexical entries and lexical readings. Preferences are usually assigned through numerical preference markers that guide lexical lookup and lexical choice as well as the choice of rules in parsing, generation, and transfer processes. Intricate schemes have been designed for arithmetically calculating the preference marker of a complex unit from the preference markers of its parts. In a pure context-free grammar only one type of disjunction is used which corrresponds to the choice among rules. In some unification grammars such as lexical functional grammars, there exist disjunction between rules, disjunction between lexical items and disjunction between feature-values in f-structures. In such grammars a uniform preference strategy cannot be achieved. In other unification grammar formalisms such as FUG or HPSG, the phrase structure has been incorporated into the feature terms. The only disjunction is feature term disjunction. Our preference scheme is based on the assumption that the formalism permits one type of disjunction only. For readers not familiar with such grammars, a brief outline is presented. In HPSG grammatical knowledge is fully encoded in feature terms. The formalism employs conjunction (unification), disjunction, implication, and negation as well as special data types for lists and sets. Subterms can also be connected through relational constraints. Linguistically relevant feature terms are order-sorted, i.e., there is a partially ordered set of sorts such that every feature term that describes a linguistic object is assigned to a sort. The grammar can be viewed as a huge disjunctive constraint on the wellformedness of linguistic signs. Every wellformed sign must unifiy with the grammar. The grammar consists of a set of universal principles, a set of language-particular principles, a set of lexical entries (the lexicon), and a set of phrase-structure rules. The grammar of English contains all principles of universal grammar, all principles of English, the English lexicon, and the phrase-structure rules of English. A sign has to conform with all universal and language-particular principles, therefore these principles are combined in conjunctions. 
It is either a lexical sign in which case it has to unify with at least one lexical entry or it is a phrasal sign in which case it needs to unify with at least one phrase-structure rule. The lexicon and the set of rules are therefore combined in disjunctions. 240 [Pi] UniversalGrammar= P2 ['P':~] Principles_of_English = ~P.."+ Lpo Rules_of_English = R2 P [U ve G mar l Grammar of English = [Principles__ofEnglish| l/Rules--°f--English I] L/Lexicon_of_English JJ Figure 1. Organization of the Grammar of English in HPSG Such a grammar enables the computational linguist to implement processing in either direction as mere type inference. However, we claim that any attempts to follow this elegant approach will lead to terribly inefficient systems unless controlled linguistic deduction or an equally powerful paramelrizable control scheme is employed. Controlled linguistic deduction takes advantage of the fact that a grammar of the sort shown in Figure 1 allows a uniform characterization of possible choice points in grammatical derivation. Every choice point in the derivation involves the processing of a disjunction. Thus feature disjunction is the only source of disjunction or nondeterminism in processing. This is easy to see in the case of lexical lookup. We assume that a lexicon is indexed for the type of information needed for access. By means of distributive and associative laws, the relevant index is factored out. A lexicon for parsing written input is indexed by a feature with the attribute graph that encodes the graphemic form. A lexicon with the same content might be used for generation except that the index will be the semantic content. An ambiguous entry contains a disjunction of its readings. In the following schematized entry for the English homograph bow the disjunction contains everything but the graphemic form. 3 graph: (bow)- (bowl~ I?+ l ~OWkl 3.2 Static preferences There exist two basic strategies for dealing with disjunctions. One is based on the concept of backtracking. One disjunct is picked (either at random or from the top of a stack), a continuation point is set, and processing continues as if the picked disjtmct were the only one, i.e., as if it were the whole term. If processing leads to failure, the computation is set back completely to the fixed continuation point and a different (or next) disjunct is picked for continuation. If the computation with the first disjunct yields success, one has the choice of either to be satisfied with the (first) solution or to set the computation back to the continuation point and try the next disjunct. With respect to the disjunction, this strategy amounts to depth-first search for a solution. The second strategy is based on breadth-f'wst search. All disjuncts are used in the operation. If, e.g., a disjunction 3Additional information such as syntactic category might also be factored out within the entry: - ph: -synllocallcat: n] / J synllocallcat: vJ~ Ibow,+,,a 1 I ] However, all we are interested in in this context is the observation that in any case the preferences among readings have to be associated with disjuncts. 241 is unified with a nondisjunctive term, the term is unified with every disjunct. The result is again a disjunction. The strategy proposed here is to allow for combinations of depth-first and breadth-first processing. Depth-first search is useful if there are good reasons to believe that the use of one disjunct will lead to the only result or to the best result. 
A mix of the two basic strategies is useful if there are several disjuncts that offer better chances than the others. Preference markers (or preference values) are attached to the disjuncts of a disjunction. Assume that a preference value is a continuous value p in 0 < p _< 10. Now a global width factor w in 0 < w < 10 can be set that separates the disjuncts to be tried out fast from the ones that can only be reached through backtracking. All disjuncts are tried out f'n-st in parallel whose values Pi are in Praax-W <- Pi <- Pmax. If the width is set to 2, all disjuncts would be picked that have values Pi in Pmax -2 <- Pi < Pmax. Purely depth-first and purely breadth-fast search are forced by setting the threshold to 0 or 10 respectively. 3.3 Dynamic preferences One of the major problems in working with preferences is their contextual dependence. Although static preference values can be very helpful in guiding the derivation, especially for generation, transfer, or limiting lexical ambiguity, often different preferences apply to different contexts. Take as an example again the reduction of lexical ambiguity. It is clearly the context that influences the hearers preferences in selecting a reading. 4 The astronomer marr/ed a star. vs. The movie director married a star. The tennis player opened the ball. vs. The mayor opened the ball. Preferences among syntactic constructions, that is preferences among rules, depend on the sort of text to be A trivial but unsatisfactory solution is to substitute the preference values by a vector of values. Depending on the subject matter, the context, or the approriate style or 4 The fnst example is due to Reder [1983]. register, different fields of the vector values might be considered for controlling the processing. However, there are several reasons that speak against such a simple extension of the preference mechanism. First of all, the number of fields that would be needed is much too large. For lexical disambiguation, a mere classification of readings according to a small set of subject domains as it can be found in many dictionaries is much too coarse. Take, e.g., the English word line. The word is highly ambiguous. We can easily imagine appropriate preferred readings in the subject domains of telecommunication, geometry, genealogy, and drug culture. However, even in a single computer manual the word may, depending on the context, refer to a terminal line, to a line of characters on the screen, to a horizontal separation line between editing windows, or to many other things. (In each case there is a different translation into German.) A second reason comes from the fact that preferences are highly dynamic, i.e., they can change at any time during processing. Psycholinguistic experiments strongly suggest that the mere perception of a word totally out of context already primes the subject, i.e., influences his preferences in lexical choice. [Swinney 1979] The third reason to be mentioned here is the multifactorial dependency of preferences. Preferences can be the result of a combination of factors such as the topic of the text or discourse, previous occurrence of priming words, register, style, and many more. In order to model the dynamics of preferences, a processing model is proposed that combines techniques from connectionist research with the declarative grammar formalisms through dynamic preference values. 
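The selection regime of Section 3.2 can be stated directly in code: given preference values p_i in (0, 10] and a global width factor w, every disjunct whose value lies within w of the maximum is pursued in parallel, while the rest are kept for backtracking. The function name and the example entry are illustrative.

    def split_disjuncts(disjuncts, preferences, width):
        # Disjuncts with p_i in [p_max - width, p_max] are tried first (breadth-first
        # among themselves); the others are stacked for possible backtracking.
        # width = 0 approximates pure depth-first search, width = 10 pure breadth-first.
        p_max = max(preferences)
        active, deferred = [], []
        for d, p in zip(disjuncts, preferences):
            (active if p_max - width <= p else deferred).append(d)
        return active, deferred

    # e.g. three readings of an ambiguous lexical entry:
    active, deferred = split_disjuncts(["bow/weapon", "bow/ship", "bow/gesture"],
                                       [9.0, 4.5, 8.2], width=2)
    print(active)      # ['bow/weapon', 'bow/gesture']
    print(deferred)    # ['bow/ship']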
Instead of assigning permanent preference values or value vectors to disjuncts, the values are dynamically calculated by a spreading-activation net. So far the potentials of neural nets for learning (e.g. backpropagation schemes) have not been exploited. Every other metaphor for setting up weighted connections between constraints in disjunctions would serve our purpose equally well. 5 5For an introduction to connectionist nets see Rumelhart, Hinton, and McCleUand [1986]. For an overview of different connectionist models see Feldman and Ballard [1982] and Kemke [1988]. 242 The type of net employed for our purposes is extremely simple. 6 Every term in the linguistic knowledge bases whose activation may influence a preference and every term whose preference value may be influenced is associated with a unit. These sets are not disjoint since the selection of one disjunct may influence other preferences. In addition there can be units for extralinguistic influences on preferences. Units are connected by unidirectional weighted finks. They have an input value i, an activation value a, a resting value r, and a preservation function f. The input value is the sum of incoming activation. The resting value is the minimal activation value, i.e., the degree of activation that is independent from current or previous input. The activation value is either equal to the sum of input and some fraction of the previous activation, which is determined by the preservation function or it is equal to the resting value, whichever is greater. ai+ 1 = max{r, i i +f(a/)}. In this simple model the output is equal to the activation. The weights of the links l are factors such that 0 < l < 1. If a link goes from unit Ul to unit u2, it contributes an activation of l*aul to the input of u2. 4 Conclusion and future research Strategies are proposed for combining declarative linguistic knowledge bases with an additional layer of control information. The unification grammar itself remains declarative. The grammar also retains completeness. It is the processing model that uses the control information for ordering and pruning the search graph. However, if the control information is neglected or if all solutions are demanded and sought by backtracking, the same processing model can be used to obtain exactly those results derived without control information. Yet, if control is used to prune the search tree in such a way that the number of solutions is reduced, many observations about human linguistic performance some of which are mentioned in Section 1 can be simulated. 6The selected simple model is sufficient for illustrating the basic idea. Certainly more sophisticated eormectionist models will have to be employed for eognitively plausible simulation. One reason for the simple design of the net is the lack of a learning. Kt this time, no learning model has been worked out yet for the proposed type of spreading- activation nets. For the time being it is assumed that the weights are set by hand using linguistic knowledge, corpora, and association dictionaries. Criteria for selection among alternatives can be encoded. The smaller set of actively used constructions and lexemes is simply explained by the fact that for all the items in the knowledge base that are not actively used there are alternatives that have a higher preference. The controlled linguistic deduction approach offers a new view of the competence-performance distinction, which plays an important r61e in theoretical linguistics. 
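The unit update of Section 3.3 is compact enough to state directly. The decay (preservation) function and the toy priming topology below are assumptions, chosen only to show how activating a context unit raises the dynamic preference value of a related lexical reading.

    class Unit:
        def __init__(self, resting=0.0, preserve=lambda a: 0.5 * a):
            self.resting = resting          # resting value r
            self.preserve = preserve        # preservation function f
            self.activation = resting       # activation value a (= output)
            self.links = []                 # outgoing (target, weight), 0 < weight < 1

        def connect(self, target, weight):
            self.links.append((target, weight))

    def step(units):
        # One synchronous update:  a' = max(r, input + f(a)),
        # where a link of weight l from u1 contributes l * a_u1 to the target's input.
        inputs = {u: 0.0 for u in units}
        for u in units:
            for target, weight in u.links:
                inputs[target] += weight * u.activation
        for u in units:
            u.activation = max(u.resting, inputs[u] + u.preserve(u.activation))

    # Toy priming example: a 'sports context' unit feeds the sports reading of "ball".
    sports = Unit()
    ball_dance, ball_game = Unit(resting=1.0), Unit(resting=1.0)
    sports.connect(ball_game, 0.8)
    sports.activation = 5.0                  # e.g. "tennis player" was just read
    step([sports, ball_dance, ball_game])
    print(ball_game.activation, ball_dance.activation)   # 4.5 1.0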
Uncontrolled deduction cannot serve as a plausible performance model. On the other hand, the performance model extends beyond the processing model, it also includes the structuring of the knowledge base and control information that influence processing. Linguistic Processing Linguistic Knowledge ° °l • 5 ~ arametrizatio control °°t J .~_ ,= of deduction information -'#. ° 1 ~ J linguistic declarative '5~a. L deduction j grammar 5Eo • 0 Figure 2. A new view of the competence- performance distinction Since this paper reports about the first results from a new line of research, many questions remain open and demand further research. Other types of control need to be investigated in relation with the strategies proposed in this paper. Uszkoreit [1990], e.g., argues that functional uncertainty needs to be controlled in order to reduce the search space and at the same time simulate syntactic preferences in human processing. Unification grammar formalisms may be viewed as constraint languages in the spirit of constraint logic programming (CLP). Efficiency can be gained through appropriate strategies for delaying the evaluation of different constraint types. Such schemes for delayed evaluation of constraints have been implemented for LFG. They play an even greater role in the processing of Constraint Logic Grammars (CLG) [Balari et al. 1990]. The delaying scheme is a more sophisticated 243 method for the ordering of conjuncts. More research is needed in this area before the techniques of CLP/CLG can be integrated in a general model of controlled (linguistic) deduction. So far the weight of the links for preference assignment can only be assigned on the basis of association dictionaries as they have been compiled by psy- chologists. For nonlexieal links the grammar writer has to rely on a trial and error method. A training method for inducing the best conjunct order on the basis of failure potential was described in Section 2.1. The training problem, .ie., the problem of automatic induction of the best control information is much harder for disjunctions. Parallel to the method for conjunctions, during the training phase the success potential of a disjunct needs to be determined, i.e., the average number of contributions to successful derivations for a given number of inputs. The problem is much harder for assigning weights to links in the spreading-activation net employed for dynamic preference assignment. Hirst [1988] uses the structure of a semantic net for dynamic lexical disambiguation. Corresponding to their marker passing method a strategy should be developed that activates all supertypes of an activated type in decreasing quantity. Wherever activations meet, a mutual reinforcement of the paths, that is of the hypotheses occurs. Another topic for future research is the relationship betwccn control information and feature logic. What happens if, for instance, a disjunction is transformed into a conjunction using De Morgans law? The immediate reply is that control structures are only valid on a certain formulation of the grammar and not on its logically eqtfivalent syntactic variants. However, assume that a fraction of a statically or dynamically calculated fraction involving success potential sp and failure potentialfp is attached to every subterm. For disjuncts, sp is ¢fivided by fp, for conjuncts fp is divided bysp. De Morgans law yields an intuitive result if we assume that negation of a term causes the attached fraction to be inverted. 
More research needs to be carried out before one can even start to argue for or against a preservation of control information under logical equivalences. Head-driven or functor-driven deduction has proven very useful. In this approach the order of processing conjuncts has been fixed in order to avoid the logically perfect but much less effcient orderings in which the complement conjuncts in the phrase structure (e.g., in the value of the daughter feature) are processed before the head conjunct. This strategy could not be induced or learned using the simple ordering criteria that are merely based on failure and success. In order to induce the strategy from experience, the relative computational effort needs to be measured and compared for the logically equivalent orderings. Ongoing work is dedicated to the task of formulating well-known processing algorithms such as the Earley algorithm for parsing or the functor-driven approach for generation purely in terms of preferences among conjuncts and disjuncts. 244 References Balari, S. and L. Damas (1990) CLG(n): Constraint Logic Grammars. In COLING 90. Bresnan, J. (Ed.) (1982) The Mental Representation of Grammatical Relations. MIT Press, Cambridge, Mass. Feldman, LA. and D.H. Ballard (1982) Connectionist models and their Properties. In Cognitive Science, 6:205-254. Gazdar, G., E. Klein, G. K. Pullum, and I. A. Sag (1985) Generalized Phrase Structure Grammar. Harvard University Press, Cambridge, Mass. Gerdemann, D. and E. W. Hinrichs (1990) Functor- Driven Natural Language Generation with Categorial- Unification Grammars. In COLING 90. Hirst, G. (1988) Resolving Lexical Ambiguity Compu- tationaUy with Spreading Activation and Polaroid Words. In S.I. Small, G.W. Cottrell and M.K. Tanen- haus (eds.), Lexical Ambiguity Resolution, pp.73- 107. San Mateo: Morgan Kaufmann Publishers. Karttunen, L. (1986) Radical Lexicalism.Technical Report CSLI-86-66, CSLI - Stanford University. Kasper, R. and W. Rounds (1986) A Logical Semantics for Feature Structures. In Proceedings of the 24th ACL. Kay, M. (1984) Functional Unification Grammar: A Formalism for Machine Translation. In COLING 84. Kemke, C. (1988) Der Neuere Konnektionismus - Ein Uberblick. lmformatik-Spektrum, 11:143-162. Kogure, K. (1990) Strategic Lazy Incremental Copy Graph Unification. In COLING 90. Pollard, C. and I. A. Sag (1988) An Information-Based Syntax and Semantics, Volumed Fundamentals, CSLI Lecture Notes 13, CSLI, Stanford, CA. Reder, L.M., (1983) What kind of pitcher can a catcher f'dl? Effects of priming in sentence comprehension. In Journal of Verbal Learning and Verbal Behavior 22 (2):189-202. Rumelhart, D.E., G.E. Hinton and J.L. McClelland (1986) A general framework for parallel distributed processing. In Rumelhart, D.E., McClelland, J.L., and the PDP Research Group, editors, Parallel Distributed Processing, Explorations in the Microstructure of Cognition: Foundations, volume 1 pages 45-76. Cambridge, MA: MIT Press. Shieber, S. (1984) The Design of a Computer Language for Linguistic Information. In S. Shieber, L. Karttunen, and F. Pereira (Eds.) Notes from the Unification Underground. SRI Technical Note 327, SRI International, Menlo Park, CA. Shieber, S. M. (1986) An Introduction to Unification- Based Approaches to Grammar. CSLI Lecture Notes 4. CSLI, Stanford, CA. Shieber, S., H. Uszkoreit, J. Robinson, and M. Tyson (1983) The Formalism and Implementation of PATR- II. In Research on Interactive Acquisition and Use of Knowledge. SRI International, Menlo Park, CA. Shieber, S., G. 
van Noord, R.C. Moore, and F.C.N. Pereira (1990) Semantic-Head-Driven Generation In Computational Linguistics 16(1). Smolka, G. (1988) A Feature Logic with Subsorts. LILOG Report 33, IBM Germany, Stuttgart. Smolka, G., and H. AYt-Kaci (1987) Inheritance Hierarchies: Semantics and Unification. MCC Report AI-057-87, MCC Austin, TX. Swinney, D.A. (1979) Lexical Access during sentence comprehension: (Re)Consideration of context effects. In Journal of Verbal Learning and Verbal Behavior 18(6):645-659. Uszkoreit, H. (1986) Categorial Unification Grammars. In COLING 86, Bonn. Uszkoreit, H. (1988) From Feature Bundles to Abstract Data Types: New Directions in the Representation and Processing of Linguistic Knowledge. In A. Blaser (Ed.) Natural Language at the Computer. Springer, Berlin, Heidelberg, New York. Uszkoreit, H. (1990) ,Extraposition and Adjunct Attachment in Categorial Unification Grammar" In W. Bahner (Hrsg.) Proceedings of the XIVth International Congress of Linguists, August 1987, Akademie Verlag Berlin, DDR, 1990. 7_~evat, H., E. Klein, and J. Calder (1987) Unification Categorial Grammar. In Haddock, Klein, and Morrill (Eds.) Categorial Grammar, Unification Grammar, and Parsing. Edinburgh Working Papers in Cognitive Science, Vol.1. Centre for Cognitive Science, Univer- sity of Edinburgh, Edinburgh. 245
1991
31
FINITE-STATE APPROXIMATION OF PHRASE STRUCTURE GRAMMARS Fernando C. N. Pereira AT&T Bell Laboratories 600 Mountain Ave. Murray Hill, NJ 07974 Rebecca N. Wright Dept. of Computer Science, Yale University PO Box 2158 Yale Station New Haven, CT 06520 Abstract Phrase-structure grammars are an effective rep- resentation for important syntactic and semantic aspects of natural languages, but are computa- tionally too demanding for use as language mod- els in real-time speech recognition. An algorithm is described that computes finite-state approxi- mations for context-free grammars and equivalent augmented phrase-structure grammar formalisms. The approximation is exact for certain context- free grammars generating regular languages, in- cluding all left-linear and right-linear context-free grammars. The algorithm has been used to con- struct finite-state language models for limited- domain speech recognition tasks. 1 Motivation Grammars for spoken language systems are sub- ject to the conflicting requirements of language modeling for recognition and of language analysis for sentence interpretation. Current recognition algorithms can most directly use finite-state ac- ceptor (FSA) language models. However, these models are inadequate for language interpreta- tion, since they cannot express the relevant syntac- tic and semantic regularities. Augmented phrase structure grammar (APSG) formalisms, such as unification-based grammars (Shieber, 1985a), can express many of those regularities, but they are computationally less suitable for language mod- eling, because of the inherent cost of computing state transitions in APSG parsers. The above problems might be circumvented by using separate grammars for language modeling and language interpretation. Ideally, the recog- nition grammar should not reject sentences ac- ceptable by the interpretation grammar and it should contain as much as reasonable of the con- straints built into the interpretation grammar. However, if the two grammars are built indepen- dently, those goals are difficult to maintain. For this reason, we have developed a method for con- structing automatically a finite-state approxima- tion for an APSG. Since the approximation serves as language model for a speech-recognition front- end to the real parser, we require it to be sound in the sense that the it accepts all strings in the language defined by the APSG. Without qualifica- tion, the term "approximation" will always mean here "sound approximation." If no further constraints were placed on the closeness of the approximation, the trivial al- gorithm that assigns to any APSG over alpha- bet E the regular language E* would do, but of course this language model is useless. One pos- sible criterion for "goodness" of approximation arises from the observation that many interest- ing phrase-structure grammars have substantial parts that accept regular languages. That does not mean that the grammar rules are in the stan- dard forms for defining regular languages (left- linear or right-linear), because syntactic and se- mantic considerations often require that strings in a regular set be assigned structural descriptions not definable by left- or right-linear rules. A use- ful criterion is thus that if a grammar generates a regular language, the approximation algorithm yields an acceptor for that regular language. In other words, one would like the algorithm to be ex- act for APSGs yielding regular languages. 
1 While we have not proved that in general our method satisfies the above exactness criterion, we show in Section 3.2 that the method is exact for left-linear and right-linear grammars, two important classes of context-free grammars generating regular lan- guages. 1 At first sight, this requirement may be seen as conflict- ing with the undecidability of determining whether a CFG generates a regular language (Harrison, 1978). However, note that the algorithm just produces an approximation, but cannot say whether the approximation is exact. 246 2 The Algorithm Our approximation method applies to any context-free grammar (CFG), or any unification- based grammar (Shieber, 1985a) that can be fully expanded into a context-free grammar. 2 The re- sulting FSA accepts all the sentences accepted by the input grammar, and possibly some non- sentences as well. The current implementation accepts as input a form of unification grammar in which features can take only atomic values drawn from a speci- fied finite set. Such grammars can only generate context-free languages, since an equivalent CFG can be obtained by instantiating features in rules in all possible ways. The heart of our approximation method is an algorithm to convert the LR(0) characteristic ma- chine .Ad(G) (Aho and Ullman, 1977; Backhouse, 1979) of a CFG G into an FSA for a superset of the language L(G) defined by G. The characteris- tic machine for a CFG G is an FSA for the viable prefixes of G, which are just the possible stacks built by the standard shift-reduce recognizer for G when recognizing strings in L(G). This is not the place to review the character- istic machine construction in detail. However, to explain the approximation algorithm we will need to recall the main aspects of the construction. The states of .~4(G) are sets of dotted rules A ---* a . [3 where A ---, a/~ is some rule of G..A4(G) is the determinization by the standard subset construc- tion (Aho and Ullman, 1977) of the FSA defined as follows: • The initial state is the dotted rule ff ---, -S where S is the start symbol of G and S' is a new auxiliary start symbol. • The final state is S' --~ S.. • The other states are all the possible dotted rules of G. • There is a transition labeled X, where X is a terminal or nonterminal symbol, from dotted rule A -+ a. X~ to A --+ c~X.//. • There is an e-transition from A --~ a • B/~ to B --~ "7, where B is a nonterminal symbol and B -+ 7 a rule in G. 2Unification-based grammars not in this class would have to be weakened first, using techniques akin to those of Sato and Tamaki (1984), Shieber (1985b) and Haas (1989). I S' ->. S S ->. Ab A ->. A a A->. 1 Is'->s.] 'Aqk~ SA'>A'.ba Ja~[A.>Aa. j Figure 1: Characteristic Machine for G1 .A~(G) can be seen as the finite state control for a nondeterministic shift-reduce pushdown recog- nizer TO(G) for G. A state transition labeled by a terminal symbol z from state s to state s' licenses a shift move, pushing onto the stack of the recog- nizer the pair (s, z). Arrival at a state containing a completed dotted rule A --~ a. licenses a reduc- tion move. This pops from the stack as many pairs as the symbols in a, checking that the symbols in the pairs match the corresponding elements of a, and then takes the transition out of the last state popped s labeled by A, pushing (s, A) onto the stack. (Full definitions of those concepts are given in Section 3.) 
The basic ingredient of our approximation algo- rithm is the flattening of a shift-reduce recognizer for a grammar G into an FSA by eliminating the stack and turning reduce moves into e-transitions. It will be seen below that flattening 7~(G) directly leads to poor approximations in many interesting cases. Instead, .bq(G) must first be unfolded into a larger machine whose states carry information about the possible stacks of g(G). The quality of the approximation is crucially influenced by how much stack information is encoded in the states of the unfolded machine: too little leads to coarse ap- proximations, while too much leads to redundant automata needing very expensive optimization. The algorithm is best understood with a simple example. Consider the left-linear grammar G1 S---. Ab A---* Aa Je AJ(G1) is shown on Figure 1. Unfolding is not re- quired for this simple example, so the approximat- ing FSA is obtained from .Ad(G1) by the flatten- ing method outlined above. The reducing states in AJ(G1), those containing completed dotted rules, are states 0, 3 and 4. For instance, the reduction at state 4 would lead to a transition on nonter- 247 Figure 2: Flattened FSA 0 a Figure 3: Minimal Acceptor minal A, to state 2, from the state that activated the rule being reduced. Thus the corresponding e-transition goes from state 4 to state 2. Adding all the transitions that arise in this way we ob- tain the FSA in Figure 2. From this point on, the arcs labeled with nonterminals can be deleted, and after simplification we obtain the deterministic fi- nite automaton (DFA) in Figure 3, which is the minimal DFA for L(G1). If flattening were always applied to the LR(0) characteristic machine as in the example above, even simple grammars defining regular languages might be inexactly approximated by the algo- rithm. The reason for this is that in general the reduction at a given reducing state in the char- acteristic machine transfers to different states de- pending on context. In other words, the reducing state might be reached by different routes which use the result of the reduction in different ways. Consider for example the grammar G2 S ~ aXa ] bXb X -'* c which accepts just the two strings aca and bcb. Flattening J~4(G2) will produce an FSA that will also accept acb and bca, an undesirable outcome. The reason for this is that the e-transitions leav- ing the reducing state containing X ~ c. do not distinguish between the different ways of reach- ing that state, which are encoded in the stack of One way of solving the above problem is to un- fold each state of the characteristic machine into a set of states corresponding to different stacks at that state, and flattening the corresponding recog- nizer rather than the original one. However, the set of possible stacks at a state is in general infi- nite. Therefore, it is necessary to do the unfolding not with respect to stacks, but with respect to a finite partition of the set of stacks possible at the state, induced by an appropriate equivalence rela- tion. The relation we use currently makes two stacks equivalent if they can be made identical by collapsing loops, that is, removing portions of stack pushed between two arrivals at the same state in the finite-state control of the shift-reduce recognizer. 
The purpose of collapsing loops is to ~forget" stack segments that may be arbitrarily repeated, s Each equivalence class is uniquely de- fined by the shortest stack in the class, and the classes can be constructed without having to con- sider all the (infinitely) many possible stacks. 3 Formal Properties In this section, we will show here that the approx- imation method described informally in the pre- vious section is sound for arbitrary CFGs and is exact for left-linear and right-linear CFGs. In what follows, G is a fixed CFG with termi- nal vocabulary ~, nonterminal vocabulary N, and start symbol S; V = ~ U N. 3.1 Soundness Let J~4 be the characteristic machine for G, with state set Q, start state so, set of final states F, and transition function ~ : S x V --* S. As usual, transition functions such as 6 are extended from input symbols to input strings by defining 6(s, e) -- s and 6is , a/~) = 5(6(s, a),/~). The shift-reduce recognizer 7~ associated to A4 has the same states, start state and final states. Its configurations are triples Is, a, w) of a state, a stack and an input string. The stack is a sequence of pairs / s, X) of a state and a symbol. The transitions of the shift- reduce recognizer are given as follows: Shift: is, a, zw) t- (s', a/s, z), w) if 6(s, z) = s' Reduce: is, err, w) ~- /5( s', A), cr/s', A/, w) if ei- ther (1) A --~ • is a completed dotted rule 3Since possible stacks can be shown to form a regular language, loop collapsing has a direct connection to the pumping lemma for regular languages. 248 in s, s" = s and r is empty, or (2) A X1...Xn. is a completed dotted rule in s, T = is1, Xl) .. .(sn,Xn) and s" = 81. The initial configurations of ~ are (so, e, w} for some input string w, and the final configurations are ( s, (so, S), e) for some state s E F. A deriva- tion of a string w is a sequence of configura- tions c0,...,cm such that c0 = (s0,e,w), c,~ = ( s, (so, S), e) for some final state s, and ei-1 l- ci for l<i<n. Let s be a state. We define the set Stacks(s) to contain every sequence (s0,X0)... (sk,Xk) such that si = 6(si-l,Xi-1),l < i < k and s = 6(st, Xk). In addition, Stacks(s0) contains the empty sequence e. By construction, it is clear that if ( s, a, w) is reachable from an initial configura- tion in ~, then o- E Stacks(s). A stack congruence on 7¢ is a family of equiv- alence relations _=o on Stacks(s) for each state s E 8 such that if o- =, a' and/f(s, X) = d then o-(s,X} =,, ,r(s,X). A stack congruence ---- par- titions each set Stacks(s) into equivalence classes [<r]° of the stacks in Stacks(s) equivalent to o- un- der --_,. Each stack congruence - on ~ induces a cor- responding unfolded recognizer 7~-. The states of the unfolded recognizer axe pairs i s, M,), notated more concisely as [~]°, of a state and stack equiv- alence class at that state. The initial state is [e],o, and the final states are all [o-]° with s E F and o- E Stacks(s). The transition function 6- of the unfolded recognizer is defined by t-([o-]', x) = [o-is, x)] '(''x) That this is well-defined follows immediately from the definition of stack congruence. The definitions of dotted rules in states, config- urations, shift and reduce transitions given above carry over immediately to unfolded recognizers. Also, the characteristic recognizer can also be seen as an unfolded recognizer for the trivial coarsest congruence. 
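The definitions in the last few paragraphs have been badly garbled in transcription; the following LaTeX restatement is a reconstruction of how I read them and should be checked against the original.

% Transition functions extend to strings: \delta(s,\epsilon)=s and
% \delta(s,a\beta)=\delta(\delta(s,a),\beta).  Configurations of the
% shift-reduce recognizer are triples (state, stack, remaining input),
% the stack being a sequence of (state, symbol) pairs.  Initial
% configurations are (s_0,\epsilon,w); final ones are (s,\langle s_0,S\rangle,\epsilon)
% with s a final state.
\begin{align*}
\textit{Shift:}\quad & (s,\ \sigma,\ zw) \vdash (s',\ \sigma\langle s,z\rangle,\ w)
      \quad \text{if } \delta(s,z)=s' \\
\textit{Reduce:}\quad & (s,\ \sigma\tau,\ w) \vdash (\delta(s',A),\ \sigma\langle s',A\rangle,\ w)
      \quad \text{if either} \\
  & \text{(1) } A \to \cdot \text{ is a completed dotted rule in } s,\ s'=s \text{ and } \tau \text{ is empty, or} \\
  & \text{(2) } A \to X_1\cdots X_n\,\cdot \text{ is a completed dotted rule in } s,\
      \tau = \langle s_1,X_1\rangle\cdots\langle s_n,X_n\rangle \text{ and } s'=s_1.
\end{align*}
\begin{align*}
\mathrm{Stacks}(s) \ni\ & \langle s_1,X_1\rangle\cdots\langle s_k,X_k\rangle
      \quad\text{whenever } s_{i+1}=\delta(s_i,X_i)\ (1\le i<k)
      \text{ and } s=\delta(s_k,X_k), \\
  & \text{and } \epsilon \in \mathrm{Stacks}(s_0). \\
\sigma \equiv_s \sigma' \ \wedge\ \delta(s,X)=s' \ &\Longrightarrow\
      \sigma\langle s,X\rangle \equiv_{s'} \sigma'\langle s,X\rangle
      \qquad\text{(stack congruence)} \\
\delta_{\equiv}([\sigma]^s, X) \ &=\ [\sigma\langle s,X\rangle]^{\delta(s,X)}
      \qquad\text{(unfolded transition function)}
\end{align*}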
Unfolding a characteristic recognizer does not change the language accepted: Proposition 1 Let G be a CFG, 7~ its charac- teristic recognizer with transition function ~, and = a stack congruence on T¢. Then the unfolded recognizer ~=_ and 7~ are equivalent recognizers. Proof: We show first that any string w accepted by T¢--- is accepted by 7~. Let do,...,dm be a derivation of w in ~=. Each di has the form di = ([P/]", o'i, ul), and can be mapped to an T¢ configuration di = (sl, 8i, ul), where £ = E and ((s, C), X) = 8i s, X). It is straightforward to ver- ify that do,..., d,, is a derivation of w in ~. Conversely, let w E L(G), and c0,...,em be a derivation of w in 7~, with ci = isl,o-i, ui). We define el = ([~ri] s~, hi, ui), where ~ = e and o-is, x) = aito-]', x). If ci-1 P ci is a shift move, then ui-1 = zui and 6(si-l, z) = si. Therefore, 6-@,_,]"-',~) = [o-~-,(s~-,,~)]~("-'") = [o-,]', Furthermore, ~ = o-~- l(S,- 1, ~) = ~,-1 ([o-,- 1]"-', ~) Thus we have ~',-x = ([o-l-d"-',ai-x,*u,) ~, = @d",e~-l(P~-d"-',*),~'~) with 6_=([o-i-1]"-', z) = [o-i]". Thus, by definition of shift move, 6i-1 I- 6i in 7¢_--. Assume now that ei-1 I- ci is a reduce move in ~. Then ui = ui-1 and we have a state s in 7~, a symbol A E N, a stack o- and a sequence r of state-symbol pairs such that si = 6(s,A) o-i-1 = o"1" o-, = o-(s,a) and either (a) A --* • is in si-t, s = si-1 and r = e, or (b) A ---, XI...Xn. is in si-1 , r = (ql, Xd... (q., X.) and s = ql- Let ~ = [o-]*. Then 6=(~,A) = [o-(s,A)p0,A) = [o-d" We now define a pair sequence ~ to play the same role in 7~- as r does in ~. In case (a) above, ~ = e. Otherwise, let rl = e and ri = ri-l(qi-l,Xi-1) for 2 < i ( n, and define ~ by = ([d q', xl)... @hi q', xi) • • • ([~.p-, x.) Then O'i-- 1 --~- 0"7" = o-(q1,X1)...(q.-x,x.-x) 249 Thus x.) -- ¢r(q~,X,}...(qi-hXi-l) xd--. x.) = = a([d',A) = a(#,A) ~i = (~f=(&A),a(~,A),ui) which by construction of e immediately entails that ~_ 1 ~- Ci is a reduce move in ~=. fl For any unfolded state p, let Pop(p) be the set of states reachable from p by a reduce transition. More precisely, Pop(p) contains any state pl such that there is a completed dotted rule A --* (~. in p and a state pll such that 6-(p I~, ~) - p and 6-(f*,A) -- f. Then the flattening ~r= of~- is a nondeterministic FSA with the same state set, start state and final states as ~- and nondeter- ministic transition function @= defined as follows: • If 6=(p,z) - pt for some z E E, then f E • If p~ E Pop(p) then f E ~b=(p, ~). Let co,..., cm be a derivation of string w in ~, and put ei -- (q~,~q, wl), and p~ = [~]~'. By construction, if ci_~ F ci is a shift move on z (wi-x -- zw~), then 6=(pi-l,Z) = Pi, and thus p~ ~ ~-(p~_~, z). Alternatively, assume the transi- tion is a reduce move associated to the completed dotted rule A --* a.. We consider first the case a ~ ~. Put a -- X1... X~. By definition of reduce move, there is a sequence of states rl,..., r~ and a stack # such that o'i-x = ¢(r~, X1)... (rn, Xn), qi -- #(r~,A), 5(r~,A) = qi, and 5(rj,X1) - ri+~ for 1 ~ j < n. By definition of stack congruence, we will then have = where rx = • and rj = (r~,X,)...(r~-x,X~-,) for j > 1. Furthermore, again by definition of stack congruence we have 6=([cr] r*, A) = Pi. Therefore, Pi 6 Pop(pi_l) and thus pi e ~_--(pi-x,•). A sim- ilar but simpler argument allows us to reach the same conclusion for the case a = e. Finally, the definition of final state for g= and ~r__ makes Pm a final state. Therefore the sequence P0,.-.,Pm is an accepting path for w in ~r_. 
We have thus proved Proposition 2 For any CFG G and stack con- gruence =_ on the canonical LR(0) shift-reduce rec- ognizer 7~(G) of G, L(G) C_ L(~r-(G)), where ~r-(G) is the flattening of ofT~(G)--. Finally, we should show that the stack collaps- ing equivalence described informally earlier is in- deed a stack congruence. A stack r is a loop if '/" -" (81, X1)... (sk, Xk) and 6(sk, Xt) = sz. A stack ~ collapses to a stack ~' if cr = pry, cr ~ = pv and r is a loop. Two stacks are equivalent if they can be collapsed to the same stack. This equiv- alence relation is closed under suffixing, therefore it is a stack congruence. 3.2 Exactness While it is difficult to decide what should be meant by a "good" approximation, we observed earlier that a desirable feature of an approximation algo- rithm would be that it be exact for a wide class of CFGs generating regular languages. We show in this section that our algorithm is exact both for left-linear and for right-linear context-free gram- mars, which as is well-known generate regular lan- guages. The proofs that follow rely on the following ba- sic definitions and facts about the LR(0) construc- tion. Each LR(0) state s is the closure of a set of a certain set of dotted rules, its core. The closure [R] of a set R of dotted rules is the smallest set of dotted rules containing R that contains B --~ "7 whenever it contains A --~ a • Bfl and B ---* 7 is in G. The core of the initial state so contains just the dotted rule ff ~ .S. For any other state s, there is a state 8 ~ and a symbol X such that 8 is the closure of the set core consisting of all dotted rules A ~ aX./~ where A --* a. X/~ belongs to s'. 3.3 Left-Linear Grammars In this section, we assume that the CFG G is left- linear, that is, each rule in G is of the form A B/~ or A --+/~, where A, B E N and/3 E ~*. Proposition 3 Let G be a left-linear CFG, and let gz be the FSA produced by the approximation algorithm from G. Then L(G) = L(3r). Proof: By Proposition 2, L(G) C. L(.~'). Thus we need only show L(~) C_ L(G). The proof hinges on the observation that each state s of At(G) can be identified with a string E V* such that every dotted rule in s is of the formA ~ ~.a for some A E N and c~ E V*. 250 Clearly, this is true for so = [S' --* .S], with ~0 = e. The core k of any other state s will by construction contain only dotted rules of the form A ~ a. with a ~ e. Since G is left linear, /3 must be a terminal string, ensuring that s = [h]. There- fore, every dotted rule A --* a. f in s must result from dotted rule A ~ .aft in so by the sequence of transitions determined by a (since ¢tq(G) is de- terministic). This means that if A ~ a. f and A' --* a'. fl' are in s, it must be the case that a - a ~. In the remainder of this proof, let ~ = s whenever a = ~. To go from the characteristic machine .M(G) to the FSA ~', the algorithm first unfolds Ad(G) us- ing the stack congruence relation, and then flat- tens the unfolded machine by replacing reduce moves with e-transitions. However, the above ar- gument shows that the only stack possible at a state s is the one corresponding to the transitions given by $, and thus there is a single stack con- gruence state at each state. Therefore, .A4(G) will only be flattened, not unfolded. Hence the transition function ¢ for the resulting flattened automaton ~" is defined as follows, where a E N~* U ]~*,a E ~, and A E N: (a) ¢(~,a) = {~} (b) ¢(5, e) = {.4 I A --, a e G} The start state of ~" is ~. The only final state is S. 
We will establish the connection between Y~ derivations and G derivations. We claim that if there is a path from ~ to S labeled by w then ei- ther there is a rule A --* a such that w = xy and S :~ Ay =~ azy, or a = S and w = e. The claim is proved by induction on Iw[. For the base case, suppose. [w I = 0 and there is a path from & to .~ labeled by w. Then w = e, and either a - S, or there is a path of e-transitions from ~ to S. In the latter case, S =~ A =~ e for some A E N and rule A --~ e, and thus the claim holds. Now, assume that the claim is true for all Iwl < k, and suppose there is a path from & to ,~ labeled w I, for some [wl[ = k. Then w I - aw for some ter- minal a and Iw[ < k, and there is a path from ~-~ to S labeled by w. By the induction hypothesis, S =~. Ay =~ aaz'y, where A --.* aaz ~ is a rule and zly - w (since aa y£ S). Letting z -- ax I, we have the desired result. If w E L(~), then there is a path from ~ to labeled by w. Thus, by claim just proved, S =~ Ay ::~ :cy, where A ~ • is a rule and w = ~y (since e # S). Therefore, S =~ w, so w ~ L(G), as desired. 3.4 Right-Linear Grammars A CFG G is right linear if each rule in G is of the form A --~ fB or A --* /3, where A, B E N and Proposition 4 Let G be a right-linear CFG and 9 e be the unfolded, flattened automaton produced by the approximation algorithm on input G. Then L(G) = L(Yz). Proof: As before, we need only show L(~') C L(G). Let ~ be the shift-reduce recognizer for G. The key fact to notice is that, because G is right-linear, no shift transition may follow a reduce transition. Therefore, no terminal transition in 3 c may follow an e-transition, and after any e-transition, there is a sequence of G-transitions leading to the final state [$' --* S.]. Hence ~" has the following kinds of states: the start state, the final state, states with terminal transitions entering or leaving them (we call these reading states), states with e-transitions entering and leaving them (prefinal states), and states with terminal transitions entering them and e-transitions leaving them (cr0ssover states). Any accepting path through ~" will consist of a se- quence of a start state, reading states, a crossover state, prefinal states, and a final state. The excep- tion to this is a path accepting the empty string, which has a start state, possibly some prefinal states, and a final state. The above argument also shows that unfolding does not change the set of strings accepted by ~, because any reduction in 7~= (or e-transition in jc), is guaranteed to be part of a path of reductions (e-transitions) leading to a final state of 7~_- (~). Suppose now that w = w: ... wn is accepted by ~'. Then there is a path from the start state So through reading states sl,..., s,,-1, to crossover state sn, followed by e-transitions to the final state. We claim that if there there is a path from sl to sn labeled wi+l...wn, then there is a dot- ted rule A ---* x • yB in si such B :~ z and yz = w~+1...wn, where A E N,B E NU~*,y,z ~ ~*, and one of the following holds: (a) z is a nonempty suffix of wt... wi, (b) z = e, A" =~ A, A' --* z'. A" is a dotted rule in sl, and z t is a nonempty suffix ofT1 ...wi, or (c) z=e, si=s0, andS=~A. We prove the claim by induction on n - i. For the base case, suppose there is an empty path from 251 Sn to s,. Because sn is the crossover state, there must be some dotted rule A ~ x. in sn. Letting y = z = B = e, we get that A ---* z. yB is a dotted rule of s, and B = z. The dotted rule A --', z. 
yB must have either been added to 8n by closure or by shifts. If it arose from a shift, z must be a nonempty suffix of wl ...wn. If the dotted rule arose by closure, z = e, and there is some dotted rule A ~ --~ z t • A" such that A" =~ A and ~l is a nonempty suffix of Wl ... wn. Now suppose that the claim holds for paths from si to sn, and look at a path labeled wi...wn from si-1 to sn. By the induction hypothesis, A ~ z • yB is a dotted rule of st, where B =~ z, uz = wi+l...wn, and (since st ~ s0), either z is a nonempty suffix of wl ... wi or z = e, A ~ -. z ~. A" is a dotted rule of si, A" :~ A, and z ~ is a nonempty suffix of wl ... wl. In the former case, when z is a nonempty suffix of wl ... wl, then z = wj ... wi for some 1 < j < i. Then A ---, wj ...wl • yB is a dotted rule of sl, and thus A ---* wj ...wi-1 • wiyB is a dotted rule ofsi_l. Ifj < i- 1, then wj...wi_l is a nonempty suffix of wl...wi-1, and we are done. Otherwise, wj ...wi-1 = e, and so A --* .wiyB is a dotted rule ofsi-1. Let y~ = wiy. Then A ~ .yJB is a dotted rule of si-1, which must have been added by closure. Hence there are nonterminals A I and A" such that A" :~ A and A I ~ z I • A" is a dotted rule of st-l, where z ~ is a nonempty sUtTLX of Wl .. • wi- 1. In the latter case, there must be a dotted rule A ~ ~ wj ...wi-1 • wiA" in si-1. The rest of the conditions are exactly as in the previous case. Thus, if w - wl...wn is accepted by ~c, then there is a path from so to sn labeled by wl ... w,. Hence, by the claim just proved, A ~ z. yB is a dotted rule of sn, and B :~ z, where yz -" wl...wa -- w. Because the st in the claim is so, and all the dotted rules of si can have nothing before the dot, and z must be the empty string. Therefore, the only possible case is case 3. Thus, S :~ A ---, yz = w, and hence w E L(G). The proof that the empty string is accepted by ~" only if it is in L(G) is similar to the proof of the claim. D 4 A Complete Example The appendix shows an APSG for a small frag- ment of English, written in the notation accepted by the current version of our grammar compiler. The categories and features used in the grammar are described in Tables 1 and 2 (categories without features are omitted). Features enforce person- number agreement, personal pronoun case, and a limited verb subcategorization scheme. Grammar compilation has three phrases: (i) construction of an equivalent CFG, (ii) approxi- mation, and (iii) determinization and minimiza- tion of the resulting FSA. The equivalent CFG is derived by finding all full instantiations of the ini- tial APSG rules that are actually reachable in a derivation from the grammar's start symbol. In the current implementation, the construction of the equivalent CFG is is done by a Prolog pro- gram, while the approximator, determinizer and minimizer are written in C. For the example grammar, the equivalent CFG has 78 nonterminals and 157 rules, the unfolded and flattened FSA 2615 states and 4096 transi- tions, and the determinized and minimized final DFA 16 states and 97 transitions. The runtime for the whole process is 4.91 seconds on a Sun SparcStation 1. Substantially larger grammars, with thousands of instantiated rules, have been developed for a speech-to-speech translation project. Compilation times vary widely, but very long compilations ap- pear to be caused by a combinatorial explosion in the unfolding of right recursions that will be dis- cussed further in the next section. 
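Phase (iii) of the compilation, determinization and minimization, uses entirely standard constructions. For completeness, here is a subset-construction sketch (my own code, runnable on the output of the flatten sketch given earlier) that removes the epsilon-transitions and determinizes; minimization can then be done with any textbook algorithm such as Hopcroft's and is omitted here.

from collections import defaultdict, deque

def determinize(term, eps, start, finals):
    """Epsilon-removal and subset construction over the flattened machine.
    term   : set of (state, terminal, state) transitions
    eps    : set of (state, state) epsilon transitions
    Returns the DFA start state, its transition map and its final states."""
    eps_next = defaultdict(set)
    for a, b in eps:
        eps_next[a].add(b)

    def eclose(states):
        todo, out = list(states), set(states)
        while todo:
            s = todo.pop()
            for t in eps_next[s]:
                if t not in out:
                    out.add(t)
                    todo.append(t)
        return frozenset(out)

    arcs = defaultdict(dict)                       # state -> {terminal: targets}
    for a, x, b in term:
        arcs[a].setdefault(x, set()).add(b)

    q0 = eclose({start})
    dfa_trans, dfa_finals = {}, set()
    seen, queue = {q0}, deque([q0])
    while queue:
        q = queue.popleft()
        if q & finals:
            dfa_finals.add(q)
        for x in {x for s in q for x in arcs[s]}:
            tgt = eclose({b for s in q for b in arcs[s].get(x, ())})
            dfa_trans[(q, x)] = tgt
            if tgt not in seen:
                seen.add(tgt)
                queue.append(tgt)
    return q0, dfa_trans, dfa_finals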
5 Informal Analysis In addition to the cases of left-linear and right- linear grammars discussed in Section 3, our algo- rithm is exact in a variety of interesting cases, in- cluding the examples of Church and Patil (1982), which illustrate how typical attachment ambigu- ities arise as structural ambiguities on regular string sets. The algorithm is also exact for some self- embedding grammars 4 of regular languages, such as S --+ aS l Sb l c defining the regular language a*eb*. A more interesting example is the following sim- plified grammar for the structure of English noun 4 A grammar is self-embedding if and only if licenses the derivation X ~ c~X~ for nonempty c~ and/3. A language is regular if and only if it can be described by some non- self-embedding grammar. 252 Figure 4: Acceptor for Noun Phrases phrases: NP -+ Det Nom [ PN Det -+ Art ] NP's Nom -+ N I Nom PP J Adj Nom PP --* P NP The symbols Art, N, PN and P correspond to the parts of speech article, noun, proper noun and preposition. From this grammar, the algorithm derives the DFA in Figure 4. As an example of inexact approximation, con- sider the the self-embedding CFG S -+ aSb I ~ for the nonregular language a'~b'~,n > O. This grammar is mapped by the algorithm into an FSA accepting ~ I a+b+. The effect of the algorithm is thus to "forget" the pairing between a's and b's mediated by the stack of the grammar's charac- teristic recognizer. Our algorithm has very poor worst-case perfor- mance. First, the expansion of an APSG into a CFG, not described here, can lead to an exponen- tial blow-up in the number of nonterminals and rules. Second, the subset calculation implicit in the LR(0) construction can make the number of states in the characteristic machine exponential on the number of CF rules. Finally, unfolding can yield another exponential blow-up in the number of states. However, in the practical examples we have con- sidered, the first and the last problems appear to be the most serious. The rule instantiation problem may be allevi- ated by avoiding full instantiation of unification grammar rules with respect to "don't care" fea- tures, that is, features that are not constrained by the rule. The unfolding problem is particularly serious in grammars with subgrammars of the form S -+ XIS I"" J X,,S J Y (I) It is easy to see that the number of unfolded states in the subgrammar is exponential in n. This kind of situation often arises indirectly in the expan- sion of an APSG when some features in the right- hand side of a rule are unconstrained and thus lead to many different instantiated rules. In fact, from the proof of Proposition 4 it follows immedi- ately that unfolding is unnecessary for right-linear grammars. Ultimately, by dividing the gram- mar into non-mutually recursive (strongly con- nected) components and only unfolding center- embedded components, this particular problem could he avoided, s In the meanwhile, the prob- lem can be circumvented by left factoring (1) as follows: S -+ Z S [ Y z-+x, I...IX. 6 Related Work and Conclu- sions Our work can be seen as an algorithmic realization of suggestions of Church and Patil (1980; 1982) on algebraic simplifications of CFGs of regular lan- guages. Other work on finite state approximations of phrase structure grammars has typically re- lied on arbitrary depth cutoffs in rule application. 
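The left-factoring transformation just displayed is mechanical. A possible rendering follows (a sketch under the assumptions that each recursive alternative is a single symbol Xi followed by S, as in the schema above, and that the fresh nonterminal name does not occur anywhere else in the grammar).

def left_factor(rules, lhs, fresh="Z"):
    """Left-factor rules of the shape S -> X1 S | ... | Xn S | Y into
    S -> Z S | Y and Z -> X1 | ... | Xn.
    rules : list of right-hand sides (tuples of symbols) for lhs
    fresh : a new nonterminal assumed not to occur elsewhere"""
    recursive = [r for r in rules if len(r) == 2 and r[1] == lhs]
    other     = [r for r in rules if not (len(r) == 2 and r[1] == lhs)]
    if len(recursive) < 2:
        return {lhs: list(rules)}                  # nothing worth factoring
    return {lhs: [(fresh, lhs)] + other,
            fresh: [(r[0],) for r in recursive]}

# left_factor([("a", "S"), ("b", "S"), ("c",)], "S")
#   ==> {"S": [("Z", "S"), ("c",)], "Z": [("a",), ("b",)]}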
While this is reasonable for psycholinguistic mod- eling of performance restrictions on center embed- ding (Pulman, 1986), it does not seem appropriate for speech recognition where the approximating FSA is intended to work as a filter and not re- ject inputs acceptable by the given grammar. For instance, depth cutoffs in the method described by Black (1989) lead to approximating FSAs whose language is neither a subset nor a superset of the language of the given phrase-structure grammar. In contrast, our method will produce an exact FSA for many interesting grammars generating regular languages, such as those arising from systematic attachment ambiguities (Church and Patil, 1982). It important to note, however, that even when the result FSA accepts the same language, the origi- nal grammar is still necessary because interpreta- SWe have already implemented a version of the algo- rithm that splits the grammar into strongly connected com- ponents, approximates and minimizes separately each com- ponent and combines the results, but the main purpose of this version is to reduce approximation and determinization costs for some grmmmars. 253 tion algorithms are generally expressed in terms of phrase structures described by that grammar, not in terms of the states of the FSA. Although the algorithm described here has mostly been adequate for its intended applica- tion -- grammars sufficiently complex not to be approximated within reasonable time and space bounds usually yield automata that are far too big for our current real-time speech recognition hardware -- it would be eventually of interest to handle right-recursion in a less profligate way. In a more theoretical vein, it would also be interesting to characterize more tightly the class of exactly approximable grammars. Finally, and most spec- ulatively, one would like to develop useful notions of degree of approximation of a language by a reg- ular language. Formal-language-theoretic notions such as the rational index (Boason et al., 1981) or probabilistic ones (Soule, 1974) might be prof- itably investigated for this purpose. Acknowledgments We thank Mark Liberman for suggesting that we look into finite-state approximations and Pedro Moreno, David Roe, and Richard Sproat for try- ing out several prototypes of the implementation and supplying test grammars. References Alfred V. Aho and Jeffrey D. Ullman. 1977. Princi. pies of Compiler Design. Addison-Wesley, Reading, Massachusetts. Roland C. Backhouse. 1979. Syntaz o] Programming Languages--Theorll and Practice. Series in Com- puter Science. Prentice-Hall, Englewood Cliffs, New Jersey. Alan W. Black. 1989. Finite state machines from fea- ture grammars. In Masaru Tomita, editor, Inter. national Workshop on Parsing Technologies, pages 277-285, Pittsburgh, Pennsylvania. Carnegie Mel- lon University. Luc Boason, Bruno Courcelle, and Maurice Nivat. 1981. The rational index: a complexity measure for languages. SIAM Journal o] Computing, 10(2):284- 296. Kenneth W. Church and Ramesh Patil. 1982. Coping with syntactic ambiguity or how to put the block in the box on the table. Computational Linguistics, 8(3--4):139-149. Kenneth W. Church. 1980. On memory ]imitations in • natural language processing. Master's thesis, M.I.T. Published as Report MIT/LCS/TR-245. Andrew Haas. 1989. A parsing algorithm for unification grammar. Computational Linguistics, 15(4):219-232. Michael A. Harrison. 1978. Introduction to Formal Language Theor~l. Addison-Wesley, Reading, Mas- sachussets. Steven G. Pulman. 1986. 
Grammars, parsers, and memory limitations. Language and Cognitive Pro- cesses, 1(3):197-225. Taisuke Sato and Hisao Tamaki. 1984. Enumeration of success patterns in logic programs. Theoretical Computer Science, 34:227-240. Stuart M. Shieber. 1985a. An Introduction to Unification-Based Approaches to Grammar. Num- ber 4 in CSLI Lecture Notes. Center for the Study of Language and Information, Stanford, California. Distributed by Chicago University Press. Stuart M. Shieber. 1985b. Using restriction to ex- tend parsing algorithms for complex-feature-based formalisms. In ~3rd Annual Meeting of the Asso- ciation ]or Computational Linguistics, pages 145- 152, Chicago, Illinois. Association for Computa- tionai Linguistics, Morristown, New Jersey. Stephen Soule. 1974. Entropies of probabilistic gram- mars. In]ormation and Control, 25:57-74. Appendix APSG Formalism and Example Nonterminal symbols (syntactic categories) may have features that specify variants of the category (eg. sin- gular or plural noun phrases, intransitive or transitive verbs). A category cat with feature constraints is writ- ten cat# [ca, • • •, em3. Feature constraints for feature f have one of the forms .f = ,, (2) ] = c (3) .f = (c~ ..... c.) (4) where v is a variable name (which must be capitalized) and c, cl,..., c, are feature values. All occurrences of a variable v in a rule stand for the same unspecified value. A constraint with form (2) specifies a feature as having that value. A constraint of form (3) specifies an actual value for a feature, and a constraint of form (4) specifies that a feature may have any value from the specified set of values. The symbol "!" appearing as the value of a feature in the right-hand side of a rule indicates that that feature must have the same value as the feature of the same name of the category in the left-hand side of the rule. This notation, as well as variables, can be used to en- force feature agreement between categories in a rule, ¢ 254 Symbol Category Features s sentence np vp args det n pron V noun phrase verb phrase verb arguments determiner noun pronoun verb n (number), p (person) n, p, c (case) n, p, t (verb type) t n n n, p, C n, p, t Table 1: Categories of Example Grammar Feature n' (number) p (person) c (case) t (verb type) Values s (singular), p (plural) ! (first), 2 (second), 3 (third) s (subject), o (nonsubject) i (intransitive), t (transitive), d (ditransitive) Table 2: Features of Example Grammar for instance, number agreement between Subject and verb. It is convenient to declare the features and possible values of categories with category declarations appear- ing before the grammar rules. Category declarations have the form cat CatS[ /1 = (Vll .... ,V2kl), ..o, fm = (vml .... ,Vmk,) ]. giving all the possible values of all the features for the category. The declaration start cat. declares cat as the start symbol of the grammar. In the grammar rules, the symbol "'" prefixes ter- minal symbols, commas are used for sequencing and [" for alternation. start s. cat sg[n=Cs,p),p=(1,2,3)]. cat npg[n=(s,p) ,p=(1,2,3) ,c=(s,o)]. cat vpg[n=(s,p) ,l>=(1,2,3),type=(i,t,d)]. cat argsg[type=(i.t,d)]. cat detg[n=(s,p)]. cat ng[n=(s,p)]. cat prong[n=(s,p),p=(1,2,3),c=(s,o)]. cat vg[n-(s,p),p=(1,2,3),type=(i,t,d)]. s => npg[n=! ,pffi! ,c=s], vpg[n=! ,p=!]. npg[p=3] => detg[n=!], adjs, ng[n=!]. nl~[n=s,p-3] -> pn. np => prong In= !, p= !, c= ! ]. prong [n=s,p-1, c=s] => ' i. prong [p=2] => ' you. prong[n=s,p=3,c=s] => 'he I 'she. prong[n-s,p-3] => 'it. prong[nffip,l~l,c-s] => 'vs. 
pron#[n=p,p=3,c=s] => 'they.
pron#[n=s,p=1,c=o] => 'me.
pron#[n=s,p=3,c=o] => 'him | 'her.
pron#[n=p,p=1,c=o] => 'us.
pron#[n=p,p=3,c=o] => 'them.
vp => v#[n=!,p=!,type=!], args#[type=!].
adjs => [].
adjs => adj, adjs.
args#[type=i] => [].
args#[type=t] => np#[c=o].
args#[type=d] => np#[c=o], 'to, np#[c=o].
pn => 'tom | 'dick | 'harry.
det => 'some | 'the.
det#[n=s] => 'every | 'a.
det#[n=p] => 'all | 'most.
n#[n=s] => 'child | 'cake.
n#[n=p] => 'children | 'cakes.
adj => 'nice | 'sweet.
v#[n=s,p=3,type=i] => 'sleeps.
v#[n=p,type=i] => 'sleep.
v#[n=s,p=(1,2),type=i] => 'sleep.
v#[n=s,p=3,type=t] => 'eats.
v#[n=p,type=t] => 'eat.
v#[n=s,p=(1,2),type=t] => 'eat.
v#[n=s,p=3,type=d] => 'gives.
v#[n=p,type=d] => 'give.
v#[n=s,p=(1,2),type=d] => 'give.
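Step (i) of the compiler expands such APSG rules into a plain CFG by instantiating features in all admissible ways. The sketch below shows the idea for a simplified notation that covers constants and shared capitalised variables but not the "!" and alternative-value shorthands; the data layout (tuples and dictionaries) and the category-name mangling are my own, not the system's.

from itertools import product

def instantiate(lhs, rhs, decls):
    """Expand one APSG rule into context-free rules by instantiating its
    feature variables in every admissible way.
    lhs   : (cat, {feature: value or Variable})
    rhs   : list of terminal strings or (cat, {feature: value or Variable})
    decls : {cat: {feature: tuple of declared values}}"""
    occurrences = [lhs] + [x for x in rhs if isinstance(x, tuple)]
    domains = {}
    for cat, feats in occurrences:
        for f, v in feats.items():
            if isinstance(v, str) and v[:1].isupper():         # a variable
                dom = set(decls[cat][f])
                domains[v] = domains.get(v, dom) & dom
    names = sorted(domains)
    rules = []
    for combo in product(*(sorted(domains[n]) for n in names)):
        env = dict(zip(names, combo))
        def ground(sym):
            if not isinstance(sym, tuple):
                return sym                                     # a terminal
            cat, feats = sym
            return cat + "#" + ",".join(
                f + "=" + str(env.get(v, v)) for f, v in sorted(feats.items()))
        rules.append((ground(lhs), [ground(x) for x in rhs]))
    return rules

# e.g. the rule  s => np#[n=!,p=!,c=s], vp#[n=!,p=!]  written with explicit
# variables expands into 2 x 3 = 6 context-free rules:
#   instantiate(("s", {"n": "N", "p": "P"}),
#               [("np", {"n": "N", "p": "P", "c": "s"}),
#                ("vp", {"n": "N", "p": "P"})],
#               decls)     # decls built from the cat declarations above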
1991
32
FEATURE LOGIC WITH WEAK SUBSUMPTION CONSTRAINTS Jochen Dbere IBM Deutschland OmbH Science Center - IKBS P.O. Box 80 08 80 D-7000 Stuttgart 80, Germany ABSTRACT In the general framework of a constraint-based grammar formalism often some sort of feature logic serves as the constraint language to de- scribe linguistic objects. We investigate the ex- tension of basic feature logic with subsumption (or matching) constraints, based on a weak no- tion of subsumption. This mechanism of one- way information flow is generally deemed to be necessary to give linguistically satisfactory de- scriptions of coordination phenomena in such formalisms. We show that the problem whether a set of constraints is satisfiable in this logic is decidable in polynomial time and give a solution algorithm. 1 Introduction Many of the current constralnt-based grammar formalisms, as e.g. FUG [Kay 79, Kay 85], LFG [Kaplan/Bresnan 82], HPSG [Pollard/Sag 87], PATR-II [Shieber et al. 83] and its derivates, model linguistic knowledge in recursive fea- ture structures. Feature (or functional) equa- tions, as in LFG, or feature terms, as in FUG or STUF [Bouma et al. 88], are used as con- straints to describe declaratively what proper- ties should be assigned to a linguistic entity. In the last few years, the study of the for- real semantics and formal properties of logics involving such constraints has made substan- tial progress [Kasper/Rounds 86, Johnson 87, Smolka 88, Smolka 89], e.g., by making precise which sublanguages of predicate logic it corre- sponds to. This paves the way not only for reli- able implementations of these formalisms, but also for extensions of the basic logic with a precisely defined meaning. The extension we present here, weak subsumption constraints, is a mechanism of one-way information flow, often proposed for a logical treatment of coordination in a feature-based unification grammar. 1 It can I Another application would be type inference in a grammar formalism (or programming language) that be informally described as a device, which en- ables us to require that one part of a (solution) feature structure has to be subsumed (be an in- stance of) another part. Consider the following example of a coordina- tion with "and", taken from [Shieber 89]. (1) Pat hired [tcP a Republican] and [NP a banker]. (2) *Pat hired [NP a Republican] and lAP proud of it]. Clearly (2) is ungrammatical since the verb "hire" requires a noun phrase as object com- plement and this requirement has to be ful- filled by both coordinated complements. This subcategorization requirement is modeled in a unification-based grammar generaUy using equations which cause the features of a comple- ment (or parts thereof encoding the type) to get unified with features encoding the requirements of the respective position in the subcategoriza- tion frame of the verb. Thus we could assume that for a coordination the type-encoding fea- tures of each element have to be "unified into" the respective position in the subcategorisation frame. This entails that the coordinated ele- ments are taken to be of one single type, which then can be viewed as the type of the whole coordination. This approach works fine for the verb "hire", but certain verbs, used very fre- quently, do not require this strict identity. (3) Pat has become [NP a banker] and [AP very conservative]. (4) Pat is lAP healthy] and [pp of sound mind]. 
The verb "become" may have either noun- phrase or adjective-phrase complements, "to be" Mlows prepositional and verb phrases in addition, and these may appear intermixed in a coordination. In order to allow for such "polymorphic" type requirements, we want to l~e~ a-type discipline with polymorphic types. 256 state, that (the types of) coordinated arguments each should be an instance of the respective re- quirement from the verb. Expressed in a gen- eral rule for (constituent) coordination, we want the structures of coordinated phrases to be in- stances of the structure of the coordination. Us- ing subsumption constraints the rule basically looks like this: E ~ C and D E~C E~D With an encoding of the types like the one pro- posed in HPSG we can model the subcatego- risation requirements for"to be" and "to be- come" as generalizations of all allowed types (cf. Fig. 1). i n: ] ] NP= v: - AP= v: + bar: 2 bar: 2 VPffi v: + PP= v: - bar: 2 bar: 2 'to be' requires: 'to become' requires: Figure 1: Encoding of syntactic type A similar treatment of constituent coordina- tion has been proposed in [Kaplan/Maxwell 88], where the coordinated elements are required to be in a set of feature structures and where the feature structure of the whole set is defined as the generalisation (greatest lower bound w.r.t. subsumption) of its elements. This entails the requirement stated above, namely that the structure of the coordination subsumes those of its elements. In fact, it seems that especially in the context of set-valued feature structures (cf. [Rounds 88]) we need some method of inheri- tance of constraints, since if we want to state general combination rules which apply to the set-valued objects as well, we would like con- straints imposed on them to affect also their members in a principled way. Now, recently it turned out that a feature logic involving subsumption constraints, which are based on the generally adopted notion of sub- sumption for feature graphs is undecidable (cf. [D rre/Rounds 90]). In the present paper we therefore investigate a weaker notion of sub- sumption, which we can roughly characterize as 257 relaxing the constraint that an instance of a fea- ture graph contains all of its path equivalencies. Observe, that path equivalencies play no role in the subcategorisation requirements in our ex- amples above ...... ~ 2 Feature Algebras In this section we define the basic structures which are possible interpretations of feature de- scriptions, the expressions of our feature logic. Instead of restricting ourselves to a specific in- terpretation, like in [Kasper/Rounds 86] where feature structures are defined as a special kind of finite automata, we employ an open-world se- mantics as in predicate logic. We adopt most of the basic definitions from [Smolka 89]. The mathematical structures which serve us as in- terpretations are called feature algebras. We begin by assuming the pairwise disjoint sets of symbols L, A and V, called the sets of fea- tures (or labels), atoms (or constants) and vari- ables, respectively. Generally we use the letters /,g, h for features, a, b, c for atoms, and z, ~, z for variables. The letters s and t always denote variables or atoms. We assume that there are infinitely many variables. A feature algebra .A is a pair (D ~4, ..4) consisting of a nonempty set D ~t (the domain of.4) and an interpretation .~ defined on L and A such that * a ~4 E D "4 for a E A. (atoms are constants) • Ifa ~ b then a "4 ~ b ~4. 
(unique name as- sumption) • If f is a feature then/~4 is a unary partial function on D ~4. (features are functional) • No feature is defined on an atom. Notation. We write function symbols on the right following the notation for record fields in computer languages, so that f(d) is written dr. If f is defined at d, we write d.f ~, and other- wise d/ T. We use p,q,r to denote strings of features, called paths. The interpretation func- tion .Jr is straightforwardly extended to paths: for the empty path e, ~.4 is the identity on D~4; for a path p = fl ... f-, p~4 is the unary partial function which is the composition of the filnc- tions fi"4.., f.4, where .fl "4 is applied first. A feature algebra of special interest is the Fea- ture Graph Algebra yr since it is canonical in the sense that whenever there exists a solu- tion for a formula in basic feature logic in some feature algebra then there is also one in the Fea- ture Graph Algebra. The same holds if we ex- tend our logic to subsumption constraints (see ~DSrre/Rounds 90]). A feature graph is a rooted and connected directed graph. The nodes are either variables or atoms, where atoms may ap- pear only as terminal nodes. The edges are la- beled with features and for every node no two outgoing edges may be labeled with the same feature. We formalize feature graphs as pairs (s0, E) where So E VUA is the root and E C V x L x (V U A) is a set of triples, the edges. The following conditions hold: 1. If s0EA, thenE=0. 2. If (z, f, s) and (z, f, t) are in E, then s : t. 3. If (z, f, 8) is in E, then E contains edges leading from the root s0 to the node z. Let G - (z0, E) be a feature graph containing an edge (z0, f, s). The subgraph under f of G (written G/f) is the maximal graph (s, E') such that E t C E. Now it is clear how the Feature Graph Algebra ~" is to be defined. D ~r is the set of all feature graphs. The interpretation of an atom a ~r is the feature graph (a, ~), and for a feature f we let G.f 7~ = G/.f, if this is defined. It is easy to verify that ~r is a feature algebra. Feature graphs are normally seen as data ob- jects containing information. From this view- point there exists a natural preorder, called sub- sumptlon preorder, that orders feature graphs according to their informational content thereby abstracting away from variable names. We do not introduce subsumption on feature graphs here directly, but instead we define a subsump- tion order on feature algebras in general. Let .A and B be feature algebras. A simulation between .A and B is a relation A C D ~4 × D v satisfying the following conditions: 1. if (a ~4, d) E A then d = a B, for each atom a, and 2. for any d E D~,e E D B and f E L: if df A ~ and (d,e) E A, then ef B ~ and (dr ~4, ef B) E A. Notice that the union of two simulations and the transitive closure of a simulation are also simulations. A partial homomorphlsm "y between .A and B is a simulation between the two which is a partial function. If.A = B we also call T a partial endomorphism. Definition. Let .A be a feature algebra. The (strong) subsumption preorder ff_A and 258 the weak subsumption preorder ~4 of ~4 are defined as follows: * d (strongly) subsumes e (written d E ~4 e) iff there is an endomorphism "y such that = e. * d wealcly subsumes e (written d ~4 e) iff there is a simulation A such that dAe. It can be shown (see [Smolka 89]) that the subsumption preorder of the feature graph algebra coincides with the subsumption or- der usually defined on feature graphs, e.g. in [Kasper/Rounds 86]. 
Example: Consider the feature algebra de- picted in Fig. 2, which consists of the elements {1, 2, 3, 4, 5, a, b) where a and b shall be (the pic- tures of) atoms and f, g, i and j shall be features whose interpretations are as indicated. i i• simulation A f g 1A3 2A4 2A5 aAa bAb a a b Figure 2: Example of Weak Subsumption Now, element 1 does not strongly subsume 3, since for 3 it does not hold, that its f-value equals its g-value. However, the simulation A demonstrates that they stand in the weak sub- sumption relation: 1 ~ 3. 3 Constraints To describe feature algebras we use a relational language similar to the language of feature de- scriptions in LFG or path equations in PATR- II. Our syntax of constraints shall allow for the forms zp "---- ~q, zp "---- a, zp ~ ~q where p and q are paths (possibly empty), a E A, and z and ~/are variables. A feature clause is a finite set of constraints of the above forms. As usual we interpret constraints with respect to a variable assignment, in order to make sure that variables are interpreted uniformly in the whole set. An assignment is a mapping ~ of variables to the elements of some feature alge- bra. A constraint ~ is satisfied in .,4 under as- signment a, written (A, a) ~ ~, as follows: (.,4, a) ~ zp - vq iff a(z)p A = a(v)q A (.4, a) ~ zp -- a aft a(z)p A if (v)qA. The solutions of a clause C in a feature alge- bra .4 are those assignments which satisfy each constraint in C. Two clauses C1 and C2 are equivalent iff they have the same set of solu- tions in every feature algebra .A. The problem we want to consider is the follow- ing: Given a clause C with symbols from V, L and A, does C have a solution in some feature algebra? We call this problem the weak semiunification problem in feature algebras) 4 An Algorithm 4.1 Presolved Form We give a solution algorithm for feature clauses based on normalization, i.e. the goal is to de- fine a normal form which exhibits unsatisfiabil- ity and rewrite rules which transform each fea- ture clause into normal form. The normal form we present here actually is only half the way to a solution, but we show below that with the use of a standard algorithm solutions can be gener- ated from it. First we introduce the restricted syntax of the normal form. Clauses containing only con- straints of the following forms are called sim- ple: zf --y, z--s, z ~ y where s is either a variable or an atom. Each feature clause can be restated in linear time as an equisatisfiable simple feature clause whose solutions are extensions of the solutions of the original clause, through the introduction of aux- iliary variables. This step is trivial. A feature clause C is called presolved iff it is simple and satisfies the following conditions. ~The anMogous problem for (strong) subsumption constraints is undecidable, even if we restrict ourselves to finite feature algebras. Actually, this problem could be shown to be equivalent to the semiunification prob- lem for rational trees, i.e. first-order terms which may contain cycles. The interested reader is referred to [D~rre/Rounds 90]. C1. If z - ~/is in C, then z occurs exactly once in C. C2. Ifzf-yandzf-zareinC, theny=z. C3. Ifz~vandy~zareinC, thenz~zis in C (transitive closure). C4. Ifz ~V and z f-- z t and Vf -- V t are in C, then z' ~ V' is in C (downward propa- gation closure). In the first step our algorithm attempts to trans- form feature clauses to presolved form, thereby solving the equational part. In the simplifica- tion rules (cf. Fig. 
3) we have adapted some of Smolka's rules for feature clauses including complements [Smolka 89]. In the rules [z/s]C denotes the clause C where every occurrence of z has been replaced with s, and ~ & C denotes the feature clause {~} U C provided ~b ~ C. Theorem 1 Let C be a simple feature clause. Then I. if C can be rewritten to 19 using one of the rules, then 1) i8 a simple feature clause equivalent to C, f. for every non-normal simple feature clause one of the rewrite rules applies, 3. there is no infinite chain C --* U1 --* C2 --, ProoL 3 The first part can be verified straight- forwardly by inspecting the rules. The same holds for the second part. To show the termina- tion claim first observe that the application of the last two rules can safely be postponed until no one of the others can apply any more, since they only introduce subsumption constraints, which cannot feed the other rules. Now, call a variable z isolated in a clause C, if C contains an equation z - 7/and z occurs exactly once in C. The first rule strictly increases the number of isolated variables and no rule ever decreases it. Application of the second and third rule de- crease the number of equational constraints or the number of features appearing in C, which no other rule increase. Finally, the last two rules strictly increase the number of subsump- tion constraints for a constant set of variables. Hence, no infinite chain of rewriting steps may be produced. [] We will show now, that the presolved form can be seen as a nondeterministic finite automaton ~Part of this proof has been directly adapted from [S molka 89]. 259 z-y&C z-z&C zf -1/ gr zf - z & C z g ~ z t z C --4 z--l/ & [z/1/]C, if z occurs in C and z~l/ --, C --+ z~y&zf "--z'gryf "--yt&zt~y'&C if z t ~ ~ ~ C (1) (2) (3) (4) Ca) Figure 3: Rewriting to presolved form with e-moves and that we can read off solutions from its deterministic equivalent, if that is of a special, trivially verifiable, form, called clash- bee. 4.2 The Transition Relation 6c of a Presolved Clause C The intuition behind this construction is, that subsumption constraints basically enfoice that information about one variable (and the space teachable hom it) has to be inherited by (copied to) another variable. For example the con- straints z H y and zp - a entail that also lip - a has to hold. 4 Now, if we have a con- straint z ~ T/, we could think of actually copying the information found under z to y, e.g. zf - z ~ would be copied to 1/f - 1/t, where 1/I is a new variable, and z I would be linked to yl by z p ~ ?/. However, this treatment is hard to control in the presence of cycles, which always can occur. In- stead of actually copying we also can regard a constraint z g 7/as a pointer ¢rom ~ back to z leading us to the information which is needed to construct the local solution of ~. To extend this view we regard the whole p~esolved chase C as a finite automaton: take variables and atoms as nodes, a feature constraint as an arc labeled with the feature, constraints z - s and 1/~ z as e-moves horn z to s or ~/. We can show then that C is unsatisfiable iff there is some z hom which we reach atom a via path p such that we can also reach b(~ a) via p or there is a path starting from z whose proper prefix is p. Formally, let NFA Arc of presolved clause C be ~F~rora this point of view the difference between weak and strong subsumption can be captured in the type of information they enforce to be inherited. 
Strong subsumption requires path equivalences to be inherited (x ~ y and ~p -" zq implies yp - yq), whereas weak subsumption does not. 260 defined as follows. Its states are the variables occurring in C (Vc) plus the atoms plus the states qF and the initial state q0. The set of final states is Vc U {qp}. The alphabet of Arc is vcu z, u A u {e}. 5 The transition relation is defined as follows: s 6c := vc} o {(a,a,q~)la~ A} u I • g c} u f, I -" c} v • c} As usual, let ~c be the extension of 6c to paths. Notice that zpa E L(Afc) iff (z,p,a) E ~c. The language accepted by this automaton con- tains strings of the forms zp or zpa, where a string zp indicates that in a solution a the ob- ject ol(z)p ~t should be defined and zpa tells us further that this object should be a A. A set of strings of (V x L*) U (V x L* x A) is called clash-free iff it does not contain a string zpa together with zpb (where a ~ b) or together with zpf. It is clear that the property of a reg- ular language L of being dash-free with respect to L and A can be read off immediately from a DFA D for it: if D contains a state q with 5(q, a) E F and either 6(q, b) E F (where a ~ b) or 6(q, f) E F, then it is not clash-free, other- wise it is. We now present our centrM theorem. Theorem 2 Let Co be a feature clause, C its presolved form and Arc the NFA as constructed sir L or A are infinite we restrict ourselves to the sets of symbols actually occurring in C. 6Notice that if x - s E C, then either s is an atom or occurs only once. Thus it is pointless to have an arc fr,)m s to ~, since we either have already the maximum of information for s or ~ will not provide any new arcs. above. Then the following conditions are equiv- alent: i. L(Are) is cZash- ,ee YL There exists a finite feature algebra .A and an assignment c~ such that (.A,c~) ~ Co, provided the set of atoms is finite. 3. There exists a feature algebra .4 and an as- 8ignraent ol such that (.A, c~) ~ Co. Proof. see Appendix A. Now the algorithm consists of the following sim- ple or well-understood steps: 1: (a) Solve the equationai constraints of C, which can be done using standard uni- fication methods, exemplified by rules 1) to 3). (b) Make the set of weak subsumption constraints transitively and "down- ward" closed (rules 4) and 5)). 2: The result interpreted as an NFA is made deterministic using standard methods and tested of being clash-free. 4.3 Determining Clash-Freeness Di- rectly For the purpose of proving the algorithm cor- rect it was easiest to assume that clash-freeness is determined after transforming the NFA of the presolved form into a deterministic automaton. However, this translation step has a time com- plexity which is exponential with the number of states in the worst case. In this section[A we consider a technique to determine clash-freeness directly from the NFA representation of the pre- solved form in polynomial time. We do not go into implementational details, though. Instead we are concerned to describe the different steps more from a logical point of view. It can be assumed that there is still room left for opti- mizations which improve ef[iciency. In a first step we eliminate all the e-transitions from the NFA Arc- We will call the result still Arc. For every pair of a variable node z and an atom node a let Arc[z,a] be the (sub-)automaton of all states of Arc reachable horn z, but with the atom a being the only final state. Thus, Afc[z,g] accepts exactly the lan- guage of all strings p for which zpg E L(Arc). 
Likewise, let Afc[z,~] be the (sub-)automaton of all states olaf C reachable from z, but where every atom node besides a is in the set of fi- nal states as well as every node with an outgo- ing feature arc. The set accepted by this ma- chine contains every string p such that zpb E L(ArC), (b ~ a) or zpf E L(Arc). If and only if the intersection of these two machines is empty for every z and a, L(Arc) is clash-free. 4.4 Complexity Let us now examine the complexity of the dif- ferent steps of the algorithm. We know that Part la) can be done (using the efficient union/find technique to maintain equivalence classes of variables and vectors of features for each representative) in nearly lin- ear time, the result being smaller or of equal size than Co. Part lb) may blow up the clause to a size at most quadratic with the number of different variables n, since we cannot have more subsumption constraints than this. For every new subsumption constraint, trying to ap- ply ruh 4) might involve at most 2n membership test to check whether we are actually adding a new constraint, whereas for rule 5) this number only depends on the size of L. Hence, we stay within cubic time until here. Determining whether the presolved form is dash-free from the NPA representation is done in three steps. The e-free representation of Arc does not increase the number of states. If n,a and l are the numbers of variables, atoms and features resp. in the initial clause, then the number of edges is in any case smaller than (n + a) ~ • l, since there are only n + a states. This computation can be performed in time of an order less than o((~z + a)3). Second, we have to build the intersections for Arc[z,a] and Arc[z,g] for every z and a. Inter- section of two NFAs is done by building a cross- product machine, requiring maximally o((~z + a) 4 • l) time and space. ¢ The test for emptiness of these intersection machines is again trivial and can be performed in constant time. Hence, we estimate a total time and space com- plexity of order n- a. (Tz + a) 4 • I. 7This is an estimate for the number of edges, since the nmuber of states is below (n + a) 2. As usual, we assume appropriate data structures where we can neglect the order of access times. Probably the space (and time) complexity can be reduced hrther, since we actually do not need the representations of the intersection machines besides for testing, whether they can accept anything. 261 5 Conclusion We proposed an extension to the basic feature logic of variables, features, atoms, and equa- tional constraints. This extension provides a means for one-way information passing. We have given a simple, but nevertheless completely formal semantics for the logic and have shown that the satisfiability (or unification) problem in the logic involving weak subsumption con- straints is decidable in polynomial time. Fur- thermote, the first part of the algorithm is a sur- prisingly simple extension of a standard unifica- tion algorithm for feature logic. We have formu- lated the second part of the problem as a simple property of the regular language which the out- come of the first part defines. Hence, we could make use of standard techniques from automata theory to solve this part of the problem. The algorithm has been proved to be correct, com- plete, and guaranteed to terminate. There are no problems with cycles or with infinite chains of subsumption relations as generated by a con- straint like z ~ zf. 
s The basic algorithmic requirements to solve the problem being understood, the challenge now is to find ways how solutions can be found in a more incremental way, if we already have solu- tions for subsets of a clause. To achieve this we plan to amalgamate more closely the two parts algorithms, for instance, through implementing the check for clash-freeness also with the help of (a new form of) constraints. It would be in- teresting also from a theoretical point of view to find out how much of the complexity of the second part is really necessary. Acknowledgment I am indebted to Bill Romtds for reading a first draft of this paper and pointing out to me a way to test dash-freeness in polynomial time. Of course, any remaining errors are those of the author. I would also llke to thank Gert Smolka for giving valuable comments on the first draft. References [Bouma et at. 88] Gosse Bouma, Esther K~nig and Hans Uszkoreit. A flexible graph-lmification for- realism and its application to natural-language processing. In: IBM Journal of Research and De- velopment, 1988. SSee [$hieber 89] for a discussion of this problem. [D~rre/Rounds 90] Jochen D~rre and Willimn C. Rounds. On Subsuraption and Seminnification in Feature Algebras. In Proceedings of the 5th An- nual Symposium on Logic in Computer Science, pages 300-310, Philadelphia, PA., 1990. Also ap- pears in: Journal of Symbolic Computation. [Johnson 87] Mark Jolmson. Attribute-Value Logic and the Theory of Grammar. CSLI Lecture Notes 16, CSLI, Stanford University, 1987. [Kaphm/Bresnan 82] Ronald M. Kaplan and Joan Bresnan. Lexleal Functional Granunar: A For- real System for Grammatical Representation. In: J. Bresnan (ed.), The Mental Representation o] Grammatical Relations. MIT Press, Cambridge, Massachusetts, 1982. [Kaplan/Maxwell 88] Ronald M. Kaplan and John T. Maxwell HI. Constituent Coordination in Lexieal-Functional Grammar. In: Proc. o] COL. ING'88, pp.303-305, Budapest, Hmtgary, 1988. [Kasper/Rounds 86] Robert T. Kasper and Williant C. Rounds. A Logical Semantics for Feature Structures. In: Proceedings o] the ~th Annual Meeting o] the A CL. Columbia University, New York, NY, 1986. [Kay 79] Martin Kay. Functional Grmnmar. In: C. Chiarello et al. (eds.) Proceedings o] the 5th An- nual Meeting o] the Berkeley Linguistic Society. 1979. [Kay 85] Martin Kay. Parsing in Functional Unifi- cation Grammar. In: D. Dowry, L. Karttunen, and A. Zwieky (eds.) Natural Language Parsing, Cambridge, England, 1985 [Pollard/Sag 87] Carl Pollard and Ivan A. Sag. In]ormation-Based Syntax and Semantics, Voi. 1. CSLI Lecture Notes 13, CSLI, Stanford Uni- versity, 1987. [Rounds 88] William C. Rounds. Set Values for Unification-Based Grammar Formalisms and Logic Programming. CSLI-Report 88-129, CSLI, Stanford University, 1988. [Shieber et al. 83] Stuart M. Shiebcr, Hans Uszko- reit, Fernando C.N. Perelra, J.J. Robinson, M. Tyson. The formalism and implementation of PATR-II. In: J. Bresnan (ed.), Research on In- teractive Acquisition and Use o] Knowledge, SRI International, Menlo Park, CA, 1983. [Shieber 89] Stuart M. Shieber. Parsing and Type Inference for Nahtral and Computer Languages. Technical Note 460, SRI International, Meldo Park, CA, March 1989. [Smolka 88] Gert Smolka. A Feature Logic with 5ub- sorts. LILOG-Report 33, IWBS, IBM Deutsch- land, W. Germany, May 1988. To appear in the Journal of Automated Reasoning. [SmoUm 89] Gert Smollm. Feature Constraint Log- ics ]or Unification Grammars. I"WBS Report 93, IWBS, IBM Deutschland, W. Germany, Nov. 1989. 
To appear in the Proceedings of the Work- shop on Unification Formalisms--Syntax, Se- mantics and Implementation, Titisee, The MIT Press, 1990. 262 Appendix A: Proof of Theorem 2 From Theorem I we know that C is equivalent to Co, i.e. it suffices to show the theorem for the existence of solutions of C in 2) and 3). Since 2) =~ 3) is obvious, it remains to show 1) =~ 2) and 3 7 ::~ 1). 1) ~ 2): We construct a finite model (¢4, a) whose domain contains partial functions from paths to atoms (D A C L*-+ A). Interpretation and variable assignment are given as follows: • a "~ = {(e,a)} for every atom a (the function mapping only the empty string to a) • for every ] • L,X • L*--M: X] "4 = {(p, a)I (Yp, a) • x} • a(w) = {(p,a) lzpa •L(Afc)}, which is apar- tial function, due to 1 ). Now let the elements of the dommn be, besides in- terpretations of atoms, just those objects (partial functions) which can be reached by application of some features to some a(z). • DA = {a(z)q "41 =eVe, q•L'} u {.-~ IaeA}. To see that D ~t is finite, we first observe that the domain of each a(z) is a regular set ({Pl zpaEA/'v, aEA}) and the range is finite. Now, for a regular set R call {p [ qp•R} the suffix language of R with respect to string q. It is clear, that there are only finitely many suit-ix languages, since each corresponds to one state in the minimal finite au- tomaton for R. But then also the number of partial functions "reachable" from an a(z) is finite, since the domain of a(z)q "4 is a suffix language of the do- main of a(=). We now show that given 17 the model (.A~ c~) satisfies all constraints in C. • Ifm "-- a • C: za e Z~(Afc) ~ (,,a) e o,(z). Now we know from I) that no other pair is in a(=), i.e. a(z) = a "a. • If m - y • C: Since = occurs only once in C, the only transition from z is (z, e, y), thus (=,p,a) • ~c m (V,p,a) • ~. We conclude that (p, a t • 0t(=) itf (p, a) • a(V ). • If my "- y • C: Let (p,a) • a(m)/"4. Then (z, ]p,a) • 5c. This implies that there is a state z' reachable with n e-moves (n > 0) from m such that z'f - V' • C and (y',p,a) • 5c, i.e. z' is the last state before f is consumed on a path consuming .fpa. But now, since e- moves on such a chain correspond to subsump- tion constraints (none of the variables in the cltain is isolated) and since C is transitively closed for subsumption constraints, C has to contain a constraint z' ~ z. But the last con- dition for normal form now tells us, that also y' ~ y is in C, implying (y,e,V') • 5c. Hence, (~, p, ~) • ~c and (p,.) • ~(v). Conversely, let (p, a) • (z(y). Then (11,P, a) • 6c. Front the construction also (z,/, V) • 6c, hence (Iv,,,) • ,~(=) and (p, a) • ~,(~.)f~. 263 • If x ~ V • C: The simulation witnessing a(z) ~a a(V) is simply the subset relation. Suppose (p, a) • a(m). We conclude (z,p, a) • ~c, but ~o (v, e, =) • ~c. Hence, (V,P, a) • ic and (p, a t E a(y). In order to show the other direction let us first show a property needed in the proof. Lemma 1 /f (.A, a) is a model/or presol~ed elaase c ,,.,t (=,p, s) e ,~c, the. o,(s) ~a ,.(,~)pa. (z! s = a let ez(a) = a A/or this p~trpose.) Proof. We show by induction over the defmltion of 5c that, given the condition above, there exists a simulation a in .,4 such that a(s)Aa(z)p'4.. 1. p=eandz=v: A=ID. 2. p = e and lt~z • C: since ot is asolution, there exists a simulation A with cr(y)Aot(z) [= ~(~)~l. 3. p = f and zf - y • C: A =ID, since ~(=)/a = ~(v). 4. p= e and=- s 6 C: A =ID, sincea(z) = ~(s). 5. 
p = qr ,~d (=,q,,) • ;c and (V,,',s) • ~c: by induction hypothesis there exist Al and A= such that ~,(V),',,,,(=)q "4 and ,.(s)~,.(y),-.'. Let A = (At O A=)* (the transitive clo- sure of their union), then a(y)Aa(z)q "4 and a(s)Aa(y)r "a. But now, since oz(y)r "4 I and A is a simulation, also cr(y)rAAot(z)qAr "~. H~ce, ~,(s)a~,(=)(q,') "~. o let us proof 3) ~ 1) of the main theorem by Now contradiction. s) ~ 1): Suppose t) does not hold, but (.,4, c,) ~ C. there is a string zpa • L(A/'c) such that Case 1: =pb • L(JV*c) where a # b. Then know with lemma 1 that a "4 ~A a(m)p.a and bA ~.4 ~x(z)p.4. But this Contradicts condition 1) for a simulation: oz(z)f t = a "t # b "4 = ~,(=)p,4. Case 2: zp] 6 L(A/c). As in case 1) we have ~(z)pjt __ aA. Ffonl which entails that ].4 has to be defined for ~(z)p.4, a contradiction. This completes the proof. []
WORD-SENSE DISAMBIGUATION USING STATISTICAL METHODS Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer IBM Thomas J. Watson Research Center P.O. Box 704 Yorktown Heights, NY 10598 ABSTRACT We describe a statistical technique for assign- ing senses to words. An instance of a word is as- signed a sense by asking a question about the con- text in which the word appears. The question is constructed to have high mutual information with the translation of that instance in another lan- guage. When we incorporated this method of as- signing senses into our statistical machine transla- tion system, the error rate of the system decreased by thirteen percent. INTRODUCTION An alluring aspect of the statistical ~p- proach to machine translation rejuvenated by Brown et al. [Brown et al., 1988, Brown et al., 1990] is the systematic framework it provides for attacking the problem of lexical disam- biguation. For example, the system they de- scribe translates the French sentence Je vais prendre la ddcision as I will make the decision, correctly interpreting prendre as make. The statistical translation model, which supplies English translations of French words, prefers the more common translation take, bnt the trigram language model recognizes that the three-word sequence make the decision, is much more probable than take the decision.. The system is not always so successfifl. It incorrectly renders Je vais prendre ma propre ddcision as 1 will take my own decision. The language model does not realize that take my own decision is improbable because take and decision no longer fall within a single trigram. Errors such as this are common because the statistical models only capture local phe- nomena; if the context necessary to determine a translation falls outside the scope of the models, the word is likely to be translated in- correctly, t[owever, if the relevant context is encoded locally, the word should be translated correctly. We can achieve this within the tra- ditional paradigm of analysis, transfer, and synthesis by incorporating into the analysis phase a sense-disambiguation component that assigns sense labels to French words. If pren- dre is labeled with one sense in the context of ddcision but with a different sense in other contexts, then the translation model will learn front trMning data that the first sense usually translates to make, whereas the other sense usuMly translates to take. Previous efforts a.t algorithmic disambigua- tion of word senses [Lesk, 1986, White, 1988, Ide and V6ronis, 1990] have concentrated on information that can be extracted from elec- tronic dictionaries, and focus, therefore, on senses as determined by those dictionaries. llere, in contrast, we present a procedure for constructing a sense-disambiguation compo- nent that labels words so as to elucidate their translations in another language. We are con- 264 The proposal Les propositions will not / ne seront pas now be implemented mises en application maintenant Figure 1: Alignment Example cerned about senses as they occur in a dic- tionary only to the extent that those senses are translated differently. The French noun intdr~t, for example, is translated into Ger- man as either Zins or [nteresse according to its sense, but both of these senses are trans- lated into English as interest, and so we make no attempt to distinguish them. STATISTICAL TRANSLATION Following Brown et al. 
[Brown et al., 1990], we choose as the translation of a French sen- tence F that sentence E for which Pr (E[F) is greatest. By Bayes' rule, Pr (ELF) = Pr (E) Pr Pr(F) (1) Since the denominator does not depend on E, the sentence for which Pr (El/7') is great- est is also the sentence for which the product Pr (E) Pr (FIE) is greatest. The first factor in this product is a statistical characteriza- tion of the English language and the second factor is a statistical characterization of the process by which English sentences are trans- lated into French. We can compute neither factors precisely. Rather, in statistical trans- lation, we employ models from which we can obtain estimates of these values. We cM1 the model from which we compute Pr (E) the lan- guage model and that from which we compute Pr(FIE ) the translation model. The translation model used by Brown et al. [Brown et al., 1990] incorporates the concept of an alignment in which each word in E acts independently to produce some of the words in F. If we denote a typical alignment by A, then we can write the probability of F given E as a sum over all possible alignments: Pr (FIE) = ~ Pr (F, AlE ) . (2) A Although the number of possible alignments is a very rapidly growing function of the lengths of the French and English sentences, only a tiny fraction of the alignments contributes sub- stantiMly to the sum, and of these few, one makes the grea.test contribution. We ca.ll this most probable alignment the Viterbi align- ment between E a.nd F. Tile identity of tile Viterbi alignment for a pair of sentences depends on the details of the translation model, but once the model is known, probable alignments can be discovered algoritlunically [Brown et al., 1991]. Brown et al. [Brown et al., 1990], show an example of such an automatically derived alignment in their Figure 3. (For the reader's convenience, we ha.re reproduced that figure here as Figure 1.) 265 In a Viterbi alignment, a French word that is connected by a line to an English word is said to be aligned with that English word. Thus, in Figure 1, Les is aligned with The, propositions with proposal, and so on. We call a p~ir of aligned words obtained in this way a connection. From the Viterbi alignments for 1,002,165 pairs of short French and English sentences from the Canadian Hansard data [Brown et al., 1990], we have extracted a set of 12,028,485 connections. Let p(e, f) be the probability that a connection chosen at random fi:om this set will connect the English word e to the French word f. Because each French word gives rise to exactly one connection, the right marginM of this distribution is identical to the distribution of French words in these sen- tences. The left marginal, however, is not the same as the distribution of English words: English words that tend to produce several French words at a time are overrepresented while those that tend to produce no French words are underrepresented. SENSES BASED ON BINARY QUESTIONS Using p(e, f) we can compute the mutuM information between a French word and its English mate in a connection. In this section, we discuss a method for labelling a word with a sense that depends on the context in which it appears in such a way as to increase the mutual information between the members of a connection. In the sentence Je vats prendre .ma pro- pre ddeision, the French verb prendre should be translated as make because the obiect of prendre is ddcision. 
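A minimal sketch (not from the paper; the helper names and the toy connection pairs below are invented) of how a table such as p(e, f) and the resulting pointwise mutual information of each aligned pair could be estimated from a list of extracted connections:

```python
from collections import Counter
from math import log2

def mutual_information_table(connections):
    """Estimate p(e, f) from a list of (english, french) connection pairs and
    return the pointwise mutual information (in bits) of each observed pair."""
    n = len(connections)
    joint = Counter(connections)                      # counts of (e, f) connections
    e_marginal = Counter(e for e, _ in connections)   # English marginal counts
    f_marginal = Counter(f for _, f in connections)   # French marginal counts

    mi = {}
    for (e, f), c in joint.items():
        p_ef = c / n
        p_e = e_marginal[e] / n
        p_f = f_marginal[f] / n
        mi[(e, f)] = log2(p_ef / (p_e * p_f))
    return mi

# Toy usage with invented connection pairs:
sample = [("make", "prendre"), ("take", "prendre"), ("decision", "décision"),
          ("the", "la"), ("the", "le"), ("make", "prendre")]
for pair, score in sorted(mutual_information_table(sample).items(), key=lambda kv: -kv[1]):
    print(pair, round(score, 2))
```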
If we replace ddcision by voiture, then prendre should be translated as take to yield [ will take my own ear. In these examples, one can imagine assigning a sense to prendre by asking whether the first noun to the right of prendre is ddeision or voiture. We say that the noun to the right is the informant for prendre. In I1 doute que les ndtres gagnent, which means He doubts that we will win, the French word il should be translated as he. On the other hand, in II faut que les n6tres gagnent, which means It is necessary that we win, il should be translated as it. Here, we can de- termine which sense to assign to il by asking about the identity of the first verb to its right. Even though we cannot hope to determine the translation of il from this informant unam- biguously, we can hope to obtain a significant amount of information about the translation. As a final example, consider the English word is. In the sentence I think it is a prob- lem, it is best to translate is as est as in Je pense que c'est un probl~me. However, this is certainly not true in the sentence [ think there is a problem, which translates as Je pense qu'il y aun probl~me. Here we can reduce the en- tropy of the distribution of the translation of is by asking if the word to the left is there. If so, then is is less likely to be translated as est than if not. Motivated by examples like these, we in- vestigated a simple method of assigning two senses to a word w by asking a single binary question about one word of the context in which w appears. One does not know before- hand whether the informant will be the first noun to the right, the first verb to the right, or some other word in the context of w. How- ever, one can construct a question for each of a number of candidate informant sites, and then choose the most informative question. Given a potential informant such as the first noun to the right, we can construct a question that has high mutual information with the translation of w by using the flip-flop algo- rithm devised by Nadas, Nahamoo, Picheny, and Poweli [Nadas et aL, 1991]. To under- stand their algorithm, first imagine that w is a French word and that English words which are possible translations of w have been divided into two classes. Consider the prol>lem of con- structing 4. 1)inary question about the poten- tial inform ant th a.t provides maximal inform a- tion about these two English word classes. If the French vocabulary is of size V, then there 266 are 2 v possible questions, tlowever, using the splitting theorem of Breiman, Friedman, O1- shen, and Stone [Breiman et al., 1984], it is possible to find the most informative of these 2 v questions in time which is linear in V. The flip-flop Mgorithm begins by making an initiM assignment of the English transla- tions into two classes, and then uses the split- ting theorem to find the best question about the potential informant. This question divides the French vocabulary into two sets. One can then use the splitting theorem to find a di- vision of the English translations of w into two sets which has maximal mutual informa- tion with the French sets. In the flip-flop al- gorithm, one alternates between splitting the French vocabulary into two sets and the En- glish translations of w into two sets. After each such split, the mutual information be- tween the French and English sets is at least as great as before the split. 
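The alternation just described can be sketched schematically. The fragment below is a simplified stand-in, not the implementation of Nadas et al.: it represents the training material as (informant value, translation) co-occurrence pairs, uses the two-class shortcut credited above to Breiman et al. (order the items of one vocabulary by the share of their occurrences falling in one class of the other vocabulary and try only the threshold splits), and alternates the two splits; all names are illustrative.

```python
from collections import Counter
from math import log2

def mi_bits(pairs, x_set, y_set):
    """Mutual information (bits) between the indicators 'x in x_set' and
    'y in y_set', estimated over the observed (x, y) pairs."""
    n = len(pairs)
    joint = Counter(((x in x_set), (y in y_set)) for x, y in pairs)
    px = Counter(a for a, _ in joint.elements())
    py = Counter(b for _, b in joint.elements())
    return sum((c / n) * log2(c * n / (px[a] * py[b]))
               for (a, b), c in joint.items())

def best_threshold_split(pairs, fixed_set, split_side):
    """Order the items to be split by the share of their occurrences whose
    partner falls in fixed_set, then keep the threshold split of that
    ordering with the highest mutual information."""
    idx = 0 if split_side == "x" else 1
    items = {p[idx] for p in pairs}
    def share(item):
        hits = [p[1 - idx] in fixed_set for p in pairs if p[idx] == item]
        return sum(hits) / len(hits)
    ordered = sorted(items, key=share)
    best_set, best_mi = {ordered[0]}, -1.0
    for cut in range(1, len(ordered)):
        s = set(ordered[:cut])
        m = mi_bits(pairs, s, fixed_set) if split_side == "x" else mi_bits(pairs, fixed_set, s)
        if m > best_mi:
            best_set, best_mi = s, m
    return best_set, best_mi

def flip_flop(pairs, iterations=10):
    """Alternate between re-splitting the informant values (x side) and the
    candidate translations (y side); the MI never decreases across steps."""
    translations = sorted({y for _, y in pairs})
    t_set = set(translations[: max(1, len(translations) // 2)])  # arbitrary start
    v_set, mi = set(), 0.0
    for _ in range(iterations):
        v_set, mi = best_threshold_split(pairs, t_set, "x")
        t_set, mi = best_threshold_split(pairs, v_set, "y")
    return v_set, t_set, mi
```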
Since the mutual information is bounded by one bit, the process converges to a partition of the French vocab- ulary that has high mutual information with the translation of w. A PILOT EXPERIMENT We used the flip-flop algorithm in a pilot experiment in which we assigned two senses to each of the 500 most common English words and two senses to each of the 200 most com- mon French words. For a French word, we considered ques- tions about seven informants: the word to the left, the word to the right, the first noun to the left, the first noun to the right, the first verb to the left, the first verb to the right, and the tense of either the current word, if it is a verb, or of the first verb to the left of the current word. For an English word, we only considered questions about the the word to the left and the word two to tim left. We re- stricted the English questions to the l)revious two words so that we could easily use them in our translation system which produces an English sentence from left to right. When a potential informant did not exist, because, say there was no noun to the left of some Word: Informant: Information: prendre Right noun .381 bits Sense 1 TERM_WORD mesure note exemple temps initiative part Sense 2 d~cision parole connaissance engagement fin retr~ite Common informant values for each sense Pr(English [ Sense 1) Pr(English [ Sense 2) to_take .433 to_make .061 to_do .051 to_be .045 to_make .186 to-speak .105 to_rise .066 to_take .066 to_be .058 decision .036 to-get .025 to_have .021 Probabilities of English translations Figure 2: Senses for the French word prendre word in a particular sentence, we used the spe- cial word, TERM_WORD. To find the nouns and verbs in our French sentences, we used the tagging Mgorithm described by MeriMdo [Merialdo, 1990]. Figure 2 shows the question that was con- str,cted for tile verb prendre. The noun to the right yielded the most information, .381 bits, about the English translation of prendre. The box in the top of the figure shows the words which most frequently occupy that site, that is, tile nouns which appear to the right of prendre with a probability greater than one part in fifty. All instance of prendre is assigned the first or second sense depending on whether the first noun to the right appears in the left- ha.nd or the right-hand column. So, for ex- 267 Word: Informant: Information: vouloir Verb tense .349 bits Word: Informant: Information: del)uis Word to the right .738 bits Sense 1 Sense 2 3rd p sing present 1st p sing present 3rd p plur present 1st p pint present 2nd p pint present 3rd p sing imperfect 1st p sing imperfect 3rd p sing future 1st p sing conditional 3rd p sing conditional 3rd p plur conditional 3rd p plur subjunctive 1st p plur conditional Common informant values for each sense Sense 1 longtemps de UR quelques denx 1 plus trois Sense 2 le la l' ce les 1968 Comnmn informant values for each sense Pr(English[Sense 1) Pr(English [ Sense 2) to_want .484 to_mean .056 to_be .056 to_wish .033 to_rear .022 to_like .020 toJike .391 to_want .169 to_have .083 to_wish .066 me .029 Probabilities of English translations Figure 3: Senses for the French word vouloir ample, if the noun to the right of prendre is ddeision, parole, or eonnaissance, then pren- dre is assigned the second sense. The box at the bottom of the figure shows the most prob- able translations of each of the two senses. Notice that the English verb to_make is three times as likely when prendre has the second sense as when it has the first sense. 
People make decisions, speeches, and acquaintances, they do not take them. Figure 3 shows our results for the verb vouloir. Here, the best informant is the tense of vouloir. The first sense is three times more likely than the second sense to translate as to_want, but twelve times less likely to trans- late as to_like. In polite English, one says I would like so and so more commonly than [ would want so and so. Pr (English I Sense 1) Pr (English I Sense 2) for .432 last .123 long .102 past .078 over .027 in .022 overdue .021 since .772 from .040 Probabilities of English translations Figure 4: Senses for the French word depuis Tile question in Figure 4 reduces the en- tropy of the translation of the French prepo- sition depuis by .738 bits. When depuis is fol- lowed by an article, it translates with proba- bility .772 to .since, and otherwise only with probability .016. Finally, consider the English word cent. In our text, it is either a denomination of cur- rency, in which case it is usually preceded by a number and translated as c., or it is the second half of per cent, in which case it is pre- ceded by per and transla,ted along with per as ~0. The results in Figure 5 show that the al- gorithm has discovered this, and in so doing has reduced the entropy of the translation of cent by .378 bits. 268 Word: cent Informant: Word to the left Information: .378 bits Sense 1 Sense 2 per 0 8 5 2 a one 4 7 Common informant values for each sense Pr(French I Sense 1) Pr(French [Sense 2) % .891 c. .592 cent .239 sou .046 % .022 Probabilities of French translations Figure 5: Senses for the English word cent Pleased with these results, we incorporated sense-assignment questions for the 500 most common English words and 200 most com- mon French words into our translation sys- tem. This system is an enhanced version of the one described by Brown et al. [Brown et al., 1990] in that it uses a trigram lan- guage model, and has a French vocabulary of 57,802 words, and an English vocabulary of 40,809 words. We translated 100 randomly selected Hansard sentences each of which is 10 words or less in length. We judged 45 of the resultant translations as acceptable as compared with 37 acceptable translations pro- duced by the same system running without sense-disambiguation questions. FUTURE WORK Although our results are promising, this particular method of assigning senses to words is quite limited. It assigns at most two senses to a word, and thus can extract no more than one bit of information about the translation of that word. Since the entropy of the transla- tion of a common word can be as high as five bits, there is reason to hope that using more senses will fitrther improve the performance of our system. Our method asks a single ques- tion about a single word of context. We can think of tlfis as the first question in a deci- sion tree which can be extended to additional levels [Lucassen, 1983, Lucassen and Mercer, 1984, Breiman et al., 1984, Bahl et al., 1989]. We are working on these and other improve- ments and hope to report better results in the future. REFERENCES [Bahl et aL, 1989] BMd, L., Brown, P., de Souza, P., and Mercer, R. (1989). A tree-based statistical language model for natural language speech recognition. IEEE Transactions on Acoustics, Speech and Sig- nal Processing, 37:1001-1008. [Breiman et ai., 1984] Breiman, L., Fried- man, J. tI., Olshen, R. A., and Stone, C. J. (1984). Classification and Regres- sion Trees. Wadsworth & Brooks/Cole Ad- vanced Books & Software, Monterey, Cali- fornia. 
[Brown et aL, 1990] Brown, P. F., Cocke, J., DellaPietra, S. A., DellaPietra, V. J., Je- linek, F., Lafferty, J. D., Mercer, R. L., and Roossin, P. S. (1990). A statistical ap- l)roach to machine translation. Computa- tional Linguistics, 16(2):79--85. [Brown et al., 1988] Brown, P. F., Cocke, J., DellaPietra, S. A., DellaPietra, V. J., Je- linek, F., Mercer, R. L., and Roossin, P. S. (1988). A statistical approach to language translation. I!1 Proceedings of the 12th In- ternational Conference on Computational Linguistics, Budapest, Hungary. [Brown et aL, 1991] Brown, P. F., DellaPi- etra, S. A., DellaPietta, V. J., and Mercer, R. L. (1991). Parameter estimation for ma- chine translation. In preparation. [hie and V@onis, 1990] Ide, N. and V6ronis, .I. (1990). Mapping dictionaires: A spread- 269 ing activation approach. I:! Proccedil~!ls of the Sixth Annual Conferen~:e of the UII' Centre for the New Oxford English Dictio- nary and Text Research, pages 52-6,t, Wa- terloo, Canada. [Lesk, 1986] Lesk, M. E. (1986). Auto- mated sense disambiguation using machine- readable dictionaries: How to tell a pine cone from an ice cream cone. In Proceed- ings of the SIGDOC Conference. [Lncassen, 1983] Lucassen, J. M. (1983). Dis- covering phonemic baseforms automati- cally: an information theoretic approach. Technical Report RC 9833, IBM Research Division. [Lucassen and Mercer, 1984] Lucassen, J. M. and Mercer, R. L. (1984). An information theoretic approach to automatic determi- nation of phonemic baseforms. In Proceed- ings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 42.5.1-42.5.4, San Diego, California. [Meria]do, 1990] Merialdo, B. (1990). Tag- ging text with a probabilistic model. In Proceedii~gs of the IBM Natural Language ITL, pages 161-172, Paris, France. [Nadas et at., 1991] Nadas, A., Nahamoo, D., Picheny, M. A., and Powell, J. (1991). An iterative "flip-flop" approximation of the most informative split in the construc- tion of decision trees. In Proceedings of the IEEE International Conference on Acous- tics, Speech and Signal Processing, Toronto, Canada. [White, 1988] White, J. S. (1988). Deter- mination of lexical-semantic relations for multi-lingual terminology structures. In Relational Models of the Lexicon,. Cam- bridge University Press, Cambridge, OK. 270
A STOCHASTIC PROCESS FOR WORD FREQUENCY DISTRIBUTIONS

Harald Baayen*
Max-Planck-Institut für Psycholinguistik
Wundtlaan 1, NL-6525 XD Nijmegen
Internet: [email protected]

*I am indebted to Klaas van Ham, Richard Gill, Bert Hoeks and Erik Schils for stimulating discussions on the statistical analysis of lexical similarity relations.

ABSTRACT

A stochastic model based on insights of Mandelbrot (1953) and Simon (1955) is discussed against the background of new criteria of adequacy that have become available recently as a result of studies of the similarity relations between words as found in large computerized text corpora.

FREQUENCY DISTRIBUTIONS

Various models for word frequency distributions have been developed since Zipf (1935) applied the zeta distribution to describe a wide range of lexical data. Mandelbrot (1953, 1962) extended Zipf's distribution 'law'

    f_i = K / i^γ,    (1)

where f_i is the sample frequency of the i-th type in a ranking according to decreasing frequency, with the parameter B,

    f_i = K / (B + i)^γ,    (2)

by means of which fits are obtained that are more accurate with respect to the higher frequency words. Simon (1955, 1960) developed a stochastic process which has the Yule distribution

    f_i = A · B(i, ρ + 1),    (3)

with the parameter A and B(i, ρ + 1) the Beta function in (i, ρ + 1), as its stationary solutions. For i → ∞, (3) can be written as

    f_i ≈ Γ(ρ + 1) · i^-(ρ+1),

in other words, (3) approximates Zipf's law with respect to the lower frequency words, the tail of the distribution. Other models, such as Good (1953), Waring-Herdan (Herdan 1960, Muller 1979) and Sichel (1975), have been put forward, all of which have Zipf's law as some special or limiting form. Unrelated to Zipf's law is the lognormal hypothesis, advanced for word frequency distributions by Carroll (1967, 1969), which gives rise to reasonable fits and is widely used in psycholinguistic research on word frequency effects in mental processing.

A problem that immediately arises in the context of the study of word frequency distributions concerns the fact that these distributions have two important characteristics which they share with other so-called large number of rare events (LNRE) distributions (Orlov and Chitashvili 1983, Chitashvili and Khmaladze 1989), namely that on the one hand a huge number of different word types appears, and that on the other hand it is observed that while some events have reasonably stable frequencies, others occur only once, twice, etc. Crucially, these rare events occupy a significant portion of the list of all types observed. The presence of such large numbers of very low frequency types effects a significant bias between the rank-probability distribution and the rank-frequency distributions, leading to the contradiction of the common mean of the law of large numbers, so that expressions concerning frequencies cannot be taken to approximate expressions concerning probabilities. The fact that for LNRE distributions the rank-probability distributions cannot be reliably estimated on the basis of rank-frequency distributions is one source of the lack of goodness-of-fit often observed when various distribution 'laws' are applied to empirical data. Better results are obtained with Zipfian models when Orlov and Chitashvili's (1983) extended generalized Zipf's law is used.
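For concreteness, the three rank-frequency 'laws' in (1)-(3), in the form reconstructed above, can be written down directly. The fragment below is an illustration only (the parameter names K, B, g, A and rho mirror that reconstruction); the final loop checks numerically that the Yule distribution has the Zipf-like tail mentioned above.

```python
from math import lgamma, exp, gamma

def zipf(i, K, g):
    """Zipf's law (1): f_i = K / i**g."""
    return K / i**g

def zipf_mandelbrot(i, K, B, g):
    """Mandelbrot's extension (2): f_i = K / (B + i)**g."""
    return K / (B + i)**g

def yule(i, A, rho):
    """Yule distribution (3): f_i = A * Beta(i, rho + 1), via log-Gamma for stability."""
    log_beta = lgamma(i) + lgamma(rho + 1) - lgamma(i + rho + 1)
    return A * exp(log_beta)

# Head of the distribution under (1) and (2) with toy parameters:
for i in (1, 2, 5, 10):
    print(i, round(zipf(i, 1000, 1.0), 1), round(zipf_mandelbrot(i, 1000, 2.5, 1.0), 1))

# Tail behaviour: for large i, yule(i) is close to Gamma(rho + 1) * i**-(rho + 1).
for i in (10, 100, 1000):
    print(i, yule(i, 1.0, 1.0), gamma(2.0) * i**-2.0)
```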
A second problem which arises when the ap- propriateness of the various lexical models is 271 considered, the central issue of the present dis- cussion, concerns the similarity relations among words in lexical distributions. These empirical similarity relations, as observed for large corpora of words, impose additional criteria on the ad- equacy of models for word frequency distribu- tions. SIMILARITY RELATIONS There is a growing consensus in psycholinguis- tic research that word recognition depends not only on properties of the target word (e.g. its length and frequency), but also upon the number and nature of its lexical competitors or neigh- bors. The first to study similarity relations among lexical competitors in the lexicon in re- lation to lexical frequency were Landauer and Streeter (1973). Let a seighbor be a word that differs in exactly one phoneme (or letter) from a given target string, and let the neighborhood be the set of all neighbors, i.e. the set of all words at Hamming distance 1 from the target. Landauer and Streeter observed that (1) high- frequency words have more neighbors than low- frequency words (the neighborhood density ef- fect), and that (2) high-frequency words have higher-frequency neighbors than low-frequency words (the neighborhood frequency effect). In order to facilitate statistical analysis, it is con- venient to restate the neighborhood frequency effect as a correlation between the target's num- ber of neighbors and the frequencies of these neighbors, rather than as a relation between the target's frequency and the frequencies of its neighbors -- targets with many neighbors having higher frequency neighbors, and hence a higher mean neighborhood frequency .f,~ than targets with few neighbors. In fact, both the neighbor- hood density and the neighborhood frequency effect are descriptions of a single property of lexical space, namely that its dense similarity regions are populated by the higher frequency types. A crucial property of word frequency dis- tributions is that the lexical similarity effects oc- cur not only across but also within word lengths. Figure 1A displays the rank-frequency distri- bution of Dutch monomorphemic phonologically represented stems, function words excluded, and charts the lexical similarity effects of the subset of words with length 4 by means of boxplots. These show the mean (dotted line), the median, the upper and lower quartiles, the most extreme data points within 1.5 times the interquartile range, and remaining outliers for the number of neighbors (#n) against target frequency (neigh- borhood density), and for the mean frequency of the neighbors of a target (f,~) against the hum- Table i: Spearman rank correlation analysis of the neighborhood density and frequency effects for empirical and theoretical words of length 4. Dutch Mand. Mand.-Simon dens. freq. r, 0.24 0.65 0.31 0.06 0.42 O. I0 ~e t 9.16 68.58 11.97 df 1340 6423 1348 rs 0.51 0.62 0.61 2 0.26 0.38 0.37 7" i t 21.65 63.02 28.22 df 1340 6423 1348 ber of neighbors of the target (neighborhood fre- quency), for targets grouped into frequency and density classes respectively. Observe that the rank-frequency distribution of monomorphemic Dutch words does not show up as a straight line in a double logarithmic plot, that there is a small neighborhood density effect and a some- what more pronounced neighborhood frequency effect. 
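The two measures just plotted, the number of neighbors #n and the mean neighbor frequency f_n, can be computed straightforwardly from a frequency list. The sketch below is not the CELEX-based procedure used in this study; the bucketing trick, the helper names and the toy forms and counts are all invented for illustration.

```python
from collections import defaultdict
from statistics import mean

def neighbors_by_word(freqs, length):
    """For every word of the given length, collect the words at Hamming
    distance 1 (same length, exactly one segment substituted)."""
    words = [w for w in freqs if len(w) == length]
    buckets = defaultdict(list)            # index words by (position, word with a hole)
    for w in words:
        for i in range(length):
            buckets[(i, w[:i] + "_" + w[i + 1:])].append(w)
    nbs = {w: set() for w in words}
    for bucket in buckets.values():
        for w in bucket:
            nbs[w].update(x for x in bucket if x != w)
    return nbs

def density_and_frequency(freqs, length):
    """Return, per target word: (#n, mean frequency of its neighbors)."""
    nbs = neighbors_by_word(freqs, length)
    return {w: (len(ns), mean(freqs[x] for x in ns) if ns else 0.0)
            for w, ns in nbs.items()}

# Toy example with made-up forms and counts:
toy = {"kat": 120, "kas": 40, "kap": 15, "rat": 60, "lat": 5, "pot": 80}
print(density_and_frequency(toy, 3))
```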
Figure 1 (plots not reproduced; caption and panel descriptions follow): Rank-frequency and lexical similarity characteristics of the empirical and two simulated distributions of Dutch phonological stems. From left to right: double logarithmic plot of rank i versus frequency f_i; boxplot of frequency class FC (1:1; 2:2-4; 3:5-12; 4:13-33; 5:34-90; 6:91-244; 7:245+) versus number of neighbors #n (length 4); and boxplot of density class DC (1:1-3; 2:4-6; 3:7-9; 4:10-12; 5:13-15; 6:16-19; 7:20+) versus mean frequency of neighbors f_n (length 4). (Note that not all axes are scaled equally across the three distributions.) N: number of tokens, V: number of types.
A: Dutch monomorphemic stems in the CELEX database, standardized at 1,000,000. For the total distribution, N = 224567, V = 4455. For strings of length 4, N = 64854, V = 1342.
B: Simulated Dutch monomorphemic stems, as generated by a Markov process. For the total distribution, N = 224567, V = 58300. For strings of length 4, N = 74618, V = 6425.
C: Simulated Dutch monomorphemic stems, as generated by the Mandelbrot-Simon model (α = 0.01, V_c = 2000). For the total distribution, N = 291944, V = 4848. For strings of length 4, N = 123317, V = 1350.

A Spearman rank correlation analysis reveals that the lexical similarity effects of figure 1A are statistically highly significant trends (p << 0.001), even though the correlations themselves are quite weak (see table 1, column 1): in the case of lexical density only 6% of the variance is explained.¹

¹Note that the larger value of r_s for the neighborhood frequency effect is a direct consequence of the fact that the frequencies of the neighbors of each target are averaged before they enter into the calculations, masking much of the variance.

STOCHASTIC MODELLING

By themselves, models of the kind proposed by Zipf, Herdan and Muller or Sichel, even though they may yield reasonable fits to particular word frequency distributions, have no bearing on the similarity relations in the lexicon. The only model that is promising in this respect is that of Mandelbrot (1953, 1962). Mandelbrot derived his modification of Zipf's law (2) on the basis of a Markovian model for generating words as strings of letters, in combination with some assumptions concerning the cost of transmitting the words generated in some optimal code, giving a precise interpretation to Zipf's 'law of abbreviation'. Miller (1957), wishing to avoid a teleological explanation, showed that the Zipf-Mandelbrot law can also be derived under slightly different assumptions. Interestingly, Nusbaum (1985), on the basis of simulation results with a slightly different neighbor definition, reports that the neighborhood density and neighborhood frequency effects occur within
Unfortu- nately, he leaves unexplained why these effects occur, and to what extent his simulation is a realistic model of lexical items as used in real speech. In order to come to a more precise understand- ing of the source and nature of the lexical simi- larity effects in natural language we studied two stochastic models by means of computer simu- lations. We first discuss the Markovian model figuring in Mandelbrot's derivation of (2). Consider a first-order Markov process. Let A = {0,1,...,k} be the set of phonemes of the language, with 0 representing the terminat- ing character space, and let T ~ : (P~j)i,jeA with P00 = 0. If X,~ is the letter in the r, th position of a string, we define P(Xo = i) = po~, i E A. Let y be a finite string (/o,/1,...,/m-z) for m E N and define X (m) := (Xo, XI,... ,Xm-1), then Pv := p(X(") = l~) = Po~01~0~l...l~.._0~,_,. (4) The string types of varying length m, terminat- ing with the space and without any intervening space characters, constitute the words of the the- oretical vocabulary s,,, := {(io, i~,...,~,,_=,o): ij E A \ O,j =O,I,...,m- 2, mE N}. With N~ the token frequency of type y and V the number of different types, the vec- tor (N~,N~= , .... N~v) is multinomially dis- tributed. Focussing on the neighborhood den- sity effect, and defining the neighborhood of a target string yt for fixed length rn as Ct := ~y E such we have that the of Yt equals S,,, : 3!i e {0, 1,..., m - 2} that yl ¢ yt} , expected number of neighbors E[V(Ct)] = ~ {1 - (1 - p~)N}, (5) IIEC, with N denoting the number of trials (i.e. the number of tokens sampled). Note that when the transition matrix 7 ) defines a uniform distribu- tion (all pi# equal), we immediately have that the expected neighborhood density for length rnl is identical for all targets Yt, while for length m~ > rnl the expected density will be less than that at length ml, since p(,n=) < p(,m) given (4). With E[Ny] = Np~, we find that the neigh- borhood density effect does occur across word lengths, even though the transition probabilities are uniformly distributed. In order to obtain a realistic, non-trivial the- oretical word distribution comparable with the empirical data of figure 1A, the transition matrix 7 ~ was constructed such that it generated a sub- set of phonotactically legal (possible) monomor- phematic strings of Dutch by conditioning con- sonant CA in the string X~XjC~ on Xj and the segmental nature (C or V) of Xi, while vowels were conditioned on the preceding segment only. This procedure allowed us to differentiate be- tween e.g. phonotactically legal word initial kn and illegal word final k• sequences, at the same time avoiding full conditioning on two preced- ing segments, which, for four-letter words, would come uncomfortably close to building the prob- abilities of the individual words in the database into the model. The rank-frequency distribution of 58300 types and 224567 tokens (disregarding strings of length 1) obtained by means of this (second or- der) Markov process shows up in a double Iog- arithrnic plot as roughly linear (figure IB). Al- though the curve has the general Zipfian shape, the deviations at head and tail are present by ne- cessity in the light of Rouault (1978). A compar- ison with figure 1A reveals that the large surplus of very low frequency types is highly unsatisfac- tory. The model (given the present transition matrix) fails to replicate the high rate of use of the relatively limited set of words of natural lan- guage. 
The lexlcal similarity effects as they emerge for the simulated strings of length 4 are displayed in the boxplots of figure lB. A very pronounced neighborhood density effect is found, in combi- nation with a subdued neighborhood frequency effect (see table 1, column 2). The appearance of the neighborhood density effect within a fixed string length in the Marko- vian scheme with non-uniformly distributed p~j can be readily understood in the simple case of the first order Markov model outlined above. Since neighbors are obtained by substitution of a single element of the phoneme inventory A, two consecutive transitional probabilities of (4) have to be replaced. For increasing target prob- ability p~,, the constituting transition probabil- ities Pij must increase, so that, especially for non-trivial m, the neighbors y E Ct will gen- erally be protected against low probabilities py. Consequently, by (5), for fixed length m, higher frequency words will have more neighbors than lower frequency words for non-uniformly dis- tributed transition probabilities. The fact that the lexical similarity effects emerge for target strings of the same length is a strong point in favour of a Markovian source 274 for word frequency distributions. Unfortunately, comparing the results of figure 1B with those of figure 1A, it appears that the effects are of the wrong order of magnitude: the neighborhood density effect is far too strong, the neighborhood frequency effect somewhat too weak. The source of this distortion can be traced to the extremely large number of types generated (6425) for a number of tokens (74618) for which the empirical data (64854 tokens) allow only 1342 types. This large surplus of types gives rise to an inflated neighborhood density effect, with the concomi- tant effect that neighborhood frequency is scaled down. Rather than attempting to address this issue by changing the transition matrix by using a more constrained but less realistic data set, another option is explored here, namely the idea to supplement the Markovian stochastic process with a second stochastic process developed by Simon (1955), by means of which the intensive use can be modelled to which the word types of natural language are put. Consider the frequency distribution of e.g. a corpus that is being compiled, and assume that at some stage of compilation N word tokens have been observed. Let n (Jr) be the number of word types that have occurred exactly r times in these first N words. If we allow for the possibilities that both new types can be sampled, and old types can be re-used, Simon's model in its sim- plest form is obtained under the three assump- tions that (1) the probability that the (N + 1)-st word is a type that has appeared exactly r times is proportional to r~ Iv), the summed token fre- quencies of all types with token frequency r at stage N, that (2) there is a constant probability c~ that the (N-f 1)-st word represents a new type, and that (3) all frequencies grow proportionaly with N, so that n~ (Iv+l) N + 1 g~'-----V = "-W-- for all r, lv. Simon (1955) shows that the Yule-distribution (3) follows from these assumptions. When the third assumption is replaced by the assumptions that word types are dropped with a probabil- ity proportional to their token frequency, and that old words are dropped at the same rate at which new word types are introduced so that the total number of tokens in the distribution is a constant, the Yule-distribution is again found to follow (Simon 1960). 
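Simon's process in the simple form just described is easy to simulate. The sketch below covers only assumptions (1) and (2) — with probability alpha a new type is introduced, otherwise an existing type is re-used with probability proportional to its token frequency — and is not the Mandelbrot-Simon model developed below; names and the toy run are illustrative.

```python
import random
from collections import Counter

def simon_process(n_tokens, alpha, new_type, rng=random):
    """Simon (1955) in its simplest form: with probability alpha the next token
    is a new type (drawn from a caller-supplied generator); otherwise an
    existing type is re-used with probability proportional to its current
    token frequency."""
    tokens, freq = [], Counter()
    for _ in range(n_tokens):
        if not tokens or rng.random() < alpha:
            w = new_type()
        else:
            w = rng.choice(tokens)         # re-use proportional to token frequency
        tokens.append(w)
        freq[w] += 1
    return freq

# Toy run: new types are just consecutive integers.
counter = iter(range(10**6))
dist = simon_process(20000, alpha=0.01, new_type=lambda: next(counter))
spectrum = Counter(dist.values())          # n_r: number of types seen exactly r times
print(sorted(spectrum.items())[:10])
```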
By itself, this stochastic process has no explanatory value with respect to the similarity relations between words. It specifies use and re-use of word types, without any reference to segmental constituency or length. However, when a Markovian process is fitted as a front end to Simon's stochastic process, a hybrid model results that has the desired properties, since the latter process can be used to force the required high intensity of use on the types of its input distribution. The Markovian front end of the model can be thought of as defining a probability distribution that reflects the ease with which words can be pronounced by the human vocal tract, an implementation of phonotaxis. The second component of the model can be viewed as simulating interfering factors pertaining to language use. Extralinguistic factors codetermine the extent to which words are put to use, independently of the slot occupied by these words in the network of similarity relations,² and may effect a substantial reduction of the lexical similarity effects.

²For instance, the Dutch word kuip, 'barrel', is a low-frequency type in the present-day language, due to the fact that its denotatum has almost completely dropped out of use. Nevertheless, it was a high-frequency word in earlier centuries, to which the high frequency of the surname Kuiper bears witness.

Qualitatively satisfying results were obtained with this 'Mandelbrot-Simon' stochastic model, using the transition matrix of figure 1B for the Markovian front end and fixing Simon's birth rate α at 0.01.³ An additional parameter, V_c, the critical number of types for which the switch from the front end to what we will refer to as the component of use is made, was fixed at 2000.

³The new types entering the distribution at rate α were generated by means of the transition matrix of figure 1B.

Figure 1C shows that both the general shape of the rank-frequency curve in a double logarithmic grid, as well as the lexical similarity effects (table 1, column 3), are highly similar to the empirical observations (figure 1A). Moreover, the overall number of types (4848) and the number of types of length 4 (1350) closely approximate the empirical numbers of types (4455 and 1342 respectively), and the same holds for the overall numbers of tokens (291944 and 224567 respectively). Only the number of tokens of length 4 is overestimated by a factor 2. Nevertheless, the type-token ratio is far more balanced than in the original Markovian scheme. Given that the transition matrix models only part of the phonotaxis of Dutch, a perfect match between the theoretical and empirical distributions is not to be expected.

The present results were obtained by implementing Simon's stochastic model in a slightly modified form, however. Simon's derivation of the Yule-distribution builds on the assumption that each r grows proportionally with N, an assumption that does not lend itself to implementation in a stochastic process. Without this assumption, rank-frequency distributions are generated that depart significantly from the empirical rank-frequency curve, the highest frequency words attracting a very large proportion of all tokens. By replacing Simon's assumptions 1 and 3 by the 'rule of usage' that the probability that the (N+1)-st word is a type that has appeared exactly r times is proportional to

    H_r := (r · n_r / N) log(N / (r · n_r)),    (6)

theoretical rank-frequency distributions of the desired form can be obtained. Writing

    p(r) := r · n_r / N

for the probability of re-using any type that has been used r times before, H_r can be interpreted as the contribution of all types with frequency r to the total entropy H of the distribution of ranks r, i.e. to the average amount of information

    H = Σ_r H_r = -Σ_r p(r) log p(r).

Selection of ranks according to (6) rather than proportional to r · n_r (Simon's assumption 1) ensures that the highest ranks r have lowered probabilities of being sampled, at the same time slightly raising the probabilities of the intermediate ranks r. For instance, the 58 highest ranks of the distribution of figure 1C have somewhat raised, the complementary 212 ranks somewhat lowered probability of being sampled. The advantage of using (6) is that unnatural rank-frequency distributions in which a small number of types assume exceedingly high token frequencies are avoided. (A schematic sketch of one selection step under (6) is given below.)

The proposed rule of usage can be viewed as a means to obtain a better trade-off in the distribution between maximalization of information transmission and optimalization of the cost of coding the information. To see this, consider an individual word type y. In order to minimalize the cost of coding C(y) = -log(Pr(y)), high-frequency words should be re-used. Unfortunately, these high-frequency words have the lowest information content. However, it can be shown that maximalization of information transmission requires the re-use of the lowest frequency types (H_r is maximal for uniformly distributed p(r)). Thus we have two opposing requirements, which balance out in favor of a more intensive use of the lower and intermediate frequency ranges when selection of ranks is proportional to (6).

The 'rule of usage' (6) implies that higher frequency words contribute less to the average amount of information than might be expected on the basis of their relative sample frequencies. Interestingly, there is independent evidence for this prediction. It is well known that the higher-frequency types have more (shades of) meaning(s) than lower-frequency words (see e.g. Reder, Anderson and Bjork 1974, Paivio, Yuille and Madigan 1968). A larger number of meanings is correlated with increased contextual dependency for interpretation. Hence the amount of information contributed by such types out of context (under conditions of statistical independence) is less than what their relative sample frequencies suggest, exactly as modelled by our rule of usage.

Note that this semantic motivation for selection proportional to H_r makes it possible to avoid invoking external principles such as 'least effort' or 'optimal coding' in the mathematical definition of the model, principles that have been criticized as straining one's credulity (Miller 1957).⁴

FUNCTION WORDS

Up till now, we have focused on the modelling of monomorphemic Dutch words, to the exclusion of function words and morphologically complex words. One of the reasons for this approach concerns the way in which the shape of the rank-frequency curves differs substantially depending on which kinds of words are included in the distribution. As shown in figure 2, the curve of monomorphemic words without function words is highly convex. When function words are added, the head of the tail is straightened out, while the addition of complex words brings the tail of the distribution (more or less) in line with Zipf's law. Depending on what kind of distribution is being modelled, different criteria of adequacy have to be met.
Interestingly, function words, -- articles, pro- nouns, conjunctions and prepositions, the so- called closed classes, among which we have also reckoned the auxiliary verbs -- typically show up as the shortest and most frequent (Zipf) words in frequency distributions. In fact, they are found with raised frequencies in the the empirical rank- frequency distribution when compared with the curve of content words only, as shown in the first 4In this respect, Miller's (1957) alternative derivation of (2) in terms of random spacing is unconvincing in the light of the phonotactlc constraints on word structure. 276 105 104 l0 s I02 101 I00 I0 5 lO s oe ee 104 104 • oo IO s lO s I02 102 101 I01 z . ~" i I0 ° , i , , , , , , , i 10 0 I0 ° 101 I0 = l0 s 104 l0 s I0 ° I01 I0= l0 s 104 l0 s I0 ° I01 I0= I0 ~ 104 l0 s Figure 2: Rank-frequency plots for Dutch phonological sterns. From left to right: monomorphemic words without function words, monomorphemic words and function words, complete distribution. two graphs of figure 2. Miller, Newman & Fried- man (1958), discussing the finding that the fre- quential characteristics of function words differ markedly from those of content words, argued that (1958:385) Inasmuch as the division into two classes of words was independent of the frequencies of the words, we might have expected it to simply divide the sam- ple in half, each half retaining the sta- tistical properties of the whole. Since this is clearly not the case, it is ob- vious that Mandelbrot's approach is incomplete. The general trends for all words combined seem to follow a stochastic pattern, but when we look at syntactic patterns, differences begin to appear which will require linguistic, rather than mere statistical, explana- tions. In the Mandelbrot-Simon model developed here, neither the Markovian front end nor the pro- posed rule of usage are able to model the ex- tremely high intensity of use of these function words correctly without unwished-for side effects on the distribution of content words. However, given that the semantics of function words are not subject to the loss of specificity that char- acterizes high-frequency content words, function words are not subject to selection proportional to H~. Instead, some form of selection propor- tional to rn~ probably is more appropriate here. MORPHOLOGY The Mandelbrot-Simon model has a single pa- rameter ~ that allows new words to enter the dis- tribution. Since the present theory is of a phono- logical rather than a morphological nature, this parameter models the (occasional) appearance of new simplex words in the language only, and cannot be used to model the influx of morpho- logically complex words. First, morphological word formation processes may give rise to consonant clusters that are per- mitted when they span morpheme boundaries, but that are inadmissible within single mor- phemes. This difference in phonotactic pattern- ing within and across morphemes already re- reales that morphologically complex words have a dLf[erent source than monomorpherpJc words. Second, each word formation process, whether compounding or affixation of sufr-txes like -mess and -ity, is characterized by its own degree of productivity. Quantitatively, differences in the degree of productivity amount to differences in the birth rates at which complex words appear in the vocabulary. 
Typically, such birth rates, which can be expressed as E[n~] where n~ and Nl , A r' denote the number of types occurring once only and the number of tokens of the frequency distributions of the corresponding morphologi- cal categories (Basyen 1989), assume values that are significantly higher that the birth rate c~ of monomorphemic words. Hence it is impossible to model the complete lexical distribution with- out a worked-out morphological component that specifies the word formation processes of the lan- guage and their degrees of productivity. While actual modelling of the complete distri- bution is beyond the scope of the present paper, we may note that the addition of birth rates for word formation processes to the model, neces- sitated by the additional large numbers of rare 277 words that appear in the complete distribution, ties in nicely with the fact that the frequency distributions of productive morphological cate- gories are prototypical LNRE distributions, for which the large values for the numbers of types occurring once or twice only are characteristic. With respect to the effect of morphological structure on the lexical similarity effects, we fi- nally note that in the empirical data the longer word lengths show up with sharply diminished neighborhood density. However, it appears that those longer words which do have neighbors are morphologically complex. Morphological struc- ture raises lexical density where the phonotaxis fails to do so: for long monomorphemic words the huge space of possible word types is sampled too sparcely for the lexical similarity effects to emerge. REFERENCES Baayen, R.H. 1989. A Corpus-Based Approach to Morphological Productivity. Statistical Anal- ysis and Psycholinguistic Interpretation. Diss. Vrije Universiteit, Amsterdam. Carroll, J.B. 1967. On Sampling from a Log- normal Model of Word Frequency Distribution. In: H.Ku~era 0 W.N.Francis 1967, 406-424. Carroll, 3.B. 1969. A Rationale for an Asymp- totic Lognormal Form of Word Frequency Distri- butions. Research Bulletin -- Educational Test. ing Service, Princeton, November 1969. Chitaivili, P~J. & Khmaladse, E.V. 1989. Sta- tistical Analysis of Large Number of Rare Events and Related Problems. ~Vansactions of the Tbil- isi Mathematical Instflute. Good, I.J. 1953. The population frequencies of species and the estimation of population param- eters, Biometrika 43, 45-63. Herdan, G. 1960. Type-toke~ Mathematics, The Hague, Mouton. Ku~era~ H. & Francis, W.N. 1967. Compa- Lational Analysis of Prese~t-Day American En- glish. Providence: Brown University Press. Landauer, T.K. & Streeter, L.A. 1973. Struc- tural differences between common and rare words: failure of equivalence assumptions for theories of word recognition, Journal of Verbal Learning and Verbal Behavior 12, 119-131. Mandelbrot, B. 1953. An informational the- ory of the statistical structure of language, in: W.Jackson (ed.), Communication Theory, But- terworths. Mandelbrot, B. 1962. On the theory of word frequencies and on related Markovian models of discourse, in: R.Jakobson, Structure of Lan- guage and its Mathematical Aspects. Proceedings of Symposia in Applied Mathematics Vol XII, Providence, Rhode Island, Americal Mathemat- ical Society, 190-219. Miller, G.A. 1954. Communication, Annual Review of Psychology 5, 401-420. Miller, G.A. 1957. Some effects of intermittent silence, The American Jo~trnal of Psychology 52, 311-314. Miller, G.A., Newman, E.B. & Friedman, E.A. 1958. 
Length-Frequency Statistics for Written English, Information and control 1, 370-389. Muller, Ch. 1979. Du nouveau sur les distri- butions lexicales: la formule de Waring-Herdan. In: Ch. Muller, Langue Frangaise et Linguis- tique Quantitative. Gen~ve: Slatkine, 177-195. Nusbaum, H.C. 1985. A stochastic account of the relationship between lexical density and word frequency, Research on Speech Perception Report # 1I, Indiana University. Orlov, J.K. & Chitashvili, R.Y. 1983. Gener- alized Z-distribution generating the well-known 'rank-distributions', Bulletin of the Academy of Sciences, Georgia 110.2, 269-272. Paivio, A., Yuille, J.C. & Madigan, S. 1968. Concreteness, Imagery and Meaningfulness Val- ues for 925 Nouns. Journal of Ezperimental Psy- chology Monograph 76, I, Pt. 2. Reder, L.M., Anderson, J.R. & Bjork, R.A. 1974. A Semantic Interpretation of Encoding Specificity. Journal of Ezperimental Psychology 102: 648-656. Rouault, A. 1978. Lot de Zipf et sources markoviennes, Ann. Inst. H.Poincare 14, 169- 188. Sichel, H.S. 1975. On a Distribution Law for Word Frequencies. Journal of Lhe American Sta- tistical Association 70, 542-547. Simon, H.A. 1955. On a class of skew distri- bution functions, Biometrika 42, 435-440. Simon, H.A. 1960. Some further notes on a class of skew distribution functions, Information and Control 3, 80-88. Zipf, G.K. 1935. The Psycho.Biology of Lan- guage, Boston, Houghton Mifflin. 278
FROM N-GRAMS TO COLLOCATIONS AN EVALUATION OF XTRACT Frank A. Smadja Department of Computer Science Columbia University New York, NY 10027 Abstract In previous papers we presented methods for retrieving collocations from large samples of texts. We described a tool, Xtract, that im- plements these methods and able to retrieve a wide range of collocations in a two stage process. These methods a.s well as other re- lated methods however have some limitations. Mainly, the produced collocations do not in- clude any kind of functional information and many of them are invalid. In this paper we introduce methods that address these issues. These methods are implemented in an added third stage to Xtract that examines the set of collocations retrieved during the previous two stages to both filter out a number of invalid col- locations and add useful syntactic information to the retained ones. By combining parsing and statistical techniques the addition of this third stage has raised the overall precision level of Xtract from 40% to 80% With a precision of 94%. In the paper we describe the methods and the evaluation experiments. 1 INTRODUCTION In the past, several approaches have been proposed to retrieve various types of collocations from the analysis of large samples of textual data. Pairwise associations (bigrams or 2-grams) (e.g., [Smadja, 1988], [Church and Hanks, 1989]) as well as n-word (n > 2) associations (or n-grams) (e.g., [Choueka el al., 1983], [Smadja and McKeown, 1990]) were retrieved. These techniques auto- matically produced large numbers of collocations along with statistical figures intended to reflect their relevance. However, none of these techniques provides functional in- formation along with the collocation. Also, the results produced often contained improper word associations re- flecting some spurious aspect of the training corpus that did not stand for true collocations. This paper addresses these two problems. Previous papers (e.g., [Smadja and McKeown, 1990]) introduced a. set of tecl)niques and a. tool, Xtract, that produces various types of collocations from a two- stage statistical analysis of large textual corpora briefly sketched in the next section. In Sections 3 and 4, we show how robust parsing technology can be used to both filter out a number of invalid collocations as well as add useful syntactic information to the retained ones. This filter/analyzer is implemented in a third stage of Xtract that automatically goes over a the output collocations to reject the invalid ones and label the valid ones with syn- tactic information. For example, if the first two stages of Xtract produce the collocation "make-decision," the goal of this third stage'is to identify it as a verb-object collocation. If no such syntactic relation is observed, then the collocation is rejected. In Section 5 we present an evaluation of Xtract as a collocation retrieval sys- tem. The addition of the third stage of Xtract has been evaluated to raise the precision of Xtract from 40% to 80°£ and it has a recall of 94%. In this paper we use ex- amples related to the word "takeover" from a 10 million word corpus containing stock market reports originating from the Associated Press newswire. 2 FIRST 2 STAGES OF XTRACT, PRODUCING N-GRAMS In afirst stage, Xtract uses statistical techniques to retrieve pairs of words (or bigrams) whose common ap- pearances within a single sentence are correlated in the corpus. 
A bigram is retrieved if its frequency of occur- rence is above a certain threshold and if the words are used in relatively rigid ways. Some bigrams produced by the first stage of Xtract are given in Table 1: the bigrams all contain the word "takeover" and an adjec- tive. In the table, the distance parameter indicates the usual distance between the two words. For example, distance = 1 indicates that the two words are fre- quently adjacent in the corpus. In a second stage, Xtract uses the output bi- grams to produce collocations involving more than two words (or n-grams). It examines all the sentences con- taining the bigram and analyzes the statistical distri- bution of words and parts of speech for each position around the pair. It retains words (or parts of speech) oc- cupying a position with probability greater than a given 279 threshold. For example, the bigram "average-industrial" produces the n-gram "the Dow Jones industrial average" since the words are always used within this compound in the training corpus. Example. outputs of the second stage of Xtraet are given in Figure 1. In the figure, the numbers on the left indicate the frequency of the n-grams in the corpus, NN indicates that. a noun is expected at this position, AT indicates that an article is expected, NP stands for a proper noun and VBD stands for a verb in the past tense. See [Smadja and McKeown, 1990] and [Smadja, 1991] for more details on these two stages. Table 1: Output of Stage 1 Wi hostile hostile corporate hostile unwanted potential unsolicited unsuccessful friendly takeover takeover big wj takeovers takeover takeovers takeovers takeover takeover takeover takeover takeover expensive big takeover distance 1 1 1 2 1 1 1 1 1 2 4 1 3 STAGE THREE: SYNTACTICALLY LABELING COLLOCATIONS In the past, Debili [Debili, 1982] parsed corpora of French texts to identify non-ambiguous predicate argument rela- tions. He then used these relations for disambiguation in parsing. Since then, the advent of robust parsers such as Cass [Abney, 1990], Fidditeh [Itindle, 1983] has made it possible to process large amounts of text with good per- formance. This enabled Itindle and Rooth [Hindle and Rooth, 1990], to improve Debili's work by using bigram statistics to enhance the task of prepositional phrase at- tachment. Combining statistical and parsing methods has also been done by Church and his colleagues. In [Church et al., 1989] and [Church'et ai., 1991] they con- sider predicate argument relations in the form of ques- tions such as What does a boat typically do? They are preprocessing a corpus with the Fiddlteh parser in order to statistically analyze the distribution of the predicates used with a given argument such as "boat." Our goal is different, since we analyze a set of collocations automatically produced by Xtract to either enrich them with syntactic information or reject them. For example, if, bigram collocation produced by Xtract involves a noun and a verb, the role of Stage 3 of Xtract is to determine whether it is a subject-verb or a verb- object collocation. If no such relation can be identified, then the collocation is rejected. This section presents the algorithm for Xtract Stage 3 in some detail. For illustrative purposes we use the example words takeover and thwart with a distance of 2. 3.1 DESCRIPTION OF THE ALGORITHM Input: A bigram with some distance information in- dicating the most probable distance between the two words. For example, takeover and thwart with a distance of 2. 
Output/Goah Either a syntactic label for the bigram or a rejection. In the case of takeover and thwart the collocation is accepted and its produced label is VO for verb-object. The algorithm works in the following 3 steps: 3.1.1 Step 1: PRODUCE TAGGED CONCORDANCES All the sentences in the corpus that contain the two words in this given position are produced. This is done with a concord,acing program which is part of Xtraet (see [Smadja, 1991]). The sentences are labeled with part of speech information by preprocessing the cor- pus with an automatic stochastic tagger. 1 3.1.2 Step 2: PARSE THE SENTENCES Each sentence is then processed by Cass, a bottom-up incremental parser [Abney, 1990]. 2 Cass takes input sentences labeled with part of speech and attempts to identify syntactic structure. One of Cass modules identifies predicate argument relations. We use this module to produce binary syntactic relations (or la- bels) such as "verb-object" (VO), %erb-subject" (VS), "noun-adjective" (N J), and "noun-noun" ( N N ). Con- sider Sentence (1) below and all the labels as produced by Cass on it. (1) "Under the recapitalization plan it proposed to thwart the takeover." label bigrarn SV it proposed NN recapitalization plan VO thwart takeover For each sentence in the concordance set, from the output of Cass, Xtract determines the syntactic relation of the two words among VO, SV, N J, NN and assigns this label to the sentence. If no such relation is observed, Xtract associates the label U (for undefined) to the sentence. We note label[ia~ the label associated 1For this, we use the part of speech tagger described in [Church, 1988]. This program was developed at Bell Labora- tories by Ken Church. UThe parser has been developed at Bell Communication Research by Steve Abney, Cass stands for Cascaded Analysis of Syntactic Structure. I am much grateful to Steve Abney to help us use and customize Cass for this work. 280 681 .... takeover bid ...... 310 .... takeover offer ...... 258 .... takeover attempt ..... 177 .... takeover battle ...... 154 ...... NN NN takeover defense ...... 153 .... takeover target ....... 119 ..... a possible takeover NN ...... 118 ....... takeover law ....... 109 ....... takeover rumors ...... 102 ....... takeover speculation ...... 84 .... takeover strategist ...... 69 ....... AT takeover fight .... . 62 ....... corporate takeover... 50 .... takeover proposals ...... 40 ....... Federated's poison pill takeover defense ...... 33 .... NN VBD a sweetened takeover offer from . NP... Figure 1: Some n-grams containing "takeover" with Sentence id. For example, the label for Sentence (1) is: label[l] - VO. 4 A LEXICOGRAPHIC EVALUATION 3.1.3 Step 3: REJECT OR LABEL COLLOCATION This last step consists of deciding on a label for the bigram from the set of label[i~'.s. For this, we count the frequency of each label for the bigram and perform a statistical analysis of this distribution. A collocation is accepted if the two seed words are consistently used with the same syntactic relation. More precisely, the collocation is accepted if and only if there is a label 12 ~: U satisfying the following inequation: [probability(labeliid ] = £)> T I in which T is a given threshold to be determined by the experimenter. A collocation is thus rejected if no valid label satisfies the inequation or if U satisfies it. Figure 2 lists some accepted collocations in the format produced by Xtract with their syntactic labels. For these examples, the threshold T was set to 80%. 
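As a concrete illustration of this last step, the following sketch implements the acceptance test just described, assuming the per-sentence labels from Step 2 are already available; the function and variable names are ours, not Xtract's.

```python
from collections import Counter

# Sketch of the Stage-3 decision step (illustrative only): given the syntactic label
# assigned to each concordance sentence for a bigram, accept the bigram with the
# dominant label if its relative frequency exceeds the threshold T, and reject it
# otherwise (including the case where the dominant label is U).

def label_collocation(sentence_labels, threshold=0.80):
    counts = Counter(sentence_labels)            # e.g. {"VO": 40, "U": 4}
    label, freq = counts.most_common(1)[0]
    if label != "U" and freq / len(sentence_labels) > threshold:
        return label                             # accepted, e.g. "VO"
    return None                                  # rejected

print(label_collocation(["VO"] * 40 + ["U"] * 4))   # -> "VO"
print(label_collocation(["VO", "SV", "U", "U"]))    # -> None
```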
For each collocation, the first line is the output of the first stage of Xtract. It is the seed bigram with the distance between the two words. The second line is the output of the second stage of Xtract, it is a multiple word collocation (or n-gram). The numbers on the left indicate the frequency of occurrence of the n-gram in the corpus. The third line indicates the syntactic label as determined by the third stage of Xtract. Finally, the last lines simply list an example sentence and the position of the collocation in the sentence. Such collocations can then be used for vari- ous purposes including lexicography, spelling correction, speech recognition and language generation. Ill [Smadja and McKeown, 1990] and [Smadja, 1991] we describe how they are used to build a lexicon for language gener- ation in the domain of stock market reports. The third stage of Xtract can thus be considered as a retrieval system which retrieves valid collocations from a set of candidates. This section describes an evaluation experiment of the third stage of Xtract as a retrieval system. Evaluation of retrieval systems is usually done with the help of two parameters: precision and recall [Salton, 1989]. Precision of a retrieval system is defined as the ratio of retrieved valid elements divided by the total number of retrieved elements [Salton, 1989]. It measures the quality of the retrieved material. Recall is defined as the ratio of retrieved valid elements divided by the total number of valid elements. It measures the effectiveness of the system. This section presents an eval- uation of the retrieval performance of the third stage of Xtract. 4.1 THE EVALUATION EXPERIMENT Deciding whether a given word combination is a valid or invahd collocation is actually a difficult task that is best done by a lexicographer. Jeffery Triggs is a lexicographer working for Oxford English Dictionary (OED) coordinating the North American Readers pro- gram of OED at Bell Communication Research. Jef- fery Triggs agreed to manually go over several thousands collocations, a We randomly selected a subset of about 4,000 collocations that contained the information compiled by Xtract after the first 2 stages. This data set was then the subject of the following experiment. We gave the 4,000 collocations to evaluate to the lexicographer, asking him to select the ones that he 3I am grateful to Jeffery whose professionalism and kind- ness helped me understand some of the difficulty of lexicog- raphy. Without him this evaluation would not have been possible. 281 takeover bid -1 681 .... takeover bid IN ..... Syntactic Label: NN 10 11 An investment partnership on Friday offered to sweeten its takeover bid for Gencorp Inc. takeover fight -1 69 ....... AT takeover fight IN ...... 69 Syntactic Label: NN 10 11 Later last year Hanson won a hostile 3.9 billion takeover fight for Imperial Group the giant British food tobacco and brewing conglomerate and raised more than 1.4 billion pounds from the sale of Imperial s Courage brewing operation and its leisure products businesses. takeover thwart 2 44 ..... to thwart AT takeover NN ....... 44 Syntactic Label: VO 13 11 The 48.50 a share offer announced Sunday is designed to thwart a takeover bid by GAF Corp. takeover make 2 68 ..... MD make a takeover NN . JJ ..... 68 Syntactic Label: VO 14 12 Meanwhile the North Carolina Senate approved a bill Tuesday that would make a takeover of North Carolina based companies more difficult and the House was expected to approve the measure before the end of the week. 
takeover related -1
59 .... takeover related ....... 59
Syntactic Label: SV
23 Among takeover related issues Kidde jumped 2 to 66.

Figure 2: Some examples of collocations with "takeover"

[Figure 3: Overlap of the manual and automatic evaluations — the two diagrams are not reproduced here.]

would consider for a domain-specific dictionary and to cross out the others. The lexicographer came up with three simple tags, YY, Y and N. Both Y and YY are good collocations, and N are bad collocations. The difference between YY and Y is that Y collocations are of better quality than YY collocations. YY collocations are often too specific to be included in a dictionary, or some words are missing, etc. After Stage 2, about 20% of the collocations are Y, about 20% are YY, and about 60% are N. This told us that the precision of Xtract at Stage 2 was only about 40%. Although this would seem like a poor precision, one should compare it with the much lower rates currently in practice in lexicography. For the OED, for example, the first stage roughly consists of reading numerous documents to identify new or interesting expressions. This task is performed by professional readers. For the OED, the readers for the American program alone produce some 10,000 expressions a month. These lists are then sent off to the dictionary and go through several rounds of careful analysis before actually being submitted to the dictionary. The ratio of proposed candidates to good candidates is usually low. For example, out of the 10,000 expressions proposed each month, less than 400 are serious candidates for the OED, which represents a current rate of 4%. Automatically producing lists of candidate expressions could actually be of great help to lexicographers, and even a precision of 40% would be helpful. Such lexicographic tools could, for example, help readers retrieve sublanguage-specific expressions by providing them with lists of candidate collocations. The lexicographer then manually examines the list to remove the irrelevant data. Even low precision is useful for lexicographers, as manual filtering is much faster than manual scanning of the documents [Marcus, 1990]. Such techniques are not able to replace readers though, as they are not designed to identify low-frequency expressions, whereas a human reader immediately identifies interesting expressions with as few as one occurrence. The second stage of this experiment was to use Xtract Stage 3 to filter out and label the sample set of collocations. As described in Section 3, there are several valid labels (VO, VS, NN, etc.). In this experiment, we grouped them under a single label: T. There is only one non-valid label: U (for unlabeled). A T collocation is thus accepted by Xtract Stage 3, and a U collocation is rejected. The results of the use of Stage 3 on the sample set of collocations are similar to the manual evaluation in terms of numbers: about 40% of the collocations were labeled (T) by Xtract Stage 3, and about 60% were rejected (U). Figure 3 shows the overlap of the classifications made by Xtract and the lexicographer. In the figure, the first diagram on the left represents the breakdown in T and U of each of the manual categories (YY, Y, and N). The diagram on the right represents the breakdown in YY, Y, and N of the T and U categories. For example, the first column of the diagram on the left represents the application of Xtract Stage 3 on the YY collocations.
It shows that 94% of the collocations accepted by the lexicographer were also accepted by Xtract. In other words, this means that the recall ofthe third stage of Xtract is 94%. The first column of the diagram on the right represents the lexicographic evaluation of the collo- cations automatically accepted by Xtract. It shows that about 80% of the T collocations were accepted by the lexicographer and that about 20% were rejected. This shows that precision was raised from 40% to 80% with the addition of Xtract Stage 3. In summary, these ex- periments allowed us to evaluate Stage 3 as a retrieval system. The results are: I Precision = 80% Recall = 94% ] 5 SUMMARY AND CONTRIBUTIONS In this paper, we described a new set of techniques for syntactically filtering and labeling collocations. Using such techniques for post processing the set of colloca- tions produced by Xtract has two major results. First, it adds syntax to the collocations which is necessary for computational use. Second, it provides considerable im- provement to the quality of the retrieved collocations as the precision of Xtract is raised from 40% to 80% with a recall of 94%. By combining statistical techniques with a sophis- ticated robust parser we have been able to design and implement some original techniques for the automatic extraction of collocations. Results so far are very en- couraging and they indicate that more efforts should be made at combining statistical techniques with more sym- bolic ones. ACKNOWLEDGMENTS The research reported in this paper was partially sup- ported by DARPA grant N00039-84-C-0165, by NSF grant IRT-84-51438 and by ONR grant N00014-89-J- 1782. Most of this work is also done in collaboration with Bell Communication Research, 445 South Street, Mor- ristown, N3 07960-1910. I wish to express my thanks to Kathy McKeown for her comments on the research presented in this paper. I also wish to thank Dor~e Seligmann and Michael Elhadad for the time they spent discussing this paper and other topics with me. References [Abney, 1990] S. Abney. Rapid Incremental Parsing with Repair. In Waterloo Conference on Electronic Text Research, 1990. [Choueka el al., 1983] Y. Choueka, T. Klein, and E. Neuwitz. Automatic Retrieval of Frequent Id- iomatic and Collocational Expressions in a Large Cot- 283 pus. Journal for Literary and Linguistic computing, 4:34-38, 1983. [Church and Hanks, 1989] K. Church and K. Hanks. Word Association Norms, Mutual Information, and Lexicography. In Proceedings of the 27th meeting of the A CL, pages 76-83. Association for Computational Linguistics, 1989. Also in Computational Linguistics, vol. 16.1, March 1990. [Church et at., 1989] K.W. Church, W. Gale, P. Hanks, and D. Hindle. Parsing, Word Associations and Typ- ical Predicate-Argument Relations. In Proceedings of the International Workshop on Parsing Technologies, pages 103-112, Carnegie Mellon University, Pitts- burgh, PA, 1989. Also appears in Masaru Tomita (ed.), Current Issues in Parsing Technology, pp. 103- 112, Kluwer Academic Publishers, Boston, MA, 1991. [Church et at., 1991] K.W. Church, W. Gale, P. Hanks, and D. Hindle. Using Statistics in Lexical Analysis. In Uri ~ernik, editor, Lexical Acquisition: Using on-line resources to build a lexicon. Lawrence Erlbaum, 1991. In press. [Church, 1988] K. Church. Stochastic Parts Prograln and Noun Phrase Parser for Unrestricted Text. In Proceedings of the Second Conference on Applied Nat- ural Language Processing, Austin, Texas, 1988. [Debili, 1982] F. Debili. 
Analyse Syntactico-Sémantique Fondée sur une Acquisition Automatique de Relations Lexicales Sémantiques. PhD thesis, Paris XI University, Orsay, France, 1982. Thèse de Doctorat d'État.
[Hindle and Rooth, 1990] D. Hindle and M. Rooth. Structural Ambiguity and Lexical Relations. In DARPA Speech and Natural Language Workshop, Hidden Valley, PA, June 1990.
[Hindle, 1983] D. Hindle. User Manual for Fidditch, a Deterministic Parser. Technical Memorandum 7590-142, Naval Research Laboratory, 1983.
[Marcus, 1990] M. Marcus. Tutorial on Tagging and Processing Large Textual Corpora. Presented at the 28th annual meeting of the ACL, June 1990.
[Salton, 1989] J. Salton. Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer. Addison-Wesley Publishing Company, NY, 1989.
[Smadja and McKeown, 1990] F. Smadja and K. McKeown. Automatically Extracting and Representing Collocations for Language Generation. In Proceedings of the 28th annual meeting of the ACL, Pittsburgh, PA, June 1990. Association for Computational Linguistics.
[Smadja, 1988] F. Smadja. Lexical Co-occurrence: The Missing Link in Language Acquisition. In Program and abstracts of the 15th International ALLC Conference of the Association for Literary and Linguistic Computing, Jerusalem, Israel, June 1988.
[Smadja, 1991] F. Smadja. Retrieving Collocational Knowledge from Textual Corpora. An Application: Language Generation. PhD thesis, Computer Science Department, Columbia University, New York, NY, April 1991.
1991
36
PREDICTING INTONATIONAL PHRASING FROM TEXT Michelle Q. Wang Churchill College Cambridge University Cambridge UK Julia Hirschberg AT&T Bell Laboratories 600 Mountain Avenue Murray Hill, NJ 07974 Abstract Determining the relationship between the intona- tional characteristics of an utterance and other features inferable from its text is important both for speech recognition and for speech synthesis. This work investigates the use of text analysis in predicting the location of intonational phrase boundaries in natural speech, through analyzing 298 utterances from the DARPA Air Travel In- formation Service database. For statistical model- ing, we employ Classification and Regression Tree (CART) techniques. We achieve success rates of just over 90%, representing a major improvement over other attempts at boundary prediction from unrestricted text. 1 Introduction The relationship between the intonational phras- ing of an utterance and other features which can be inferred from its transcription represents an important source of information for speech syn- thesis and speech recognition. In synthesis, more natural intonational phrasing can be assigned if text analysis can predict human phrasing perfor- mance. In recognition, better calculation of prob- able word durations is possible if the phrase-final- lengthening that precedes boundary sites can be predicted. Furthermore, the association of intona- tional features with syntactic and acoustic infor- mation can also be used to reduce the number of sentence hypotheses under consideration. Previous research on the location of intonational boundaries has largely focussed on the relation- ship between these prosodic boundaries and syn- tactic constituent boundaries. While current re- search acknowledges the role that semantic and discourse-level information play in boundary as- I We thank Michael Riley for helpful discussions. Code implementing the CART techniques employed here was written by Michael Riley and Daryi Pregibon. Part-of- speech tagging employed Ken Church's tagger, and syn- tactic analysis used Don Hindle's parser, Fiddltch. signment, most authors assume that syntactic con- figuration provides the basis for prosodic 'defaults' that may be overridden by semantic or discourse considerations. While most interest in boundary prediction has been focussed on synthesis (Gee and Grosjean, 1983; Bachenko and Fitzpatrick, 1990), currently there is considerable interest in predicting boundaries to aid recognition (Osten- doff et al., 1990; Steedman, 1990). The most successful empirical studies in boundary location have investigated how phrasing can disambiguate potentially syntactically ambiguous utterances in read speech (Lehiste, 1973; Ostendorf et al., 1990). Analysis based on corpora of natural speech (Ab tenberg, 1987) have so far reported very limited success and have assumed the availability of syn- tactic, semantic, and discourse-level information well beyond the capabilities of current NL systems to provide. To address the question of how boundaries are assigned in natural speech -- as well as the need for classifying boundaries from information that can be extracted automatically from text -- we examined a multi-speaker corpus of spontaneous elicited speech. We wanted to compare perfor- mance in the prediction of intonational bound- aries from information available through simple techniques of text analysis, to performance us- ing information currently available only come from hand labeling of transcriptions. 
To this end, we selected potential boundary predictors based upon hypotheses derived from our own observa- tions and from previous theoretical and practi- cal studies of boundary location. Our corpus for this investigation is 298 sentences from approxi- mately 770 sentences of the Texas Instruments- collected portion of the DARPA Air Travel In- formation Service (ATIS) database(DAR, 1990). For statistical modeling, we employ classification and regression tree techniques (CART) (Brieman et al., 1984), which provide cross-validated de- cision trees for boundary classification. We ob- tain (cross-validated) success rates of 90% for both automatically-generated information and hand- 285 labeled data on this sample, which represents a major improvement over previous attempts to predict intonational boundaries for spontaneous speech and equals or betters previous (hand- crafted) algorithms tested for read speech. Intonational Phrasing Intuitively, intonational phrasing divides an ut- terance into meaningful 'chunks' of information (Bolinger, 1989). Variation in phrasing can change the meaning hearers assign to tokens of a given sentence. For example, interpretation of a sen- tence like 'Bill doesn't drink because he's unhappy.' will change, depending upon whether it is uttered as one phrase or two. Uttered as a single phrase, this sentence is commonly interpreted as convey- ing that Bill does indeed drink -- but the cause of his drinking is not his unhappiness. Uttered as two phrases, it is more likely to convey that Bill does sot drink -- and the reason for his abstinence is his unhappiness. To characterize this phenomenon phonologi- cally, we adopt Pierrehumbert's theory of into- national description for English (Pierrehumbert, 1980). In this view, two levels of phrasing are sig- nificant in English intonational structure. Both types are composed of sequences of high and low tones in the FUNDAMENTAL FREQUENCY (f0) con- tour. An INTERMEDIATE (or minor) PHRASE con- slats of one or more PITCH ACCENTS (local f0 min- ima or maxima) plus a PHRASE ACCENT (a simple high or low tone which controls the pitch from the last pitch accent of one intermediate phrase to the beginning of the next intermediate phrase or the end of the utterance). INTONATIONAL (or major) PHRASES consist of one or more intermedi- ate phrases plus a final BOUNDARY TONE, which may also be high or low, and which occurs at the end of the phrase. Thus, an intonational phrase boundary necessarily coincides with an intermedi- ate phrase boundary, but not vice versa. While phrase boundaries are perceptual cate- gories, they are generally associated with certain physical characteristics of the speech signal. In addition to the tonal features described above, phrases may be identified by one of more of the following features: pauses (which may be filled or not), changes in amplitude, and lengthening of the final syllable in the phrase (sometimes ac- companied by glottalization of that syllable and perhaps preceding syllables). In general, ma- jor phrase boundaries tend to be associated with longer pauses, greater tonal changes, and more fi- nal lengthening than minor boundaries. The Experiments The Corpus and Features Used in Analysis The corpus used in this analysis consists of 298 utterances (24 minutes of speech from 26 speak- ers) from the speech data collected by Texas In- struments for the DARPA Air Travel Information System (ATIS) spoken language system evaluation task. 
In a Wizard-of-Oz simulation, subjects were asked to make travel plans for an assigned task, providing spoken input and receiving teletype out- put. The quality of the ATIS corpus is extremely diverse. Speaker performance ranges from close to isolated-word speech to exceptional fluency. Many utterances contain hesitations and other disfluen- cies, as well as long pauses (greater than 3 sec. in some cases). To prepare this data for analysis, we labeled the speech prosodically by hand, noting location and type of intonational boundaries and presence or absence of pitch accents. Labeling was done from both the waveform and pitchtracks of each utter- ance. Each label file was checked by several la- belers. Two levels of boundary were labeled; in the analysis presented below, however, these are collapsed to a single category. We define our data points to consist of all po- tential boundary locations in an utterance, de- fined as each pair of adjacent words in the ut- terance < wi, wj >, where wi represents the word to the left of the potential boundary site and wj represents the word to the right. 2 Given the variability in performance we observed among speakers, an obvious variable to include in our analysis is speaker identity. While for applica- tions to speaker-independent recognition this vari- able would be uninstantiable, we nonetheless need to determine how important speaker idiosyncracy may be in boundary location. We found no signif- icant increase in predictive power when this vari- able is used. Thus, results presented below are speaker-independent. One easily obtainable class of variable involves temporal information. Temporal variables include utterance and phrase duration, and distance of the 2See the appendix for a partial list of variables em- ployed, which provides a key to the node labels for the prediction trees presented in Figures 1 and 2. 286 potential boundary from various strategic points in the utterance. Although it is tempting to as- sume that phrase boundaries represent a purely intonational phenomenon, it is possible that pro- cessing constraints help govern their occurrence. That is, longer utterances may tend to include more boundaries. Accordingly, we measure the length of each utterance both in seconds and in words. The distance of the boundary site from the beginning and end of the utterance is another variable which appears likely to be correlated with boundary location. The tendency to end a phrase may also be affected by the position of the poten- tial boundary site in the utterance. For example, it seems likely that positions very close to the be- ginning or end of an utterance might be unlikely positions for intonational boundaries. We measure this variable too, both in seconds and in words. The importance of phrase length has also been proposed (Gee and Grosjean, 1983; Bachenko and Fitzpatrick, 1990) as a determiner of boundary lo- cation. Simply put, it seems may be that consecu- tive phrases have roughly equal length. To capture this, we calculate the elapsed distance from the last boundary to the potential boundary site, di- vided by the length of the last phrase encountered, both in time and words. To obtain this informa- tion automatically would require us to factor prior boundary predictions into subsequent predictions. While this would be feasible, it is not straightfor- ward in our current classification strategy. So, to test the utility of this information, we have used observed boundary locations in our current anal- ysis. 
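The word-based versions of these variables can be computed directly from a transcription once the observed boundaries are known, as in the sketch below (our own illustration; the seconds-based variants would be computed the same way from word timestamps, and the field names here are not the ones used in the paper's figures).

```python
# Rough sketch of the word-based temporal/length variables described above, for one
# utterance. Names (total_words, dist_start_words, ...) are ours, not the paper's.

def boundary_site_features(words, observed_boundaries):
    """observed_boundaries: set of indices i such that a boundary follows word i."""
    feats = []
    last_boundary = -1                      # start of utterance
    last_phrase_len = None
    for i in range(len(words) - 1):         # potential site between w_i and w_{i+1}
        elapsed = i - last_boundary
        ratio = elapsed / last_phrase_len if last_phrase_len else None
        feats.append({
            "total_words": len(words),
            "dist_start_words": i + 1,      # words from start up to and including w_i
            "dist_end_words": len(words) - (i + 1),
            "phrase_len_ratio": ratio,      # elapsed length / length of last phrase
            "boundary": i in observed_boundaries,
        })
        if i in observed_boundaries:        # the ratio uses observed boundaries, as in the text
            last_phrase_len = elapsed
            last_boundary = i
    return feats

print(boundary_site_features("show me flights from boston to dallas".split(), {2}))
```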
As noted above, syntactic constituency infor- mation is generally considered a good predictor of phrasing information (Gee and Grosjean, 1983; Selkirk, 1984; Marcus and Hindle, 1985; Steed- man, 1990). Intuitively, we want to test the notion that some constituents may be more or less likely than others to be internally separated by intona- tional boundaries, and that some syntactic con- stituent boundaries may be more or less likely to coincide with intonational boundaries. To test the former, we examine the class of the lowest node in the parse tree to dominate both wi and wj, using Hindle's parser, Fidditch (1989) To test the latter we determine the class of the highest node in the parse tree to dominate wi, but not wj, and the class of the highest node in the tree to dominate wj but not wi. Word class has also been used often to predict boundary location, particularly in text-to-speech. The belief that phrase bound- aries rarely occur after function words forms the basis for most algorithms used to assign intona- tional phrasing for text-to-speech. Furthermore, we might expect that some words, such as preposi- tions and determiners, for example, do not consti- tute the typical end to an intonational phrase. We test these possibilities by examining part-of-speech in a window of four words surrounding each poten- tial phrase break, using Church's part-of-speech tagger (1988). Recall that each intermediate phrase is com- posed of one or more pitch accents plus a phrase accent, and each intonational phrase is composed of one or more intermediate phrases plus a bound- ary tone. Informal observation suggests that phrase boundaries are more likely to occur in some accent contexts than in others. For example, phrase boundaries between words that are deac- cented seem to occur much less frequently than boundaries between two accented words. To test this, we look at the pitch accent values of wi and wj for each < wi, wj >, comparing observed values with predicted pitch accent information obtained from (Hirschberg, 1990). In the analyses described below, we employ varying combinations of these variables to pre- dict intonational boundaries. We use classification and regression tree techniques to generate decision trees automatically from variable values provided. Classification and Regression Tree Techniques Classification and regression tree (CART) analy- sis (Brieman et al., 1984) generates decision trees from sets of continuous and discrete variables by using set of splitting rules, stopping rules, and prediction rules. These rules affect the internal nodes, subtree height, and terminal nodes, re- spectively. At each internal node, CART deter- mines which factor should govern the forking of two paths from that node. Furthermore, CART must decide which values of the factor to associate with each path. Ideally, the splitting rules should choose the factor and value split which minimizes the prediction error rate. The splitting rules in the implementation employed for this study (Ri- ley, 1989) approximate optimality by choosing at each node the split which minimizes the prediction error rate on the training data. In this implemen- tation, all these decisions are binary, based upon consideration of each possible binary partition of values of categorical variables and consideration of different cut-points for values of continuous vari- ables. 287 Stopping rules terminate the splitting process at each internal node. To determine the best tree, this implementation uses two sets of stopping rules. 
The first set is extremely conservative, re- sulting in an overly large tree, which usually lacks the generality necessary to account for data out- side of the training set. To compensate, the second rule set forms a sequence of subtrees. Each tree is grown on a sizable fraction of the training data and tested on the remaining portion. This step is repeated until the tree has been grown and tested on all of the data. The stopping rules thus have ac- cess to cross-validated error rates for each subtree. The subtree with the lowest rates then defines the stopping points for each path in the full tree. Trees described below all represent cross-validated data. The prediction rules work in a straightforward manner to add the necessary labels to the termi- nal nodes. For continuous variables, the rules cal- culate the mean of the data points classified to- gether at that node. For categorical variables, the rules choose the class that occurs most frequently among the data points. The success of these rules can be measured through estimates of deviation. In this implementation, the deviation for continu- ous variables is the sum of the squared error for the observations. The deviation for categorical vari- ables is simply the number of misclassified obser- vations. Results In analyzing boundary locations in our data, we have two goals in mind. First, we want to dis- cover the extent to which boundaries can be pre- dicted, given information which can be gener- ated automatically from the text of an utter- ance. Second, we want to learn how much predic- tive power can be gained by including additional sources of information which, at least currently, cannot be generated automatically from text. In discussing our results below, we compare predic- tions based upon automatically inferable informa- tion with those based upon hand-labeled data. We employ four different sets of variables dur- ing the analysis. The first set includes observed phonological information about pitch accent and prior boundary location, as well as automati- cally obtainable information. The success rate of boundary prediction from the variable set is ex- tremely high, with correct cross-validated classi- fication of 3330 out of 3677 potential boundary sites -- an overall success rate of 90% (Figure 1). Furthermore, there are only five decision points in the tree. Thus, the tree represents a clean, sim- ple model of phrase boundary prediction, assum- ing accurate phonological information. Turning to the tree itself, we that the ratio of current phrase length to prior phrase length is very important in boundary location. This variable alone (assuming that the boundary site occurs be- fore the end of the utterance) permits correct clas- sification of 2403 out of 2556 potential boundary sites. Occurrence of a phrase boundary thus ap- pears extremely unlikely in cases where its pres- ence would result in a phrase less than half the length of the preceding phrase. The first and last decision points in the tree are the most trivial. The first split indicates that utterances virtually always end with a boundary -- rather unsurpris- ing news. The last split shows the importance of distance from the beginning of the utterance in boundary location; boundaries are more likely to occur when more than 2 ½ seconds have elapsed from the start of the utterance. 3 The third node in the tree indicates that noun phrases form a tightly bound intonational unit. 
The fourth split in 1 shows the role of accent context in determining phrase boundary location. If wi is not accented, then it is unlikely that a phrase boundary will oc- cur after it. The significance of accenting in the phrase boundary classification tree leads to the question of whether or not predicted accents will have a similar impact on the paths of the tree. In the sec- ond analysis, we substituted predicted accent val- ues for observed values. Interestingly, the success rate of the classification remained approximately the same, at 90%. However, the number of splits in the resultant tree increased to nine and failed to include the accenting of wl as a factor in the clas- sification. A closer look at the accent predictions themselves reveals that the majority of misclas- sifications come from function words preceding a boundary. Although the accent prediction algo- rithm predicted that these words would be deac- cented, they were in fact accented. This appears to be an idiosyncracy of the corpus; such words generally occurred before relatively long pauses. Nevertheless, classification succeeds well in the ab- sence of accent information, perhaps suggesting that accent values may themselves be highly cor- related with other variables. For example, both pitch accent and boundary location appear sen- sitive to location of prior intonational boundaries and part-of-speech. 3This fact may be idiosyncratic to our data, given the fact that we observed a trend towards initial hesitations. 288 In the third analysis, we eliminate the dynamic boundary percentage measure. The result remains nearly as good as before, with a success rate of 89%. The proposed decision tree confirms the use- fulness of observed accent status of wi in bound- ary prediction. By itself (again assuming that the potential boundary site occurs before the end of the utterance), this factor accounts for 1590 out of 1638 potential boundary site classifications. This analysis also confirms the strength of the intona- tional ties among the components of noun phrases. In this tree, 536 out of 606 potential boundary sites receive final classification from this feature. We conclude our analysis by producing a clas- sification tree that uses automatically-inferrable information alone. For this analysis we use pre- dicted accent values instead of observed values and omit boundary distance percentage measures. Us- ing binary-valued accented predictions (i.e., are < wl, wj > accented or not), we obtain a suc- cess rate for boundary prediction of 89%, and using a four-valued distinction for predicted ac- cented (cliticized, deaccented, accented, 'NA') we increased this to 90%. The tree in Figure 2) presents the latter analysis. Figure 2 contains more nodes than the trees discussed above; more variables are used to ob- tain a similar classification percentage. Note that accent predictions are used trivially, to indicate sentence-final boundaries (ra='NA'). In figure 1, this function was performed by distance of poten- tial boundary site from end of utterance (at). The second split in the new tree does rely upon tem- poral distance -- this time, distance of boundary site from the beginning of the utterance. Together these measurements correctly predict nearly forty percent of the data (38.2%). Th classifier next uses a variable which has not appeared in earlier classifications -- the part-of-speech of wj. 
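For readers who want to reproduce this kind of analysis, the general shape of the tree-growing step can be sketched as follows. This uses scikit-learn's decision trees as a stand-in for the CART implementation of Riley (1989) actually used here, so the splitting and cross-validation details differ; the function and variable names are ours.

```python
# Illustrative stand-in for the CART modelling step: fit a binary decision tree on
# feature dicts (categorical values as strings, continuous values as numbers, no
# missing values) and report a cross-validated success rate.

from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def boundary_tree(feature_dicts, labels):
    vec = DictVectorizer(sparse=False)                # expands categorical variables
    X = vec.fit_transform(feature_dicts)
    tree = DecisionTreeClassifier(min_samples_leaf=20)
    scores = cross_val_score(tree, X, labels, cv=5)   # cross-validated accuracy
    return tree.fit(X, labels), vec, scores.mean()
```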
In 2, in the majority of cases (88%) where wj is a func- tion word other than 'to,' 'in,' or a conjunction (true for about half of potential boundary sites), a boundary does not occur. Part-of-speech ofwi and type of constituent dominating wi but not wj are further used to classify these items. This portion of the classification is reminiscent of the notion of 'function word group' used commonly in assigning prosody in text-to-speech, in which phrases are de- fined, roughly, from one function word to the next. Overall rate of the utterance and type of utterance appear in the tree, in addition to part-of-speech and constituency information, and distance of po- tential boundary site from beginning and end of utterance. In general, results of this first stage of analysis suggest -- encouragingly -- that there is considerable redundancy in the features predict- ing boundary location: when some features are unavailable, others can be used with similar rates of 8UCCe88. Discussion The application of CART techniques to the prob- lem of predicting and detecting phrasing bound- aries not only provides a classification procedure for predicting intonational boundaries from text, but it increases our understanding of the impor- tance of several among the numerous variables which might plausibly be related to boundary lo- cation. In future, we plan to extend the set of variables for analysis to include counts of stressed syllables, automatic NP-detection (Church, 1988), MUTUAL INFORMATION, GENERALIZED MUTUAL INFORMATION scores can serve as indicators of intonational phrase boundaries (Magerman and Marcus, 1990). We will also examine possible interactions among the statistically important variables which have emerged from our initial study. CART tech- niques have worked extremely well at classifying phrase boundaries and indicating which of a set of potential variables appear most important. How- ever, CART's step-wise treatment of variables, Ol>- timization heuristics, and dependency on binary splits obscure the possible relationships that ex- ist among the various factors. Now that we have discovered a set of variables which do well at pre- dicting intonational boundary location, we need to understand just how these variables interact. References Bengt Altenberg. 1987. Prosodic Patterns in Spo- ken English: Studies in the Correlation between Prosody and Grammar for Tezt-to-Speech Con- version, volume 76 of Land Studies in English. Lund University Press, Lund. J. Bachenko and E. Fitzpatrick. 1990. A compu- tational grammar of discourse-neutral prosodic phrasing in English. Computational Linguistics. To appear. Dwight Bolinger. 1989. Intonation and Its Uses: Melody in Grammar and Discourse. Edward Arnold, London. 289 Leo Brieman, Jerome H. Friedman, Richard A. Ol- shen, and Charles J. Stone• 1984. Classification and Regression Trees. Wadsworth & Brooks, Monterrey CA. K. W. Church. 1988. A stochastic parts pro- gram and noun phrase parser for unrestricted text. In Proceedings of the Second Conference on Applied Natural Language Processing, pages 136-143, Austin. Association for Computational Linguistics. DARPA. 1990. Proceedings of the DARPA Speech and Natural Language Workshop, Hidden Valley PA, June. J. P. Gee and F. Grosjean. 1983. Performance structures: A psycholinguistic and linguistic ap- praisal. Cognitive Psychology, 15:411-458. D. M. Hindle. 1989. Acquiring disambiguation rules from text. In Proceedings of the 27th An- nual Meeting, pages 118-125, Vancouver. 
Association for Computational Linguistics.
Julia Hirschberg. 1990. Assigning pitch accent in synthetic speech: The given/new distinction and deaccentability. In Proceedings of the Seventh National Conference, pages 952-957, Boston. American Association for Artificial Intelligence.
I. Lehiste. 1973. Phonetic disambiguation of syntactic ambiguity. Glossa, 7:197-222.
David M. Magerman and Mitchell P. Marcus. 1990. Parsing a natural language using mutual information statistics. In Proceedings of AAAI-90, pages 984-989. American Association for Artificial Intelligence.
Mitchell P. Marcus and Donald Hindle. 1985. A computational account of extra categorial elements in Japanese. In Papers presented at the First SDF Workshop in Japanese Syntax. System Development Foundation.
M. Ostendorf, P. Price, J. Bear, and C. W. Wightman. 1990. The use of relative duration in syntactic disambiguation. In Proceedings of the DARPA Speech and Natural Language Workshop. Morgan Kaufmann, June.
Janet B. Pierrehumbert. 1980. The Phonology and Phonetics of English Intonation. Ph.D. thesis, Massachusetts Institute of Technology, September.
Michael D. Riley. 1989. Some applications of tree-based modelling to speech and language. In Proceedings, DARPA Speech and Natural Language Workshop, October.
E. Selkirk. 1984. Phonology and Syntax. MIT Press, Cambridge MA.
M. Steedman. 1990. Structure and intonation in spoken language understanding. In Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics.

Appendix: Key to Figures
(variables are computed for each potential boundary site <wi, wj>)
type: utterance type
tt: total # seconds in utterance
tw: total # words in utterance
st: distance (sec.) from start to wj
et: distance (sec.) from wj to end
sw: distance (words) from start to wj
ew: distance (words) from wj to end
la: is wi accented or not / or cliticized, deaccented, accented
ra: is wj accented or not / or cliticized, deaccented, accented
per: [distance (words) from last boundary] / [length (words) of last phrase]
tper: [distance (sec.) from last boundary] / [length (sec.) of last phrase]
j{1-4}: part-of-speech of the four words around the site, where v = verb, b = be-verb, m = modifier, f = fn word, n = noun, p = preposition, w = WH
f{slr}: category of s = smallest constit dominating wi, wj; l = largest constit dominating wi, not wj; r = largest constit dominating wj, not wi; where m = modifier, d = determiner, v = verb, p = preposition, w = WH, n = noun, s = sentence, f = fn word

[Figure 1: Predictions from Automatically-Acquired and Observed Data, 90% — decision tree not reproduced]
[Figure 2: Phrase Boundary Predictions from Automatically-Inferred Information, 90% — decision tree not reproduced]
1991
37
A Preference-first Language Processor Integrating the Unification Grammar and Markov Language Model for Speech Recognition-ApplicationS Lee-Feng Chien**, K. J. Chen** and Lin-Shan Lee* * Dept. of Computer Science and Information Engineering, National Taiwan University,Taipei, Taiwan, Rep. of China, Tel: (02) 362-2444. ** The Institute of Information Science, Academia Sinica, Taipei, Taiwan, Rep. of China. A language processor is to find out a most promising sentence hypothesis for a given word lattice obtained from acoustic signal recognition. In this paper a new language processor is proposed, in which unification granunar and Markov language model are integrated in a word lattice parsing algorithm based on an augmented chart, and the island-driven parsing concept is combined with various preference-first parsing strategies defined by different construction principles and decision rules. Test results"show that significant improvements in both correct rate of recognition and computation speed can be achieved . 1. Introduction In many speech recognition applications, a word lattice is a partially ordered set of possible word hypotheses obtained from an acoustic signal processor. The purpose of a language processor is then, for an input word lattice, to find the most promising word sequence or sentence hypothesis as the output (Hayes, 1986; Tomita, 1986; O'Shaughnessy, 1989). Conventionally either grammatical or statistcal approaches were used in such language processors. However, the high degree of ambiguity and large number of noisy word hypotheses in the word lattices usually make the search space huge and correct identification of the output sentence hypothesis difficult, and the capabilities of a language processor based on either grammatical or statistical approaches alone were very often limited. Because the features of these two approaches are basically complementary, Derouault and Merialdo (Derouault, 1986) first proposed a unified model to combine them. But in this model these two approaches were applied primarily separately, selecting the output sentence hypothesis based on the product of two probabilities independently obtained from these two approaches. 293 In this paper a new language processor based on a recently proposed augmented chart parsing algorithm (Chien, 1990a) is presented, in which the grammatical approach of unification grammar (Sheiber, 1986) and the statistical approach of Markov language model (Jelinek, 1976) are properly integrated in a preference-first word lattice parsing algorithm. The augmented chart (Chien, 1990b) was extended from the conventional chart. It can represent a very complicated word lattice, so that the difficult word lattice parsing problem can be reduced to essentially a well-known chart parsing problem. Unification grammars, compared with other grarnmal~cal approaches, are more declarative and can better integrate syntactic and semantic information to eliminate illegal combinations; while Markov language models are in general both effective and simple. 
The new language processor proposed in this paper actually integrates the unification grammar and the Markov language model by a new preference-f'u-st parsing algorithm with various preference-first parsing strategies defined by different constituent construction principles and decision rules, such that the constituent selection and search directions in the parsing process can be more appropriately determined by Markovian probabilities, thus rejecting most noisy word hypotheses and significantly reducing the search space. Therefore the global structural synthesis capabilities of the unification grammar and the local relation estimation capabilities of the Markov language model are properly integrated. This makes the present language processor not sensitive at all to the increased number of noisy word hypotheses in a very large vocabulary environment. An experimental system for Mandarin speech recognition has been implemented (Lee, 1990) and tested, in which a very high correct rate of recognition (93.8%) was obtained at a very high processing speed (about 5 sec per sentence on an IBM PC/AT). This indicates significant improvements as compared to previously proposed models. The details of this new language processor will be presented in the following sections. 2. The Proposed Language Processor The language processor proposed in this paper is shown in Fig. 1, where an acoustic signal preprocessor is included to form a complete speech recognition system. The language processor consists of a language model and a parser. The language model properly integrates the unification grammar and the Markov language model, while the parser is defined based on the augmented chart and the preference-first parsing algorithm. The input speech signal is first processed by the acoustic signal preprocessor; the corresponding word lattice will thus be generated and constructed onto the augmented chart. The parser will then proceed to build possible constituents from the word lattice on the augmented chart in accordance with the language model and the preference-first parsing algorithm. Below, except the preference-first parsing algorithm presented in detail in the next section, all of other elements are briefly summarized. The Laneua~e Model The goal of the language model is to participate in the selection of candidate constituents for a sentence to be identified. The proposed language model is composed of a PATR-II-like unification grammar (Sheiber, 1986; Chien, 1990a) and a first-order Markov language model (Jelinek, 1976) and thus, combines many features of the grammatical and statistical language modeling approaches. The PATR-II-Iike unification grammar is used primarily to distinguish between well-formed, acceptable word sequences against ill-formed ones, and then to represent the structural phrases and categories, or to fred the intended meaning depending on different applications. The first-order Markov kmguage model, on the other hand, is used to guide the parser toward correct search directions, such that many noisy word hypotheses can be rejected and many unnecessary constituents can be avoided, and the most promising sentence hypothesis can thus be easily found. In this way the weakness in either the PATR-II-like unification grammar (Sheiber, 1986), e.g., the heavy reliance on rigid linguistic information, or the first-order Markov language model (Jelinek, 1976), e.g., the need for a large training corpus and the local prediction scope can also be effectively remedied. 
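The intended division of labour can be pictured with the toy sketch below: the unification grammar acts as a well-formedness filter, and the first-order Markov model supplies the score used to rank the surviving hypotheses. This is our own schematic illustration, not the authors' code; in the actual processor the two knowledge sources are interleaved inside the chart parser (Section 3) rather than applied in two separate passes, and the probability tables and the parses() predicate here are assumed to be given.

```python
import math

# Schematic of the grammar-as-filter / Markov-model-as-scorer combination described
# above. Assumes every needed unigram/bigram entry exists in the supplied tables.

def markov_log_prob(words, unigram, bigram):
    """First-order Markov score: P(w1) * prod P(w_k | w_{k-1}), in log space."""
    logp = math.log(unigram[words[0]])
    for prev, cur in zip(words, words[1:]):
        logp += math.log(bigram[(prev, cur)])
    return logp

def best_sentence(hypotheses, parses, unigram, bigram):
    """hypotheses: candidate word sequences; parses(words) -> True if grammatical."""
    grammatical = [h for h in hypotheses if parses(h)]
    return max(grammatical,
               key=lambda h: markov_log_prob(h, unigram, bigram),
               default=None)
```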
The Augmented Chart and the Word l~attic¢ Parsing Scheme Chart is an efficient and widely used working structure in many natural language processing systems (Kay, 1980; Thompson, 1984), but it is basically designed to parse a sequence of fixed and known words instead of an ambiguous word lattice. The concept of the augmented chart has recently been successfully developed such that it can be used to represent and parse a word lattice (Chien, 1990b). Any given input word lattice for parsing can be represented by the augmented chart through a mapping procedure, in which a minimum number of vertices are used to indicate the end points for all word hypotheses in the lattice, and an inactive edge is used to represent every word hypotheses. Also, specially designed jump edges are constructed to link some edges whose corresponding word hypotheses can possibly be connected but themselves are physically separated in the chart. In this way the basic operation of a chart parser can thus be properly performed on a word lattice. The difference is that two separated edges linked by a jump edge can also be combined as long as the required condition is satisfied. Note that in such a scheme, every constituents (edge) will be constructed only once, regardless of the fact that it may be shared by many different sentence hypotheses. A Sl~-ech r~ognition system Speeeh-lnpu | Acoustic signal | V0 ]~t~qroo~.,88or J The proposed |an, ,mlal~e processor The lan~rua~e model ,rd lattices The parser parsing I Th© I Lost promising sent© :ce hypothesis Fig. 1 An abstract diagram of the proposed language processor. 294 3. The Preference-first Parsing Algorithm The preference-first parsing algorithm is developed based on the augmented chart summarized above, so that the difficult word lattice parsing problem is reduced to essentially a well-known chart parsing problem. This parsing algorithm is a general algorithm, in which various preference-first parsing strategies defined by different construction principles and decision rules can be combined with the island-driven parsing concept, so that the constituent selection and search directions can be appropriately determined by Markovian probabilities, thus rejecting many noisy word hypotheses and significantly reducing the search space. In this way, not only can the features of the grammatical and statistical approaches be combined, but the effects of the two different approaches are reflected and integrated in a single algorithm such that overall performance can be appropriately optimized. Below, more details about the algorithm will be given. 
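Before turning to those details, the word-lattice-to-chart mapping summarized earlier can be pictured with a small data structure like the one below. This is our own toy rendering; the real augmented chart of Chien (1990b) also carries active edges and feature structures built during parsing.

```python
from dataclasses import dataclass, field

# Toy rendering of a word lattice mapped onto an augmented chart: one inactive edge
# per word hypothesis, vertices marking end points, and jump edges linking edges
# that can be adjacent even though they are physically separated in the chart.

@dataclass
class Edge:
    word: str
    start: int          # vertex index where the word hypothesis begins
    end: int            # vertex index where it ends
    prob: float         # score attached to the hypothesis

@dataclass
class AugmentedChart:
    edges: list = field(default_factory=list)
    jump_edges: set = field(default_factory=set)   # pairs of edge indices that may connect

    def add_word_hypothesis(self, word, start, end, prob):
        self.edges.append(Edge(word, start, end, prob))
        return len(self.edges) - 1

    def add_jump_edge(self, left_idx, right_idx):
        self.jump_edges.add((left_idx, right_idx))

chart = AugmentedChart()
a = chart.add_word_hypothesis("word1", 0, 2, 0.7)
b = chart.add_word_hypothesis("word2", 3, 5, 0.6)
chart.add_jump_edge(a, b)   # the two hypotheses can be combined despite the gap
```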
Probability Estimation for Constructed Constituents

In order to make the unification-based parsing algorithm also capable of handling the Markov language model, every constructed constituent has to be assigned a probability. In general, for each given constituent C a probability P(C) = P(Wc) is assigned, where Wc is the component word hypothesis sequence of C and P(Wc) can be evaluated from the Markov language model. Now, when an active constituent A and an inactive constituent I form a new constituent N, the probability P(N) can be evaluated from the probabilities P(A) and P(I). Let Wn, Wa, Wi be the component word hypothesis sequences of N, A, and I respectively. Without loss of generality, assume A is to the left of I, so that Wn = WaWi = wa1, ..., wam, wi1, ..., win, where wak is the k-th word hypothesis of Wa and wik the k-th word hypothesis of Wi. Then

P(Wn) = P(WaWi)
      = P(wa1) * Π_{2<=k<=m} P(wak | wa(k-1)) * P(wi1 | wam) * Π_{2<=k<=n} P(wik | wi(k-1))
      = P(Wa) * P(Wi) * [ P(wi1 | wam) / P(wi1) ].

This can be easily evaluated in each parsing step.

The Preference-first Construction Principles and Decision Rules

Since P(C) is assigned to every constituent C in the augmented chart, various parsing strategies can be developed for the preference-first parsing algorithm for different applications. For example, there can be various construction principles to determine the order of constituent construction for all possible candidate constituents. There can also be various decision rules to choose the output sentence among all of the constructed sentence constituents. Some examples for such construction principles and decision rules are listed in the following.

Example construction principles:
- random selection principle: at any time randomly select a candidate constituent to be constructed.
- probability selection principle: at any time the candidate constituent with the highest probability will be constructed first.
- length selection principle: at any time the candidate constituent with the largest number of component word hypotheses will be constructed first.
- length/probability selection principle: at any time the candidate constituent with the highest probability among those with the largest number of component word hypotheses will be constructed first.

Example decision rules:
- highest probability rule: after all grammatical sentence constituents have been found, the one with the highest probability is taken as the result.
- first-1 rule: the first grammatical sentence constituent obtained during the course of parsing is taken as the result.
- first-k rule: the sentence constituent with the highest probability among the first k constructed grammatical sentence constituents obtained during the course of parsing is taken as the result.

The performance of these various construction principles and decision rules will be discussed in Sections 5 and 6 based on experimental results.

4. The Experimental System

An experimental system based on the proposed language processor has been developed and tested on a small lexicon, a Markov language model, and a simple set of unification grammar rules for the Chinese language, although the present model is in fact language independent. The system is written in C language and runs on an IBM PC/AT. The lexicon used has a total of 1550 words. They are extracted from the primary school Chinese text books currently used in the Taiwan area, which are believed to cover the most frequently used words and most of the syntactic and semantic structures in everyday Chinese sentences.
Each word stored in lexicon (word entry) contains such information as the. word name, the pronunciations (the phonemes), the lexical categories and the corresponding feature structures. Information contained in each word entry is relatively simple except for the verb words, because verbs have complicated behavior and will play a central role in syntactic analysis, The unification grammar constructed includes about 60 rules. It is believed that these rules cover almost all of the sentences used in the primary school Chinese text books. The Markov language model is trained using the primary school Chinese text books as training corpus. Since there are no boundary markers between adjacent words in written Chinese sentences, each sentence in the corpus was first segmented into a corresponding word string before used in the model training. Moreover, the test data include 200 sentences randomly selected from 20 articles taken from several different magazines, newspapers and books published in Taiwan area. All the words used in the test sentences are included in the lexicon. 5. Test Results (I) -- Initial Preference-first Parsing Strategies The present preference-first language processor is a general model on which different parsing strategies defined by different construction principles and decision rules can be implemented. In this and the next sections, several attractive parsing strategies are proposed, tested and discussed under the test conditions presented above. Two initial tests, test I and II, were first performed to be used as the baseline for comparison in the following. In test I, the conventional unification-based grammatical analysis alone is used, in which all the sentence hypotheses obtained from the word lattice were parsed exhaustively and a grammatical sentence constituent was selected randomly as the result; while in test II the first-order Markov modeling approach alone is used, and a sentence hypothesis with the highest probability was selected as the result regardless of the grammatical structure. The correct rate of recognition is defined as the averaged percentage of the correct words in the output sentences. The correct rate of recognition and the approximated average time required are found to be 73.8% and 25 see for Test I, as well as 82.2% and 3 see for Test II, as indicated in the first two rows of Table 1. In all the following parsing strategies, both the unification grammar and the Markov language model will be integrated in the language model to obtain better results. The parsing strategy 1 uses the random selection principle and the highest probability rule ( as listed in Section 3), and the entire word lattice will be parsed exhaustively. The total number of constituents constructed during the course of parsing for each test sentence are also recorded. The results show that the correct rate of recognition can be as high as 98.3%. This indicates that the language processor based on the integration of the unification grammar and the Markov language model can in fact be very reliable. That is, most of the interferences due to the noisy word hypotheses are actually rejected by such an integration. However, the computation load required for such an exhaustive parsing strategy turns out to be very high (similar to that in Test 13, i.e., for each test sentence in average 305.9 constituents have to be constructed and it takes about 25 sec to process a sentence on the IBM PC/AT. 
Such computation requirements make parsing strategy 1 practically difficult for many applications. All these test data, together with the results for the other three parsing strategies 2-4, are listed in Table 1 for comparison. The basic concept of parsing strategy 2 (using the probability selection principle and the first-1 rule, as listed in Section 3) is to use the probabilities of the constituents to select the search direction such that a significant reduction in computation requirements can be achieved. The test results (in the fourth row of Table 1) show that with this strategy only 152.4 constituents on average are constructed for each test sentence and it takes only about 12 sec to process a sentence on the PC/AT, while the high correct rate of recognition of parsing strategy 1 is almost preserved, i.e., 96.0%. Therefore this strategy represents a very good trade-off: the computation requirements are reduced by a factor of 0.50 (the constituent reduction ratio in the second-to-last column of Table 1 is the ratio of the average number of built constituents to that of strategy 1), while the correct rate is only degraded by 2.3%. However, such a speed (12 sec for a sentence) is still too slow, especially if real-time operation is considered.

6. Test Results (II) -- Improved Best-first Parsing Strategies

In a further analysis, all of the constituents constructed by parsing strategy 1 were first divided into two classes: correct constituents and noisy constituents. A correct constituent is a constituent without any component noisy word hypothesis, while a noisy constituent is a constituent which is not correct. These two classes of constituents were then categorized according to their length (the number of word hypotheses in the constituent). The average probability values for each category of correct and noisy constituents were then evaluated. The results are plotted in Fig. 2, where the vertical axis shows the average probability values and the horizontal axis denotes the length of the constituent. Some observations can be made as follows. First, it can be seen that the two curves in Fig. 2 clearly diverge, especially for longer constituents, which implies that the Markovian probabilities can effectively discriminate the noisy constituents from the correct constituents (note that all of those constituents are grammatical), especially for longer constituents. This is exactly why parsing strategies 1 and 2 can provide very high correct rates. Furthermore, Fig. 2 also shows that in general the probabilities for shorter constituents are usually much higher than those for longer constituents. This means that with parsing strategy 2 almost all short constituents, whether noisy or correct, would be constructed first, and only those long noisy constituents with lower probability values can be rejected by parsing strategy 2. This thus leads to parsing strategies 3 and 4 discussed below. In parsing strategy 3 (using the length/probability selection principle and the first-1 rule, as listed in Section 3), the length of a constituent is considered first, because it is found that correct constituents have a much better chance of being obtained very quickly by means of the Markovian probabilities for longer constituents than for shorter constituents, as discussed above. In this way, the construction of the desired constituents is much faster and a very significant reduction in computation requirements can be achieved.
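Before turning to the results for parsing strategies 3 and 4, the length/probability selection principle and the first-k decision rule can be illustrated with a short, hedged sketch. This is not the authors' augmented-chart implementation: the agenda is modeled here as a heap of (length, log-probability, constituent) candidates, and the expand and is_sentence callbacks are hypothetical stand-ins for the chart operations.

```python
import heapq
import itertools


def preference_first_parse(initial, expand, is_sentence, k=3):
    """Preference-first search sketch.

    The length/probability selection principle is realized by ordering the
    agenda on (-length, -log_prob): the longest candidate constituents are
    constructed first, and ties are broken in favor of higher probability.
    The first-k decision rule collects the first k grammatical sentence
    constituents and returns the most probable of them (k=1 gives first-1).
    `initial` and the values yielded by `expand` are (length, log_prob,
    constituent) triples."""
    tie = itertools.count()  # tiebreaker so the heap never compares constituents
    agenda = []
    for length, logp, const in initial:
        heapq.heappush(agenda, (-length, -logp, next(tie), length, logp, const))

    sentences = []
    while agenda and len(sentences) < k:
        _, _, _, length, logp, const = heapq.heappop(agenda)
        if is_sentence(const):
            sentences.append((logp, const))
            continue
        for new_len, new_logp, new_const in expand(const):
            heapq.heappush(agenda, (-new_len, -new_logp, next(tie),
                                    new_len, new_logp, new_const))

    if not sentences:
        return None
    best_logp, best_const = max(sentences, key=lambda s: s[0])
    return best_const
```

Under this ordering, short noisy constituents no longer crowd out long correct ones, which is the intuition behind the improved strategies; choosing k=1 corresponds to parsing strategy 3 and k=3 to parsing strategy 4.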
The test results for parsing strategy 3, in the fifth row of Table 1, show that with this strategy only 70.2 constituents on average were constructed for a sentence, a constituent reduction ratio of 0.27 is obtained, and it takes only about 4 sec to process a sentence on the PC/AT, which is now very close to real time. However, the correct rate of recognition is seriously degraded, to as low as 85.8%, apparently because some correct constituents have been missed due to the high-speed construction principle. Fortunately, after a series of experiments, it was found that in this case the correct sentences very often appeared as the second or the third constructed sentences, if not the first. Therefore, parsing strategy 4 is proposed, in which everything is the same as in parsing strategy 3 except that the first-1 decision rule is replaced by the first-3 decision rule. In other words, those missed correct constituents can very possibly be picked up in the next few steps if the final decision is slightly delayed. The test results for parsing strategy 4, listed in the sixth row of Table 1, show that with this strategy the correct rate of recognition is improved to 93.8% while the computation complexity is still close to that of parsing strategy 3: the average number of constructed constituents for a sentence is 91.0, it takes about 5 sec to process a sentence, and a constituent reduction ratio of 0.29 is achieved. This is apparently a very attractive approach considering both the accuracy and the computation complexity. In fact, with parsing strategy 4, only those noisy word hypotheses which both have relatively high probabilities and can be unified with their neighboring word hypotheses can cause interference. This is why the noisy word hypothesis interference can be reduced, and the present approach is therefore not sensitive at all to the increased number of noisy word hypotheses in a very large vocabulary environment. Note that although intuitively the integration of grammatical and statistical approaches would imply more computation, here the preference-first algorithm in fact provides correct directions of search such that many noisy constituents are simply rejected, and the resulting reduction in computation complexity makes such an integration also very attractive in terms of computation requirements.

Table 1. Test results for the two initial tests and four parsing strategies (columns: construction principle; decision rule; correct rate of recognition; number of built constituents; constituent reduction ratio; approximated average time required, sec/sentence):
- Test I (unification grammar only): --; --; 73.8%; 305.9; 1.00; 25
- Test II (Markov language model only): --; --; 82.2%; --; --; 3
- Parsing strategy 1: random selection principle; highest probability rule; 98.3%; 305.9; 1.00; 25
- Parsing strategy 2: probability selection principle; first-1 rule; 96.0%; 152.4; 0.50; 12
- Parsing strategy 3: length/probability selection principle; first-1 rule; 85.8%; 70.2; 0.27; 4
- Parsing strategy 4: length/probability selection principle; first-3 rule; 93.8%; 91.0; 0.29; 5

7. Concluding Remarks

In this paper, we have proposed an efficient language processor for speech recognition applications, in which the unification grammar and the Markov language model are properly integrated in a preference-first parsing algorithm defined on an augmented chart.
Because the unification-based analysis eliminates all illegal combinations and the Markovian probabilities of the constituents indicate the correct direction of processing, a very high correct rate of recognition can be obtained. Meanwhile, many unnecessary computations can be effectively eliminated and very high processing speed obtained, due to the significant reduction of the huge search space. This preference-first language processor is quite general: many different parsing strategies, defined by appropriately chosen construction principles and decision rules, can be easily implemented for different speech recognition applications.

Fig. 2. The average probability values for the correct and noisy constituents with different lengths constructed by parsing strategy 1 (vertical axis: average probability value; horizontal axis: constituent length; two curves: correct constituents and noisy constituents).
FACTORIZATION OF LANGUAGE CONSTRAINTS IN SPEECH RECOGNITION Roberto Pieraccini and Chin-Hui Lee Speech Research Department AT&T Bell Laboratories Murray Hill, NJ 07974, USA ABSTRACT Integration of language constraints into a large vocabulary speech recognition system often leads to prohibitive complexity. We propose to factor the constraints into two components. The first is characterized by a covering grammar which is small and easily integrated into existing speech recognizers. The recognized string is then decoded by means of an efficient language post-processor in which the full set of constraints is imposed to correct possible errors introduced by the speech recognizer. 1. Introduction In the past, speech recognition has mostly been applied to small domain tasks in which language constraints can be characterized by regular grammars. All the knowledge sources required to perform speech recognition and understanding, including acoustic, phonetic, lexical, syntactic and semantic levels of knowledge, are often encoded in an integrated manner using a finite state network (FSN) representation. Speech recognition is then performed by finding the most likely path through the FSN so that the acoustic distance between the input utterance and the recognized string decoded from the most likely path is minimized. Such a procedure is also known as maximum likelihood decoding, and such systems are referred to as integrated systems. Integrated systems can generally achieve high accuracy mainly due to the fact that the decisions are delayed until enough information, derived from the knowledge sources, is available to the decoder. For example, in an integrated system there is no explicit segmentation into phonetic units or words during the decoding process. All the segmentation hypotheses consistent with the introduced constraints are carried on until the final decision is made in order to maximize a global function. An example of an integrated system was HARPY (Lowerre, 1980) which integrated multiple levels of knowledge into a single FSN. This produced relatively high performance for the time, but at the cost of multiplying out constraints in a manner that expanded the grammar beyond reasonable bounds for even moderately complex domains, and may not scale up to more complex tasks. Other examples of integrated systems may be found in Baker (1975) and Levinson (1980). On the other hand modular systems clearly separate the knowledge sources. Different from integrated systems, a modular system usually make an explicit use of the constraints at each level of knowledge for making hard decisions. For instance, in modular systems there is an explicit segmentation into phones during an early stage of the decoding, generally followed by lexical access, and by syntactic/semantic parsing. While a modular system, like for instance HWIM (Woods, 1976) or HEARSAY-II (Reddy, 1977) may be the only solution for extremely large tasks when the size of the vocabulary is on the order of 10,000 words or more (Levinson, 1988), it generally achieves lower performance than an integrated system in a restricted domain task (Levinson, 1989). The degradation in performance is mainly due to the way errors propagate through the system. It is widely agreed that it is dangerous to make a long series of hard decisions. The system cannot recover from an error at any point along the chain. One would want to avoid this chain- architecture and look for an architecture which would enable modules to compensate for each other. 
Integrated approaches have this compensation capability, but at the cost of multiplying the size of the grammar in such a way that the computation becomes prohibitive for the recognizer. A solution to the problem is to factorize the constraints so that the size of the 299 grammar, used for maximum likelihood decoding, is kept within reasonable bounds without a loss in the performance. In this paper we propose an approach in which speech recognition is still performed in an integrated fashion using a covering grammar with a smaller FSN representation. The decoded string of words is used as input to a second module in which the complete set of task constraints is imposed to correct possible errors introduced by the speech recognition module. 2. Syntax Driven Continuous Speech Recognition The general trend in large vocabulary continuous speech recognition research is that of building integrated systems (Huang, 1990; Murveit, 1990; Paul, 1990; Austin, 1990) in which all the relevant knowledge sources, namely acoustic, phonetic, lexical, syntactic, and semantic, are integrated into a unique representation. The speech signal, for the purpose of speech recognition, is represented by a sequence of acoustic patterns each consisting of a set of measurements taken on a small portion of signal (generally on the order of 10 reset). The speech recognition process is carried out by searching for the best path that interprets the sequence of acoustic patterns, within a network that represents, in its more detailed structure, all the possible sequences of acoustic configurations. The network, generally called a decoding network, is built in a hierarehical way. In current speech recognition systems, the syntactic structure of the sentence is represented generally by a regular grammar that is typically implemented as a finite state network (syntactic FSN). The ares of the syntactic FSN represent vocabulary items, that are again represented by FSN's (lexical FSN), whose arcs are phonetic units. Finally every phonetic unit is again represented by an FSN (phonetic FSN). The nodes of the phonetic FSN, often referred to as acoustic states, incorporate particular acoustic models developed within a statistical framework known as hidden Markov model (HMM). 1 The 1. The reader is referred to Rabiner (1989) for a tutorial introduction of HMM. model pertaining to an acoustic state allows computation of a likelihood score, which represents the goodness of acoustic match for the observation of a given acoustic patterns. The decoding network is obtained by representing the overall syntactic FSN in terms of acoustic states. Therefore the recognition problem can be stated as follows. Given a sequence of acoustic patterns, corresponding to an uttered sentence, find the sequence of acoustic states in the decoding network that gives the highest likelihood score when aligned with the input sequence of acoustic patterns. This problem can be solved efficiently and effectively using a dynamic programming search procedure. The resulting optimal path through the network gives the optimal sequence of acoustic states, which represents a sequence of phonetic units, and eventually the recognized string of words. Details about the speech recognition system we refer to in the paper can be found in Lee (1990/1). The complexity of such an algorithm consists of two factors. The first is the complexity arising from the computation of the likelihood scores for all the possible pairs of acoustic state and acoustic pattern. 
Given an utterance of fixed length the complexity is linear with the number of distinct acoustic states. Since a finite set of phonetic units is used to represent all the words of a language, the number of possible different acoustic states is limited by the number of distinct phonetic units. Therefore the complexity of the local likelihood computation factor does not depend either on the size of the vocabulary or on the complexity of the language. The second factor is the combinatorics or bookkeeping that is necessary for carrying out the dynamic programming optimization. Although the complexity of this factor strongly depends on the implementation of the search algorithm, it is generally true that the number of operations grows linearly with the number of arcs in the decoding network. As the overall number of arcs in the decoding network is a linear function of the number of ares in the syntactic network, the complexity of the bookkeeping factor grows linearly with the number of ares in the FSN representation of the grammar. 300 The syntactic FSN that represents a certain task language may be very large if both the size of the vocabulary and the munber of syntactic constraints are large. Performing speech recognition with a very large syntactic FSN results in serious computational and memory problems. For example, in the DARPA resource management task (RMT) (Price, 1988) the vocabulary consists of 991 words and there are 990 different basic sentence structures (sentence generation templates, as explained later). The original structure of the language (RMT grammar), which is given as a non-deterministic finite state semantic grammar (Hendrix, 1978), contains 100,851 rules, 61,928 states and 247,269 arcs. A two step automatic optimization procedure (Brown, 1990) was used to compile (and minimize) the nondeterministic FSN into a deterministic FSN, resulting in a machine with 3,355 null arcs, 29,757 non-null arcs, and 5832 states. Even with compilation, the grammar is still too large for the speech recognizer to handle very easily. It could take up to an hour of cpu time for the recognizer to process a single 5 second sentence, running on a 300 Mflop Alliant supercomputer (more that 700 times slower than real time). However, if we use a simpler covering grammar, then recognition time is no longer prohibitive (about 20 times real time). Admittedly, performance does degrade somewhat, but it is still satisfactory (Lee, 1990/2) (e.g. a 5% word error rate). A simpler grammar, however, represents a superset of the domain language, and results in the recognition of word sequences that are outside the defined language. An example of a covering grammars for the RMT task is the so called word-pair (WP) grammar where, for each vocabulary word a list is given of all the words that may follow that word in a sentence. Another covering grammar is the so called null grammar (NG), in which a word can follow any other word. The average word branching factor is about 60 in the WP grammar. The constraints imposed by the WP grammar may be easily imposed in the decoding phase in a rather inexpensive procedural way, keeping the size of the FSN very small (10 nodes and 1016 arcs in our implementation (Lee, 1990/1) and allowing the recognizer to operate in a reasonable time (an average of 1 minute of CPU time per sentence) (Pieraccini, 1990). The sequence of words obtained with the speech recognition procedure using the WP or NG grammar is then used as input to a second stage that we call the semantic decoder. 3. 
Semantic Decoding The RMT grammar is represented, according to a context free formalism, by a set of 990 sentence generation templates of the form: Sj = ~ ai2 ...a~, (1) where a generic ~ may be either a terminal symbol, hence a word belonging to the 991 word vocabulary and identified by its orthographic transcription, or a non-terminal symbol (represented by sharp parentheses in the rest of the paper). Two examples of sentence generation templates and the corresponding production of non-terminal symbols are given in Table 1 in which the symbol e corresponds to the empty string. A characteristic of the the RMT grammar is that there are no reeursive productions of the kind: (,4) = al a2 -'. (A) ... a/v (2) For the purpose of semantic decoding, each sentence template may then be represented as a FSN where the arcs correspond either to vocabulary words or to categories of vocabulary words. A category is assigned to a vocabulary word whenever that vocabulary word is a unique element in the tight hand side of a production. The category is then identified with the symbol used to represent the non-terminal on the left hand side of the production. For instance, following the example of Table 1, the words SHIPS, FRIGATES, CRUISERS, CARRIERS, SUBMARINES, SUBS, and VESSELS belong to the category <SH/PS>, while the word LIST belongs to the category <LIST>. A special word, the null word, is included in the vocabulary and it is represented by the symbol e. Some of the non-terminal symbols in a given sentence generation template are essential for the representation of the meaning of the sentence, while others just represent equivalent syntactic variations with the same meaning. For instance, 301 GIVE A LIST OF <OPTALL> <OPTTHE> <SHIPS> <LIST> <OPTTHE> <THREATS> <OPTALL> AlJ. <OPTTHE> THE <SHIPS> <LIST> SHIPS FRIGATES CRUISERS CARRIERS SUBMARINES SUBS VESSELS SHOW <OPTME> GIVE <OFrME> LIST GET <Oil]dE> FIND <OPTME> GIVE ME A LIST OF GET <OPTME> A LIST OF <THREATS> AI .gRTS THREATS <OPTME> ME E TABLE 1. Examples of sentence generation templates and semantic categories the correct detection by the recognizer of the words uttered in place of the non-terminals <SHIPS> and <THREATS>, in the former examples, is essential for the execution of the correct action, while an error introduced at the level of the nonterminals <OPTALL>, <OP'ITHE> and <LIST> does not change the meaning of the sentence, provided that the sentence generation template associated to the uttered sentence has been correctly identified. Therefore there are non-terminals associated with essential information for the execution of the action expressed by the sentence that we call semantic variables. An analysis of the 990 sentence generation templates allowed to define a set of 69 semantic variables. The function of the semantic decoder is that of finding the sentence generation template that most likely produced the uttered sentence and give the correct values to its semantic variables. The sequence of words given by the recognizer, that is the input of the semantic decoder, may have errors like word substitutions, insertions or deletions. Hence the semantic decoder should be provided with an error correction mechanism. With this assumptions, the problem of semantic decoding may be solved by introducing a distance criterion between a string of words and a sentence template that reflects the nature of the possible word errors. 
We defined the distance between a string of words and a sentence generation templates as the minimum Levenshtein 2 distance between the string of words and all the string of words that can be generated by the sentence generation template. The Levenshtein distance can be easily computed using a dynamic programming procedure. Once the best matching template has been found, a traceback procedure is executed to recover the modified sequence of words. 3.1 Semantic Filter After the alignment procedure described above, a semantic check may be performed on the words that correspond to the non-terminals 2. The Levenshtein distance (Levenshtein, 1966) between two strings is defined as the minimum number of editing operations (substitutions, deletions, and insertions) for transforming one string into the other. 302 associated with semantic variables in the selected template. If the results of the check is positive, namely the words assigned to the semantic variables belong to the possible values that those variables may have, we assume that the sentence has been correctly decoded, and the process stops. In the case of a negative response we can perform an additional acoustic or phonetic verification, using the available constraints, in order to find which production, among those related to the considered non- terminal, is the one that more likely produced the acoustic pattern. There are different ways of carrying out the verification. In the current implementation we performed a phonetic verification rather than an acoustic one. The recognized sentence (i.e. the sequence of words produced by the recognizer) is transcribed in terms of phonetic units according to the pronunciation dictionary used in speech decoding. The template selected during semantic decoding is also transformed into an FSN in terms of phonetic units. The transformation is obtained by expanding all the non-terminals into the corresponding vocabulary words and each word in terms of phonetic units. Finally a matching between the string of phones describing the recognized sentence and the phone-transcribed sentence template is performed to find the most probable sequence of words among those represented by the template itself (phonetic verification). Again, the matching is performed in order to minimize the Levenshtein distance. An example of this verification procedure is shown in Table 2. The first line in the example of Table 2 shows the sentence that was actually uttered by the speaker. The second line shows the recognized sentence. The recognizer deleted the word WERE, substituted the word THERE for the word THE and the word EIGHT for the word DATE. The semantic decoder found that, among the 990 sentence generation templates, the one shown in the third line of Table 2 is the one that minimizes the criterion discussed in the previous section. There are three semantic variables in this template, namely <NUMBER>, <SHIPS> and <YEAR>. The backtracking procedure associated to them the words DATE, SUBMARINES, and EIGHTY TWO respectively. The semantic check gives a false response for the variable <NUMBER>. In fact there are no productions of the kind <NUMBER> := DATE. Hence the recognized string is translated into its phonetic representation. This representation is aligned with the phonetic representation of the template and gives the string shown in the last line of the table as the best interpretation. 
3.2 Acoustic Verification A more sophisticated system was also experimented allowing for acoustic verification after semantic postprocessing. For some uttered sentences it may happen that more than one template shows the very same minimum Levenshtein distance from the recognized sentence. This is due to the simple metric that is used in computing the distance between a recognized string and a sentence template. For example, if the uttered sentence is: WHEN WILL THE PERSONNEL CASUALTY REPORT FROM THE YORKTOWN BE RESOLVED uuered WERE THERE MORE THAN EIGHT SUBMARINES EMPLOYED IN EIGHTY TWO recognized THE MORE THAN DATE SUBMARINES EMPLOYED END EIGHTY TWO .template !WERE THERE MORE THAN <NUMBER> <SHIPS> EMPLOYED IN <YEAR> semantic variable value check <NUMBER> DATE FALSE <SHIPS> SUBMARINES TRUE <YEAR> EIGHTY TWO TRuE phonetic dh aet m ao r t ay I ae n d d ey t s ah b max r iy n z ix m p i oy d eh n d ey dx iy twehniy corrected WERE THERE MORE THAN EIGHT SUBMARINES EMPLOYED IN EIGHTY TWO TABLE 2. An example of semantic postprocessing 303 and the recognized sentence is: WILL THE PERSONNEL CASUALTY REPORT THE YORKTOWN BE RESOLVED there are two sentence templates that show a minimum Levenshtein distance of 2 (i.e. two words are deleted in both cases) from the recognized sentence, namely: 1) <WHEN+LL> <OPTTHE> <C-AREA> <CASREP> FOR <OFITHE> <SHIPNAME> BE RESOLVED 2) <WHEN+LL> <OPTTHE> <C-AREA> <CASREP> FROM <OPTTHE> <SHIPNAME> BE RESOLVED. In this case both the templates are used as input to the acoustic verification system. The final answer is the one that gives the highest acoustic score. For computing the acoustic score, the selected templates are represented as a FSN in terms of the same word HMMs that were used in the speech recognizer. This FSN is used for constraining the search space of a speech recognizer that runs on the original acoustic representation of the uttered sentence. 4. Experimental Results The semantic postproeessor was tested using the speech recognizer arranged in different accuracy conditions. Results are summarized in Figures 1 and 2. Different word accuracies were simulated by using various phonetic unit models and the two covering grammars (i.e. NG and WP). The experiments were performed on a set of 300 test sentences known as the February 89 test set (Pallett. 1989) The word accuracy, defined as 1- insertions deletions'e substitutions xl00 (3) number of words uttered was computed using a standard program that provides an alignment of the recognized sentence with a reference string of words. Fig. 1 shows the word accuracy after the semantic postprocessing versus the original word accuracy of the recognizer using the word pair grammar. With the worst recognizer, that gives a word accuracy of 61.3%, the effect of the semantic postprocessing is to increase the word accuracy to 70.4%. The best recognizer gives a word accuracy of 94.9% and, after the postprocessing, the corrected strings show a word accuracy of 97.7%, corresponding to a 55% reduction in the word error rate. Fig. 2 reports the semantic accuracy versus the original sentence accuracy of the various recognizers. Sentence accuracy is computed as the percent of correct sentences, namely the percent of sentences for which the recognized sequence of words corresponds the uttered sequence. Semantic accuracy is the percent of sentences for which both the sentence generation template and the values of the semantic variables are correctly decoded, after the semantic postprocessing. 
With the best recognizer the sentence accuracy is 70.7% while the semantic accuracy is 94.7%. 100 90- 80- 70- O j "" J OO ¢1~ S S 0 S S S S At S S S 0 sS S S S ~ S S s S 50 s I I I I 50 60 70 80 9O 100 Original Word Accueraey Figure 1. Word accuracy after semantic postprocess- ing 100 80-- 60-- 40-- 20-- • I m • i I • S S~ S S S S S S S S S S S S S J S S S S S S I I I I 20 40 60 80 100 Original Sentence Accuracy Figure 2. Semantic accuracy after semantic postpro- cessing When using acoustic verification instead of simple phonetic verification, as described in 304 section 3.2, better word and sentence accuracy can be obtained with the same test data. Using a NG covering grammar, the final word accuracy is 97.7% and the sentence accuracy is 91.0% (instead of 92.3% and 67.0%, obtained using phonetic verification). With a WP covering grammar the word accuracy is 98.6% and the sentence accuracy is 92% (instead of 97.7% and 86.3% with phonetic verification). The small difference in the accuracy between the NG and the WP case shows the rebusmess introduced into the system by the semantic postprocessing, especially when acoustic verification is peformed. 5. Summary For most speech recognition and understanding tasks, the syntactic and semantic knowledge for the task is often represented in an integrated manner with a finite state network. However for more ambitious tasks, the FSN representation can become so large that performing speech recognition using such an FSN becomes computationally prohibitive. One way to circumvent this difficulty is to factor the language constraints such that speech decoding is accomplished using a covering grammar with a smaller FSN representation and language decoding is accomplished by imposing the complete set of task constraints in a post- processing mode using multiple word and string hypotheses generated from the speech decoder as input. When testing on the DARPA resource management task using the word-pair grammar, we found (Lee, 1990/2) that most of the word errors involve short function words (60% of the errors, e.g. a, the, in) and confusions among morphological variants of the same lexeme (20% of the errors, e.g. six vs. sixth). These errors are not easily resolved on the acoustic level, however they can easily be corrected with a simple set of syntactic and semantic rules operating in a post-processing mode. The language constraint factoring scheme has been shown efficient and effective. For the DARPA RMT, we found that the proposed semantic post-processor improves both the word accuracy and the semantic accuracy significantly. However in the current implementation, no acoustic information is used in disambiguating words; only the pronunciations of words are used to verify the values of the semantic variables in cases when there is semantic ambiguity in finding the best matching string. The performance can further be improved if the acoustic matching information used in the recognition process is incorporated into the language decoding process. 6. Acknowledgements The authors gratefully acknowledge the helpful advice and consultation provided by K.- Y. Su and K. Church. The authors are also thankful to J.L. Gauvain for the implementation of the acoustic verification module. REFERENCES I. S. Austin, C. Barry, Y.-L., Chow, A. Derr, O. Kimball, F. Kubala, J. Makhoul, P. Placeway, W. Russell, R. Schwartz, G. Yu, "Improved HMM Models fort High Performance Speech Recognition," Proc. DARPA Speech and Natural Language Workshop, Somerset, PA, June 1990. 2. J. K. 
Baker, "The DRAGON System - An Overview," IEEE Trans. Acoust. Speech, and Signal Process., vol. ASSP-23, pp 24-29, Feb. 1975. 3. M. K. Brown, J. G. Wilpon, "Automatic Generation of Lexical and Grammatical Constraints for Speech Recognition," Proc. 1990 IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, Albuquerque, New Mexico, pp. 733-736, April 1990. 4. G. Hendrix, E. Sacerdoti, D. Sagalowicz, J. Slocum, "Developing a Natural Lanaguge Interface to Complex Data," ACM Translations on Database Systems 3:2 pp. 105-147, 1978. 5. X. Huang, F. Alleva, S. Hayamizu, H. W. Hon, M. Y. Hwang, K. F. Lee, "Improved Hidden Markov Modeling for Speaker-Independent Continuous Speech Recognition," Proc. DARPA Speech and Natural Language Workshop, Somerset, PA, June 1990. 6. C.-H. Lee, L. R. Rabiner, R. Pieraccini and J. G. Wilpon, "Acoustic Modeling for Large Speech Recognition," Computer, Speech and Language, 4, pp. 127-165, 1990. 305 7. C.-H. Lee, E. P. Giachin, L. R. Rabiner, R. Pieraccini and A. E. Rosenberg, "Improved Acoustic Modeling for Continuous Speech Recognition," Prec. DARPA Speech and Natural Language Workshop, Somerset, PA, June 1990. 8. V.I. Levenshtein, "Binary Codes Capable of Correcting Deletions, Insertions, and Reversals," Soy. Phys.-Dokl., vol. 10, pp. 707-710, 1966. 9. S. E. Leviuson, K. L. Shipley, "A Conversational Mode Airline Reservation System Using Speech Input and Output," BSTJ 59 pp. 119-137, 1980. 10. S.E. Levinson, A. Ljolje, L. G. Miller, "Large Vocabulary Speech Recognition Using a Hidden Markov Model for Acoustic/Phonetic Classification," Prec. 1988 IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, New York, NY, pp. 505-508, April 1988. 11. S.E. Levinson, M. Y. Liberman, A. Ljolje, L. G. Miller, "Speaker Independent Phonetic Transcription of Fluent Speech for Large Vocabulary Speech Recognition," Prec. of February 1989 DARPA Speech and Natural Language Workshop pp. 75-80, Philadelphia, PA, February 21-23, 1989. 12. B. T. Lowerre, D. R. Reddy, "'The HARPY Speech Understanding System," Ch. 15 in Trends in Speech Recognition W. A. Lea, Ed. Prentice-Hall, pp. 340-360, 1980. 13. H. Murveit, M. Weintraub, M. Cohen, "Training Set Issues in SRI's DECIPHER Speech Recognition System," Prec. DARPA Speech and Natural Language Workshop, Somerset, PA, June 1990. 14. D. S. Pallett, "Speech Results on Resource Management Task," Prec. of February 1989 DARPA Speech and Natural Language Workshop pp. 18-24, Philadelphia, PA, February 21-23, 1989. 15. R. Pieraccini, C.-H. Lee, E. Giachin, L. R. Rabiner, "Implementation Aspects of Large Vocabulary Recognition Based on Intraword and Interword Phonetic Units," Prec. Third Joint DARPA Speech and Natural Language Workshop, Somerset, PA, June 1990. 16. D.B., Paul "The Lincoln Tied-Mixture HMM Continuous Speech Recognizer," Prec. DARPA Speech and Natural Language Workshop, Somerset, PA, June 1990. 17. P.J. Price, W. Fisher, J. Bemstein, D. Pallett, "The DARPA 1000-Word Resource Management Database for Continuous Speech Recognition," Prec. 1988 IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, New York, NY, pp. 651-654, April 1988. 18. L.R. Rabiner, "A Tutorial on Hidden Markov Models, and Selected Applications in Speech Recognition," Prec. IEEE, Vol. 77, No. 2, pp. 257-286, Feb. 1989. 19. D. R. Reddy, et al., "Speech Understanding Systems: Final Report," Computer Science Department, Carnegie Mellon University, 1977. 20. W. 
Woods, et al., "Speech Understanding Systems: Final Technical Progress Report," Bolt Beranek and Newman, Inc. Report No. 3438, Cambridge, MA., 1976. 306
Toward a Plan-Based Understanding Model for Mixed-Initiative Dialogues This paper presents an enhanced model of plan-based dialogue understanding. Most plan-based dialogue understanding models derived from [Litman and Allen, 1987] as- sume that the dialogue speakers have access to the same domain plan library, and that the active domain plans are shared by the two speakers. We call these features shared do- main plan constraints. These assumptions, however, are too strict to account for mixed- initiative dialogues where each speaker has a different set of domain plans that are housed in his or her own plan library, and where an individual speaker's domain plans may be activated at any point in the dialogue. We propose an extension to the Litman and Allen model by relaxing the shared domain plan constraints. Our extension improves (1) the ability to track the currently active plan, (2) the ability to explain the planning be- hind speaker utterances, and (3) the ability to track which speaker controls the conver- sational initiative in the dialogue. 1. Introduction In this paper, we present an enhanced plan-based model of dialogue understanding that provides a framework for computer processing of mixed-initiative dialogues. In mixed-initiative dialogues, each speaker brings to the conversation his or her own plans and goals based on his or her own domain knowledge, and which do not necessarily match those of the other speaker, even in cooperative situations. Thus, mixed-initiative dia- logues exhibit a more complicated discourse structure than do dialogues in which a single speaker controls the conversational initiative. Hiroaki Kitano* and Carol Van Ess-Dykema t Center for Machine Translation Carnegie Mellon University Pittsburgh, PA 15213 [email protected] [email protected] ABSTRACT The existing plan-based model of dialogue under- standing (as represented by [Litman and Allen, 1987]) accounts for dialogues in which a single speaker con- trois the initiative. We call these dialogues Single- Initiative Dialogues. In modeling single-initiative di- alogues, Litman and Allen assume a shared stack that represents ajointplan (joint domain plan). This joint plan is shared by the two speakers. We claim that this assumption is too restrictive to apply to mixed- initiative dialogues, because in mixed-initiative dia- logues each speaker may have his or her own indi- vidual domain plans I. The assumption creates several functional problems in the Litman and Allen model, namely, its inability to process mixed-initiative dia- logues and the need for a large amount of schema def- inition (domain knowledge representation) to handle complex conversational interactions. The model we present builds on the framework of [Litman and Allen, 1987]. We hypothesize, how- ever, that speaker-specific plan libraries are needed, instead of a single plan library storing joint plans, for a plan-based theory of discourse to account for mixed- initiativedialogues. In our framework, the understand- ing system activates the instantiated schemata (places them on the stack) from each speaker's individual plan library 2, thus creating two domain plan stacks. We also theorize that in addition to using the domain plans that are stored in a speaker's memory (plan library), speakers incrementally expand their domain plans in response to the current context of the dialogue. These extensions enable our model to." *This author is supported, in part, by NEC Corporation, Japan. 
tThis author's research was made possible by a post- doctoral fellowship awarded her by the U.S. Department of Defense. The views and conclusions contained in this doc- ument are those of the authors and should not be interpreted as necessarily representing the official policies, either ex- pressed or implied, of the U.S. Department of Defense or of the United States government. • Provide a mechanism for tracking the currently active plan in mixed-initiative dialogues, • Explain the planning behind speaker utterances, • Provide a mechanism for tracking which speaker controls the conversational initiative, and for tracking the nesting of initiatives within a dia- logue segment. • Reduce the amount of schema definition required to process mixed-initiative dialogues. Throughout this paper, we use two dialogue extrac- lIn this regard, we agree with [Grosz and Sidner, 1990]'s criticism of the master-slave model of plan recognition. 2Using the [Pollack, 1990] distinction, plans are mental objects when they are on the stack, and recipes-for-action when they are in the plan library. 25 tions from our data: 1) an extraction from a Japanese dialogue in the conference registration domain, and 2) an extraction from a Spanish dialogue in the travel agency domain. 3 SpA and SpB refer to Speaker A and Speaker B, respectively. Dialogue I (Conference Registration, translated from Japanese): SpA: SpA: SpB: SpB: SpA: SpB: I would like to attend the conference. (1) What am I supposed to do? (2) First, you must register for the conference. (3) Do you have a registration form? (4) No, not yet. (5) Then we will send you one. (6) Dialogue II (Travel Agency, translated from Span- ish): Prior to the following dialogue exchanges, the traveler (SpB) asks the travel agent (SPA) for a recommenda- tion on how it is best to travel to Barcelona. They agree that travel by bus is best. SpA: SpA: SpB: SpA: SpA: SpB: You would leave at night. (1) You would take a nap in the bus on your way to Barcelona. (2) Couldn't we leave in the morning ... instead of at night? (3) Well, it would be a little difficult. (4) You would be traveling during the day which would be difficult because it's very hot. (5) Really? (6) 2. Limitations of the Current Plan-Based Dialogue Understanding Model The current plan-based model of dialogue understand- ing [Litman and Allen, 1987] assumes a single plan library that contains the domain plans of the two speak- ers, and a shared plan stack mechanism to track the current plan structure of the dialogue. The shared stack contains the domain plans and the discourse plans from the plan library that are activated by the inference module of the dialogue understanding system. The do- main plan is a joint plan shared by the two dialogue speakers. Although this shared stack mechanism ac- counts for highly task-oriented and cooperative dia- logues where one can assume that both speakers share 3Dialogue 1 is extracted from a corpus of Japanese ATR (Advanced Telecommunication Research) recorded simu- lated conference registration telephone conversations. No visual information was exchanged between the telephone speakers. Dialogue 2 is extracted from a corpus of recorded Spanish dialogues in the travel agency domain, collected by the second author of this paper. These dialogues are simu- lated telephone conversations, where no visual information was exchanged. the same domain plan, the model does not account for mixed-initiative dialogues. 
In this section we examine three limitations of the current plan-based dialogue understanding model: 1) the inability to track the currently active plan, 2) the inability to explain a speaker's planning behind his or her utterances, and 3) the inability to track conversa- tional initiative control transfer. A dialogue under- standing system must be able to infer the dialogue par- ticipants' goals in order to arrive at an understanding of the speakers' actions. The inability to explain the planning behind speaker utterances is a serious flaw in the design of a plan-based dialogue processing model. Tracking the conversational control initiative provides the system with a mechanism to identify which of a speaker's plans is currently activated, and which goal is presently being persued. We believe that an under- standing model for mixed-initiative dialogues must be able to account for these phenomena. 2.1. Tracking the Currently Active Plan The Litman and Allen model lacks a mechanism to track which plan is the currently active plan in mixed- initiative dialogue where the two speakers have very different domain plan schemata in their individual plan libraries. The currently active plan is the plan or action that the dialogue processing system is currently consid- ering. In Dialogue I, after utterance (2), What am I sup- posed to do?, by SpA, the stack should look like Figure 14. Although the manner in which the conference reg- istration domain plans may be expanded on the stack depends upon which domain plan schemata are avail- able in a speaker's domain plan library, we assume that a rational agent would have a schema containing the plan to attend a conference, Attend-Conference. This plan is considered the currently active plan and thus marked [Next]. When processing the subsequent utterance, (3), First, you must register for the confer- ence., the currently active plan should be understood as registration, RegS.zt:er, since SpB clearly states that the action 5 of registration is necessary to carry out the plan to attend the conference. The Litman and Allen model lacks a mechanism for instantiating a new plan within the domain unless the currently ac- 4Notational conventions in this paper follow [Litman and Allen, 1987]. In their model, the currently active plan is labeled [Next]. ID-PARAH in P lan2 refers to IDENTIFY- PARAMETER. I1 in Plan2 and AC in Plan3 are ab- breviated tags for INFORMREF (Inform with Reference to) andAttend-Conference, respectively. Proc in Plan2 stands for procedure. SThe words plan and action can be used interchangably. A sequence of actions as specified in the decomposition of a plan carry out a plan. Each action can also be a plan which has its own decomposition. Actions are not decomposed when they are primitive operators [Litman and Alien, 1987]. 26 Planl [Completed] INTRODUCE-PLAN(SpA, SpB, II,Plan2) REQUEST(SpI, SpB, II) SURFACE-REQUES~(SpA, SpB, II) Plan2 ID-PARAM(SpB, SpA, proc,AC,Plan3) If: INFORMREF(~pB,SpA,proc) Plan3 AC: Attend-Conference Reg st/er ... [Next] GetForm Fill Send Figure 1: State of the Stack after Utterance (2) in Dialogue I tive plan (or an action of the domain plan) marked by [Next], is executed. Thus, in this example, only if the plan Attend-Conference marked as [Next], is executed, can the system process the prerequisite plan, Register. 
Looking at this constraint from the point of view of an event timeline, the Litman and Allen model can process only temporally sequential actions, i.e., the Attend-Conference event must be completed before the Register event can begin. This problem can be clearly illustrated when we look at the state of the stack after utterance (4), Do you have a registration form?, shown in Figure 2. Utterance (4) stems from the action GetForm (GF) which is a plan for the conference office secretary to send a reg- istration form to the participant. It is an action of the Register plan. Since the Attend-Conference plan has not been executed, the system has two ac- tive plans, Attend-Conference and GetForm, both marked [Next], in the stack where only GetForm should be labeled the active plan. 2.2. Explaining Speaker Planning Behind Utterances A second limitation of the Litman and Allen model is that it cannot explain the planning behind speaker utterances in certain situations. The system cannot process utterances stemming from speaker-specific do- main plans that are enacted because they are an active response to the previous speaker's utterance. This is because the model assumes ajointplan to account for utterances spoken in the dialogue. But utterances that stem from an active response stem from neither shared domain plans currently on the stack nor from a plan Plan-4 [Completed] INTRODUCE-PLAN(SpB,SpA, I2,Plan5) I REQUEST(SpB, SpA, I2) SURFACE-RE~UEST(SpB,SpA, I2) Plan-5 ID-PARAM(SpA, SpB,have(form),GF,Plan3) I I2: INFORMIF(SpA, SpB,have(form)) Plan2 [Completed] ID-PARAM(SpB, SpA, proc,AC,Plan3) II: INFORNREF(~pB, SpA, proc) Plan3 AC : Attend-Conference Reg st/er ... [Next] GF : GetForm Fill Send [ Next ] Figure 2: State of the Stack after Utterance (4) in Dialogue I which concurrently exists in the plan libraries of the two speakers. In Figure 1, the Attend-Conference domain plan from Dialogue I is expanded with the Regis t e r plan after the first utterance because utterance (4), Do you have a registration form?, and the subsequent con- versation cannot be understood without having domain plans entailing the Regi s t e r plan in the stack. If this were a joint domain plan, SpA's utterance What am I supposed to do?, could not be explained. It can be inferred that SpA does not have a domain plan for at- tending a conference, or at least that the system did not activate it in the stack. The fact that SpA asks SpB What am I supposed to do? gives evidence that SpA and SpB do not share the Register domain plan at that point in the dialogue. Another example of speaker planning that the Lit- man and Allen model cannot explain, occurs in Dia- logue II. After a series of interactions between SpA and SpB, SpB says in utterance (3), Couldn't we leave in the morning ... instead of at night?, as an active response to SpA. In order to explain the speaker plan- ning behind these utterances, the current model would include the schemata shown in Figure 36 . Utterance (3), however, does not stem from speaker action. One way to correct this situation within the current model would be to allow for the ad hoc addition of the schema, 6This is a simplified list of schemata, excluding prereq- uisite conditions and effects. Like the Litman and Allen model, our schema definition follows that of NOAH [Sacer- doti, 1977] and STRIPS [Fikes and Nilsson, 1971]. 27 State-Preference. 
The consequence, however, of this approach is that too large a number of schemata are required, and stored in the plan library, This large number of schemata will explode exponentially as the size of the domain increases. 2.3. Tracking Conversational Initiative Control A third problem in the Litman and Allen model is that it cannot track which speaker controls the conversational initiative at a specific point in the dialogue, nor how initiatives are nested within a dialogue segment, e.g., within a clarification subdialogue. This is self-evident since the model accounts only for single-initiative di- alogues. Since the model calls for a joint plan, it does not track which of the two speakers maintains or initi- ates the transfer of the conversational initiative within the dialogue. Thus, that the conversational initiative is transferred from SpA to SpB at utterance (3) in Dia- logue II, Couldn't we leave in the morning ... instead of at night?, or that SpA maintains the initiative during SpB's request for clarification about the weather, utter- ance (6), Really?, cannot be explained by the Litman and Allen model. 3. An Enhanced Model In order to overcome these limitations, we propose an enhanced plan-based model of dialogue understand- ing, building on the framework described in [Litman and Allen, 1987]. Our model inherits the basic flow of processing in [Litman and Allen, 1987], such as a constraint-based search to activate the domain plan schemata in the plan library, and the stack operation. However, we incorporate two modifications that enable our model to account for mixed-initiative dialogues, which the current model cannot. These modifications include: • Speaker-Specific Domain Plan Libraries and the Individual Placement of Speaker-Specific Plans on the Stack. • Incremental Domain Plan Expansion. First, our model assumes a domain plan library for each speaker and the individual placement of the speaker-specific domain plans on the stack. Figure 4 shows how the stack is organized in our model. The domain plan, previously considered a joint plan, is separated into two domain plans, each representing a domain plan of a specific speaker. Each speaker can only be represented on the stack by his or her own domain plans. Progression from one domain plan to another can only be accomplished through the system's recognition of speaker utterances in the dialogue. Discourse Plan Domain Plans Domain Plans Speaker A Speaker B Figure 4: New Stack Structure Second, our model includes an incremental expan- sion of domain plans. Dialogue speakers use domain plans stored in their individual plan library in response to the content of the previous speaker's utterance. The domain plans can be further expanded when they ac- Ovate additional domain plans in the plan library of the current speaker. For example, if a domain plan is marked [Next] (currently active), the system de- composes the plan into its component plan sequence. Then the first element in the component plan sequence (which is an action) is marked [Next] and the previous plan is no longer marked. Figure 5 illustrates how the domain plans in Dialogue I can be incrementally expanded. In Figure 5(a), Attend-Conference is the only plan activated, and it is marked [Next]. As the plan is expanded, [Next] is moved to the first action of the decomposition sequence (Figure 5(b)). This expansion is attributed to information provided by the previous speaker, for example, First, you must register for the conference. 
(If such an utterance is not made, no expansion takes place.) Then, if the subsequent speaker has a plan for the registration pro- cedure, the domain plan for Register is expanded under Register. Again, [Next] is moved to the first element of the component plan sequence, GetForm (Figure 5(c)). We are implementing this model using the Span- ish travel agency domain corpus and the Japanese ATR conference registration corpus. The implemen- tation is in CMU CommonLisp, and uses the CMU FrameKit frame-based knowledge representation sys- tem. The module accepts output from the Generalized LR Parsers developed at Carnegie Mellon University [Tomita, 1985]. 4. Examples 4.1. Tracking the Currently Active Plan In our model, we provide a mechanism for consis- tently tracking the individual speaker's currently ac- tive plans. First, we show how the model keeps track of a speaker's plans within mixed-initiative dialogue. The state of the stack after utterance (2), What am I supposed to do?, in Dialogue I, should look like Fig- ure 6. Plan 3 represents a domain plan of SpA, 28 ((HEADER: Set-Itinerary) (Decomposition: Set-Destination Decide-Transportation ...) ((HEADER: Decide-Transportation) (Decomposition: Tell-Depart-Times Tell-Outcomes Establish-Agreement)) Figure 3: Domain Plan Schemata for Dialogue II (Partial Listing) Attend-Conference [Next] (a) Attend-Conference Registe/r [Next] (b) Attend-Conference Regite/r ,,4",, GetForm Fill Send [Next] (c) Figure 5: Incremental Domain Plan Expansion for Dialogue I and Plan 4 represents a domain plan of SpB. Since SpA does not know what he or she is supposed to do to attend the conference, the only plan in the stack is Attend-Conference. SpB knOWS the regis- tration procedure details, so his or her domain plan is expanded to include Register, and then its de- composition into the GetForm Fill Send action sequence. The first element of the decomposition is further expanded, and an action sequence notHave GetAdrs Send is created under GetForn~ The action sequence notHave GetAdrs Send is a se- quence where the secretary's plan is to ask whether SpA already has a registration form (notHave), and if not, to ask his or her name and address (GetAdrs), and to send him or her a form (Send). Figure 7 shows the state of the stack in Dialogue I after SpB's question, utterance (4), Do you have a registration form?. From the information given in his or her previous utterance, (3), First, you must register for the conference., SpA's domain plan (Plan3) was expanded downward. Thus, Plan3 has a Register plan, and it is marked [Next]. For SpB, notHave is marked [Next], indicating that it is his or her plan currently under consideration. Although SpB's cur- rently active plan is notHave, SpA considers the Register plan to be the current plan because SpA does not have the schema that includes the decompo- sition of the Register plan. 4.2. Explaining Speaker Planning Behind Utterances Second, our model explains a speaker's active plan- ning behind an utterance. In the Litman and Allen model, SpA's utterance (2) in Dialogue I, What am I supposed to do ?, cannot be explained if the domain plan Attend-Conference is shared by the two speak- ers. In such a jointplan both speakers would know that a conference participant needs to register for a confer- ence. However, the rational agent will not ask What am I supposed to do? if he or she already knows the details of the registration procedure. 
But if such an expansion is not made on the stack, the system cannot process SpB's reply, First, you must register for the conference., because there would be no domain plan on the stack for Register. This dilemma cannot be solved with a joint plan. It can, however, be resolved by assuming individual domain plan libraries and an active domain plan for each speaker. As shown in Figure 6, when SpA asks What am I supposed to do?, the active domain plan is solely Attend-Conference, with no decomposition. SpB's domain plan, on the other hand, contains the full details of the conference registration procedure. This enables SpB to say First, you must register for the conference. It also enables SpB to ask Do you have a registration form?, because the action to ask whether SpA has a form or not (notHave) is already on the stack due to action decomposition.

Figure 6: State of the Stack after Utterance (2) in Dialogue I
Figure 7: State of the Stack after Utterance (4) in Dialogue I

Our model also explains speaker planning in Dialogue II. In this dialogue, the traveler (SpB)'s utterance (3), Couldn't we leave in the morning ... instead of at night?, can be explained by the plan specific to SpB, which is State-Depart-Preference. In our model, we assign plans to a specific speaker depending upon his or her role in the dialogue, e.g., traveler or travel agent. This eliminates the potential combinatorial explosion of the number of schemata required in the current model.

4.3. Tracking Conversational Initiative Control

Third, our model provides a consistent mechanism to track who controls the conversational initiative at any given utterance in the dialogue. This mechanism provides an explanation, within the plan-based model of dialogue understanding, for the initiative control rules proposed by [Walker and Whittaker, 1990]. Our data allow us to state the following rule:

• When Sp-X makes an utterance that instantiates a discourse plan based on his or her domain plan, then Sp-X controls the conversational initiative.

This rule also holds in the nesting of initiatives, such as in a clarification dialogue segment:

• When Sp-X makes an utterance that instantiates a discourse plan based on his or her domain plans and Sp-Y replies with an utterance that instantiates a discourse plan, then Sp-X maintains control of the conversational initiative.

In Dialogue II, illustrated in Figure 8, SpB's question, utterance (3), Couldn't we leave in the morning ... instead of at night?, instantiates discourse Plan 5. It stems from SpB's domain plan State-Depart-Preference. In this case, the first conversational initiative tracking rule applies, and the initiative is transferred to SpB. A minimal sketch of these two rules is given below.
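The two tracking rules can be written down as a small decision function. This is an illustrative reading of the rules, not the authors' code; summarizing an utterance by who spoke, whether its discourse plan is grounded in the speaker's own domain plan, and whether it merely replies to the other speaker is an assumed simplification.

# Hypothetical sketch of the two initiative-control tracking rules.

def track_initiative(holder, speaker, based_on_own_domain_plan, is_reply):
    """Return the holder of the conversational initiative after an utterance.

    Rule 1: an utterance instantiating a discourse plan based on the
            speaker's own domain plan gives that speaker the initiative.
    Rule 2: a reply (such as a clarification question) to the other
            speaker's utterance leaves the initiative unchanged.
    """
    if based_on_own_domain_plan and not is_reply:
        return speaker        # rule 1: the initiative is transferred
    return holder             # rule 2: the initiative is maintained

# Dialogue II, utterance (3): SpB's question stems from SpB's own domain
# plan State-Depart-Preference, so the initiative moves to SpB.
print(track_initiative("SpA", "SpB",
                       based_on_own_domain_plan=True, is_reply=False))  # SpB

# Utterance (6): SpB's "Really?" is only a clarification request, so the
# initiative remains with SpA.
print(track_initiative("SpA", "SpB",
                       based_on_own_domain_plan=False, is_reply=True))  # SpA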
In contrast, SpB's response of Really? to SpA's utterance (5), You would be traveling during the day which would be difficult because it's very hot., is a request for clarification. This time, the second rule cited above for nested initiatives applies, and the initiative remains with SpA.

5. Related Works

Carberry [Carberry, 1990] discusses plan disparity, in which the plan inferred by the user modeling program differs from the actual plan of the user. However, her work does not address mixed-initiative dialogue understanding, where either of the speakers can control the conversational initiative.

The ATR dialogue understanding system [Yamaoka and Iida, 1990] incorporates a plan hierarchy comprising three kinds of universal pragmatic and domain plans to process cooperative and goal-oriented dialogues. They simulated the processing of such dialogues using the following plans: 1) Interaction plans - plans characterized by dialogue turn-taking that describes a sequence of communicative acts. Turn-taking allows other embedded turn-takings. 2) Communication plans - plans that determine how to execute or achieve an utterance goal or dialogue goals. 3) Dialogue plans - plans for establishing a dialogue construction. 4) Domain plans. The ATR model attempts to capture complex conversational interaction by using a hierarchy of plans, whereas our model tries to capture the same phenomena by speaker-specific domain plans and discourse plans. Their interaction, communication, and dialogue plans operate at a level above our speaker-specific domain plans. Their plans serve as a type of meta-planning to their and our domain plans. An extension enabling their plan hierarchy to operate orthogonally to our model would be possible.

Our model is consistent with the initiative control rules presented in [Walker and Whittaker, 1990]. In their control rules scheme, however, the speaker controls the initiative when the dialogue utterance type (surface structure analysis) is an assertion (unless the utterance is a response to a question), a command, or a question (unless the utterance is a response to a question or command). In our model, the conversational initiative control is explained by the speaker's planning. In our model, control is transferred from the INITIATING CONVERSATIONAL PARTICIPANT (ICP) to the OTHER CONVERSATIONAL PARTICIPANT (OCP) when the utterance by the OCP is made based on the OCP's domain plan, not as a reply to the utterance made by the ICP based on the ICP's domain plan. Cases where no initiative control transfer takes place despite the utterance type (assertion, command or question) are those where the utterance is (1) an assertion which is a response by the ICP through ID-PARAM to answer a question, or (2) a question to clarify the command or question uttered by the ICP, which includes a question functioning as a clarification discourse plan. Our model provides an explanation for the initiative control rules proposed by [Walker and Whittaker, 1990] within the framework of the plan-based model of dialogue understanding; [Walker and Whittaker, 1990] only provide a descriptive explanation of this phenomenon.
Figure 8: State of the Stack after Utterance (3) in Dialogue II

6. Conclusion

In this paper we present an enhanced model of plan-based dialogue understanding. Our analysis demonstrates that the joint-plan assumption employed in the [Litman and Allen, 1987] model is too restrictive to track an individual speaker's instantiated plans, account for active planning behind speaker utterances, and track the transfer of conversational initiative control in dialogues, all of which characterize mixed-initiative dialogues. Our model employs speaker-specific domain plan libraries and the incremental expansion of domain plans to account for these mixed-initiative dialogue phenomena. We have used representative dialogues in two languages to demonstrate how our model accounts for these phenomena.

7. Acknowledgements

We would like to thank Dr. John Fought, Linguistics Department, University of Pennsylvania, for his help in collecting the Spanish travel agency domain corpus, and Mr. Hitoshi Iida and Dr. Akira Kurematsu for providing us with their Japanese ATR conference registration domain corpus. We also thank Mr. Ikuto Ishizuka, Hitachi, Japan, and Dr. Michael Mauldin, Center for Machine Translation, Carnegie Mellon University, for implementation support.

References

[Carberry, 1990] Carberry, S., Plan Recognition in Natural Language Dialogue, The MIT Press, 1990.
[Fikes and Nilsson, 1971] Fikes, R., and Nilsson, N., "STRIPS: A new approach to the application of theorem proving to problem solving," Artificial Intelligence, 2, 189-208, 1971.
[Grosz and Sidner, 1990] Grosz, B. and Sidner, C., "Plans for Discourse," in Cohen, Morgan and Pollack, eds., Intentions in Communication, MIT Press, Cambridge, MA, 1990.
[Litman and Allen, 1987] Litman, D. and Allen, J., "A Plan Recognition Model for Subdialogues in Conversation," Cognitive Science 11 (1987), 163-200.
[Pollack, 1990] Pollack, M., "Plans as Complex Mental Attitudes," in Cohen, Morgan and Pollack, eds., Intentions in Communication, MIT Press, Cambridge, MA, 1990.
[Sacerdoti, 1977] Sacerdoti, E. D., A Structure for Plans and Behavior, New York: American Elsevier, 1977.
[Tomita, 1985] Tomita, M., Efficient Algorithms for Parsing Natural Language, Kluwer Academic, 1985.
[Van Ess-Dykema and Kitano, Forthcoming] Van Ess-Dykema, C. and Kitano, H., Toward a Computational Understanding Model for Mixed-Initiative Telephone Dialogues, Carnegie Mellon University: Technical Report, (Forthcoming).
[Walker and Whittaker, 1990] Walker, M. and Whittaker, S., "Mixed Initiative in Dialogue: An Investigation into Discourse Segmentation," Proceedings of ACL-90, Pittsburgh, 1990.
[Yamaoka and Iida, 1990] Yamaoka, T. and Iida, H., "A Method to Predict the Next Utterance Using a Four-layered Plan Recognition Model," Proceedings of the European Conference on Artificial Intelligence, Stockholm, 1990.
1991
4
CONSTRAINT PROJECTION: AN EFFICIENT TREATMENT OF DISJUNCTIVE FEATURE DESCRIPTIONS Mikio Nakano NTT Basic Research Laboratories 3-9-11 Midori-cho, Musashino-shi, Tokyo 180 JAPAN e-mail: [email protected] Abstract Unification of disjunctive feature descriptions is important for efficient unification-based pars- ing. This paper presents constraint projection, a new method for unification of disjunctive fea- ture structures represented by logical constraints. Constraint projection is a generalization of con- straint unification, and is more efficient because constraint projection has a mechanism for aban- doning information irrelevant to a goal specified by a list of variables. 1 Introduction Unification is a central operation in recent com- putational linguistic research. Much work on syntactic theory and natural language parsing is based on unification because unification-based approaches have many advantages over other syn- tactic and computational theories. Unification- based formalisms make it easy to write a gram- mar. In particular, they allow rules and lexicon to be written declaratively and do not need trans- formations. Some problems remain, however. One of the main problems is the computational inefficiency of the unification of disjunctive feature struc- tures. Functional unification grammar (FUG) (Kay 1985) uses disjunctive feature structures for economical representation of lexical items. Using disjunctive feature structures reduces the num- ber of lexical items. However, if disjunctive fea- ture structures were expanded to disjunctive nor- mal form (DNF) 1 as in definite clause grammar (Pereira and Warren 1980) and Kay's parser (Kay 1985), unification would take exponential time in the number of disjuncts. Avoiding unnecessary expansion of disjunction is important for efficient disjunctive unification. Kasper (1987) and Eisele and DSrre (1988) have tackled this problem and proposed unification methods for disjunctive fea- ture descriptions. ~DNF has a form ¢bt Vq~ V¢3 V.-. Vq~n, where ¢i includes no disjunctions. These works are based on graph unification rather than on term unification. Graph unifica- tion has the advantage that the number of argu- ments is free and arguments are selected by la- bels so that it is easy to write a grammar and lexicon. Graph unification, however, has two dis- advantages: it takes excessive time to search for a specified feature and it requires much copying. We adopt term unification for these reasons. Although Eisele and DSrre (1988) have men- tioned that their algorithm is applicable to term unification as well as graph unification, this method would lose term unification's advantage of not requiring so much copying. On the con- trary, constraint unification (CU) (Hasida 1986, Tuda et al. 1989), a disjunctive unification method, makes full use of term unification ad- vantages. In CU, disjunctive feature structures are represented by logical constraints, particu- larly by Horn clauses, and unification is regarded as a constraint satisfaction problem. Further- more, solving a constraint satisfaction problem is identical to transforming a constraint into an equivalent and satisfiable constraint. CU unifies feature structures by transforming the constraints on them. The basic idea of CU is to transform constraints in a demand-driven way; that is, to transform only those constraints which may not be satisfiable. This is why CU is efficient and does not require excessive copying. However, CU has a serious disadvantage. 
It does not have a mechanism for abandoning irrel- evant information, so the number of arguments in constraint-terms (atomic formulas) becomes so large that transt'ormation takes much time. Therefore, from the viewpoint of general natu- ral language processing, although CU is suitable for processing logical constraints with small struc- tures, it is not suitable for constraints with large structures. This paper presents constraint projection (CP), another method for disjunctive unifica- tion. The basic idea of CP is to abandon in- formation irrelevant to goals. For example, in 307 bottom-up parsing, if grammar consists of local constraints as in contemporary unification-based formalisms, it is possible to abandon informa- tion about daughter nodes after the application of rules, because the feature structure of a mother node is determined only by the feature structures of its daughter nodes and phrase structure rules. Since abandoning irrelevant information makes the resulting structure tighter, another applica- tion of phrase structure rules to it will be efficient. We use the term projection in the sense that CP returns a projection of the input constraint on the specified variables. We explain how to express disjunctive feature structures by logical constraints in Section 2. Sec- tion 3 introduces CU and indicates its disadvan- tages. Section 4 explains the basic ideas and the algorithm of CP. Section 5 presents some results of implementation and shows that adopting CP makes parsing efficient. 2 Expressing Disjunctive Feature Structures by Logical Constraints This section explains the representation of dis- junctive feature structures by Horn clauses. We use the DEC-10 Prolog notation for writing Horn clauses. First, we can express a feature structure with- out disjunctions by a logical term. For example, (1) is translated into (2). FP°'" ] (1) / agr [num sin L subj [agr Inure [per ~irndg ] ] (2) cat (v, agr (sing, 3rd), cat (_, agr (sing, 3rd), _) ) The arguments of the functor cat correspond to the pos (part of speech), agr (agreement), and snbj (subject) features. Disjunction and sharing are represented by the bodies of Horn clauses. An atomic formula in the body whose predicate has multiple defini- tion clauses represents a disjunction. For exam- ple, a disjunctive feature structure (3) in FUG (Kay 1985) notation, is translated into (4). "pos v { [numsing .] }...~ plural] agr [] [per j 1st t/ 12nd j'J (3) subj [ gr ! [num L agr per (4) p(cat (v, Agr, cat (_, Agr,_))) • - not_3s (Agr). p(cat (n, agr (s ing, 3rd), _) ). not_3s ( agr ( sing, Per) ) : - Ist_or_2nd (Per). not_3s (agr(plural, _)). Ist_or_2nd(Ist). Ist_or_2nd(2nd). Here, the predicate p corresponds to the specifica- tion of the feature structure. A term p(X) means that the variable I is a candidate of the disjunc- tive feature structure specified by the predicate p. The ANY value used in FUG or the value of an unspecified feature can be represented by an anonymous variable '_'. We consider atomic formulas to be constraints on the variables they include. The atomic formula lst_or_2nd(Per) in (4) constrains the variable Per to be either 1st or hd. In a similar way, not_3s (Agr) means that Agr is a term which has the form agr(l~um,Per), and that//am is sing and Per is subject to the constraint lst_or_2nd(Per) or that }lure is plural. We do not use or consider predicates with- out their definition clauses because they make no sense as constraints. 
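To make the reading of (4) concrete, the following toy sketch enumerates the agreement values licensed by the disjunctive constraint not_3s. The Python encoding of terms as tuples is an assumption made only for illustration, and spelling the disjuncts out in this way is exactly the DNF expansion that the methods discussed below are designed to avoid.

# Illustrative sketch: the constraint-term not_3s(Agr) from (4) denotes a
# set of agreement values, one branch per definition clause.

FIRST_OR_SECOND = ["1st", "2nd"]     # the two facts defining 1st_or_2nd in (4)

def not_3s_solutions():
    """Enumerate agr(Num, Per) terms satisfying not_3s, clause by clause."""
    # Clause 1: not_3s(agr(sing, Per)) :- 1st_or_2nd(Per).
    for per in FIRST_OR_SECOND:
        yield ("agr", "sing", per)
    # Clause 2: not_3s(agr(plural, _)).  The person value is unconstrained.
    yield ("agr", "plural", "_")

print(list(not_3s_solutions()))
# [('agr', 'sing', '1st'), ('agr', 'sing', '2nd'), ('agr', 'plural', '_')]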
We call an atomic formula whose predicate has definition clauses a constraint-term, and we call a sequence of constraint-terms a constraint. A set of definition clauses like (4) is called a structure of a constraint. Phrase structure rules are also represented by logical constraints. For example, If rules are bi- nary and if L, R, and M stand for the left daughter, the right daughter, and the mother, respectively, they stand in a ternary relation, which we repre- sent as psr(L,R,M). Each definition clause ofpsr corresponds to a phrase structure rule. Clause (5) is an example. (5) psr(Subj, cat (v, Agr, Subj ), cat ( s, Agr, _) ). Definition clauses ofpsr may have their own bod- ies. If a disjunctive feature structure is specified by a constraint-term p(X) and another is specified by q(Y), the unification of X and Y is equivalent to the problem of finding X which satisfies (6). (6) [p(X),q(X)] Thus a unification of disjunctive feature struc- tures is equivalent to a constraint satisfaction problem. An application of a phrase structure rule also can be considered to be a constraint sat- isfaction problem. For instance, if categories of left daughter and right daughter are stipulated by el(L) and c2(R), computing a mother cate- gory is equivalent to finding M which satisfies con- straint (7). (7) [cl (L), c2 (R) ,psr (L,R, M)] A Prolog call like (8) realizes this constraint 308 satisfaction. (8) :-el (L), c2(R) ,psr (L,R,M), assert (c3(M)) ,fail. This method, however, is inefficient. Since Pro- log chooses one definition clause when multiple definition clauses are available, it must repeat a procedure many times. This method is equivalent to expanding disjunctions to DNF before unifica- tion. 3 Constraint Unification and Its Problem This section explains constraint unification ~ (Hasida 1986, Tuda et al. 1989), a method of dis- junctive unification, and indicates its disadvan- tage. 3.1 Basic Ideas of Constraint Unification As mentioned in Section 1, we can solve a con- straint satisfaction problem by constraint trans- formation. What we seek is an efficient algo- rithm of transformation whose resulting structure is guaranteed satisfiability and includes a small number of disjuncts. CU is a constraint transformation system which avoids excessive expansion of disjunctions. The goal of CU is to transform an input con- straint to a modular constraint. Modular con- straints are defined as follows. (9) (Definition: modular) A constraint is mod- ular, iff 1. every argument of every atomic formula is a variable, 2. no variable occurs in two distinct places, and 3. every predicate is modularly defined. A predicate is modularly defined iff the bodies of its definition clauses are either modular or NIL. For example, (10) is a modular constraint, while (11), (12), and (13) are not modular, when all the predicates are modularly defined. (10) [p(X,Y) ,q(Z,•)] (11) [p(X.X)] (12) [p(X,¥) ,q(Y.Z)] (13) [pCf(a) ,g(Z))] Constraint (10) is satisfiable because the predi- cates have definition clauses. Omitting the proof, a modular constraint is necessarily satisfiable. Transforming a constraint into a modular one is equivalent to finding the set of instances which satisfy the constraint. On the contrary, non- modular constraint may not be satisfiable. When ~Constralnt unification is called conditioned unifi- cation in earlier papers. a constraint is not modular, it is said to have de- pendencies. For example, (12) has a dependency concerning ¥. 
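The modularity conditions in (9) can be checked mechanically. The sketch below is an assumed encoding (atomic formulas as tuples, variables as capitalized strings, a dict from predicate names to lists of clause bodies), not the paper's implementation, and it does not guard against recursively defined predicates.

# Sketch of the modularity test in definition (9).

def is_var(term):
    return isinstance(term, str) and term[0].isupper()

def is_modular(constraint, definitions):
    """Check conditions 1-3 of (9) for a constraint, i.e. a list of atomic
    formulas of the form (predicate, arg1, ..., argn)."""
    seen = set()
    for pred, *args in constraint:
        for a in args:
            if not is_var(a):       # condition 1: every argument is a variable
                return False
            if a in seen:           # condition 2: no variable occurs twice
                return False
            seen.add(a)
        if not modularly_defined(pred, definitions):   # condition 3
            return False
    return True

def modularly_defined(pred, definitions):
    """A predicate is modularly defined iff the body of each of its
    definition clauses is NIL or modular (checked recursively)."""
    return all(body == [] or is_modular(body, definitions)
               for body in definitions.get(pred, []))

# Examples (10)-(12), with p, q, r modularly defined by facts:
defs = {"p": [[]], "q": [[]], "r": [[]]}
print(is_modular([("p", "X", "Y"), ("q", "Z", "W")], defs))   # True  -- (10)
print(is_modular([("p", "X", "X")], defs))                    # False -- (11)
print(is_modular([("p", "X", "Y"), ("q", "Y", "Z")], defs))   # False -- (12)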
The main ideas of CU are (a) it classi- fies constraint-terms in the input constraint into groups so that they do not share a variable and it transforms them into modular constraints sepa- rately, and (b) it does not transform modular con- straints. Briefly, CU processes only constraints which have dependencies. This corresponds to avoiding unnecessary expansion of disjunctions. In CU, the order of processes is decided accord- ing to dependencies. This flexibility enables CU to reduce the amount of processing. We explain these ideas and the algorithm of CU briefly through an example. CU consists of two functions, namely, modularize(constraint) and integrate(constraint). We can execute CU by calling modularize. Function modularize di- vides the input constraint into several constraints, and returns a list of their integrations. If one of the integrations fails, modularization also fails. The function integrate creates a new constraint- term equivalent to the input constraint, finds its modular definition clauses, and returns the new constraint-term. Functions rnodularize and integrate call each other. Let us consider the execution of (14). (14) modularize( [p(X, Y), q(Y. Z), p(A. B) ,r(A) ,r(C)]) The predicates are defined as follows. (15) pCfCA),C):-rCA),rCC). (16) p(a.b). (17) q(a,b). (18) q(b,a). (19) rCa). (20) r(b). The input constraint is divided into (21), (22), and (23), which are processed independently (idea (a)). (21) [p(x,Y),q(Y,z)] (22) [p(A,B) ,r(A)] (23) [r(C)] If the input constraint were not divided and (21) had multiple solutions, the processing of (22) would be repeated many times. This is one rea- son for the efficiency of CU. Constraint (23) is not transformed because it is already modular (idea (b)). Prolog would exploit the definition clauses of r and expend unnecessary computation time. This is another reason for CU's efficiency. To transform (21) and (22) into modular constraint-terms, (24) and (25) are called. (24) integrate([p(X,Y),q(Y, Z)]) (25) integrate([p(A,B), r(A)]) 309 Since (24~ and (25) succeed and return e0(X,Y,Z)" and el(A,B), respectively, (14) re- turns (26). (26) [c0(X,Y,Z), el (A,B) ,r(C)] This modularization would fail if either (24) or (25) failed. Next, we explain integrate through the exe- cution of (24). First, a new predicate c0 is made so that we can suppose (27). (27) cO (X,Y, Z) 4=:#p(X,Y), q(Y,Z) Formula (27) means that (24) returns c0(X,Y,Z) if the constraint [p(X,Y) ,q(Y,Z)] is satisfiable; that is, e0(X,¥,Z) can be modularly defined so that c0(X,Y,Z) and p(X,Y),q(Y,Z) constrain X, Y, and Z in the same way. Next, a target constraint-term is chosen. Although some heuris- tics may be applicable to this choice, we simply choose the first element p(X,Y) here. Then, the definition clauses of p are consulted. Note that this corresponds to the expansion of a disjunc- tion. First, (15) is exploited. The head of (15) is unified with p(X,Y) in (27) so that (27) be- comes (28). (28) c0(~ CA) ,C,Z)C=~r(A) ,r(C) ,q(C,Z) The term p(f(A),C) has been replaced by its body r(A),r(C) in the right-hand side of (28). Formula (28) means that cO(f (A) ,C,Z) is true if the variables satisfy the right-hand side of (28). Since the right-hand side of (28) is not modu- lar, (29) is called and it must return a constraint like (30). (29) modularize(Er(A) ,rCC), qCC, Z)'l) (30) It(A) ,c2(C,Z)] Then, (31) is created as a definition clause of cO. (31) cOCf(l) ,C,Z):-rCA) ,c2(C,Z). Second, (16) is exploited. 
Then, (28) be- comes (32), (33) is called and returns (34), and (35) is created. (32) c0(a,b,Z) ¢==~q(b,Z) (33) modularize( [q(b,Z) ] ) (34) [c3(Z)] (35) cO(a,b,Z):-c3(Z). As a result, (24) returns c0(X,Y,Z) because its definition clauses are made. All the Horn clauses made in this CU invoked by (14) are shown in (36). (36) c0(fCA) ,C,Z) :-r(A) ,c2(C,Z). c0(a,b,Z) :-c3(Z). c2(a,b). aWe use cn (n = 0, 1, 2,.- -) for the names of newly- made predicates. c2(b,a). c3(a). cl(a,b). When a new clause is created, if the predicate of a term in its body has only one definition clause, the term is unified with the head of the definition clause and is replaced by the body. This opera- tion is called reduction. For example, the second clause of (36) is reduced to (37) because c3 has only one definition clause. (37) c0(a,b,a). CU has another operation called folding. It avoids repeating the same type of integrations so that it makes the transformation efficient. Folding also enables CU to handle some of the recursively-defined predicates such as member and append. 3.2 Parsing with Constraint Unification We adopt the CYK algorithm (Aho and Ull- man 1972) for simplicity, although any algorithms may be adopted. Suppose the constraint-term caZ_n_m(X) means X is the category of a phrase from the (n + 1)th word to the ruth word in an input sentence. Then, application of a phrase structure rule is reduced to creating Horn clauses like (38). (38) ¢at_n_m(M) :- modularize( Ecat_n_k (L), cat_k_m(R), psr(L,R,M)]). (2<re<l, 0<n<m - 2, n + l<_k<m - 1, where I is the sentence length.) The body of the created clause is the constraint returned by the modularization in the right-hand side. If the modularization fails, the clause is not created. 3.3 Problem of Constraint Unification The main problem of a CU-based parser is that the number of constraint-term arguments increases as parsing proceeds. For example, cat_0_2(M) is computed by (39). (39) modularize([cat_O_l (L), cat_l_2 (R), psr(L,R,M)]) This returns a constraint like [cO(L,R,N)]. Then (40) is created. (40) cat_0 2(M):-c0(L,R,M). Next, suppose that (40) is exploited in the follow- ing application of rules. (41) modularize( [cat_0_2(M), cat_2_3(Rl), psr(M,RI,MI)]) 310 Then (42) will be called. (42) modutarize( leo (L, It, H), cat_2_3(R1), psr(H,Rl,M1)]) It returns a constraint like cl(L,R,M,R1,M1). Thus the number of the constraint-term argu- ments increases. This causes computation time explosion for two reasons: (a) the augmentation of arguments increases the computation time for making new terms and environments, dividing into groups, unification, and so on, and (b) resulting struc- tures may include excessive disjunctions because of the ambiguity of features irrelevant to the mother categories. 4 Constraint Projection This section describes constraint projection (CP), which is a generalization of CU and overcomes the disadvantage explained in the previous section. 4.1 Basic Ideas of Constraint Projection Inefficiency of parsing based on CU is caused by keeping information about daughter nodes. Such information can be abandoned if it is assumed that we want only information about mother nodes. That is, transformation (43) is more useful in parsing than (44). (43) rclCL),c2CR),psrCL,a,H)'l ~ [c3(H)] (44) [cl (L), c2(R) ,psr(L,R,H)] :=~ [c3(L,R,R)] Constraint [c3(M)] in (43) must be satisfiable and equivalent to the left-hand side concerning H. Since [c3(M)] includes only information about H, it must be a normal constraint, which is defined in (45). 
(45) (Definition: Normal) A constraint is normal iff (a) it is modular, and (b) each definition clause is a normal defini- tion clause; that is, its body does not include variables which do not appear in the head. For example, (46) is a normal definition clause while (47) is not. (46) p(a,X) :-r(X). (47) q(X) :-s(X,¥). The operation (43) is generalized into a new operation constraint projection which is defined in (48). (48) Given a constraint C and a list of variables which we call goal, CP returns a normal con- straint which is equivalent to C concerning the variables in the goal, and includes only variables in the goal. * Symbols used: - X, Y .... ; lists of variables. - P, Q .... ; constraint-terms or sometimes "fail". - P, Q .... ; constraints or sometimes "fail". - H, ~ .... ; lists of constraints. • project(P, X) returns a normal constraint (list of atomic formulas) on X. 1. If P = NIL then return NIL. 2. IfX=NIL, If not(satisfiable(P)), then return "fail", Else return NIL. 3. II := divide(P). 4. Hin := the list of the members of H which include variables in X. 5. ]-[ex :--- the list of the members of H other than the members of ~in. 6. For each member R of ]]cx, If not(satisfiable(R)) then return "fail" 7. S := NIL. 8. For each member T of Hi,=: -V := intersection(X, variables ap- pearing in T). - R := normalize(T, V). If R = 'faT', then return "fail", Else add R to S. 9. Return S. • normalize(S, V) returns a normal constraint- term (atomic formula) on V. 1. If S does not include variables appearing in V, and S consists of a modular term, then Return S. 2. S := a member of S that includes a variable in V. 3. S' := the rest of S. 4. C := a term c.(v], v2 ..... vn). where v], .... vn are all the members of V and c. is a new functor. 5. success-flag := NIL. 6. For each definition clause H :- B. of the predicate of S: - 0 := mgu(S, H). If 0 = fail, go to the next definition clause. - X := a list of variables in C8. - Q := pro~ect(append(BO, S'0), X ). If. Q = fall, then go to the next defini- tton clause Else add C0:-Q. to the database with reduction. 7. If success-flag = NIL, then return "fail", else return C. • mgu returns the most general unifier (Lloyd 1984) • divide(P) divides P into a number of constraints which share no variables and returns the list of the constraints. • satisfiable(P) returns T if P is satisfiable, and NIL otherwise. (satisfiable is a slight modifica- tion of modularize of CU.) Figure 1: Algorithm of Constraint Projection 311 project([p(X,Y),q(Y,Z),p(A,S),r(A),r(e)],[X,e]) [pll,Y[,qlT.gll [plA,B),z(l)li [r(Cll ~heck normalize([pll,[l,qlT,Zll,[g]) ~a|isfiabilit~ cO(l) r(C) I I [co(l).r(c)] Figure 2: A Sample Execution of project CP also divides input constraint C into several constraints according to dependencies, and trans- forms them separately. The divided constraints are classified into two groups: constraints which include variables in the goal, and the others. We call the former goal-relevant constraints and the latter goal-irrelevant constraints. Only goal- relevant constraints are transformed into normal constraints. As for goal-irrelevant constraints, only their satisfiability is examined, because they are no longer used and examining satisfiability is easier than transforming. This is a reason for the efficiency of CP. 4.2 Algorithm of Constraint Projection CP consists of two functions, project(constraint, goal(variable list)) and normalize(constraint, goal(variable list)), which respectively correspond to modularize and integrate in CU. 
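Before the detailed walk-through, the overall shape of project can be sketched as follows. This is an illustrative simplification of Figure 1: satisfiable and normalize are passed in as stubs standing for the paper's functions of the same name, and, unlike the paper, the sketch re-normalizes goal-relevant groups even when they are already normal.

# High-level sketch of project(P, X): divide the constraint into
# variable-disjoint groups, check goal-irrelevant groups only for
# satisfiability, and normalize goal-relevant groups.

def variables_of(constraint):
    return {a for _, *args in constraint for a in args if a[0].isupper()}

def divide(constraint):
    """Split a constraint into groups that share no variables (cf. divide)."""
    groups = []
    for term in constraint:
        tvars = variables_of([term])
        linked = [g for g in groups if variables_of(g) & tvars]
        for g in linked:
            groups.remove(g)
        groups.append(sum(linked, []) + [term])
    return groups

def project(constraint, goal, satisfiable, normalize):
    if not constraint:
        return []
    if not goal:
        return [] if satisfiable(constraint) else "fail"
    result = []
    for group in divide(constraint):
        shared_goal = variables_of(group) & set(goal)
        if not shared_goal:                    # goal-irrelevant group:
            if not satisfiable(group):         # only check satisfiability,
                return "fail"                  # then abandon it
        else:                                  # goal-relevant group:
            normal = normalize(group, sorted(shared_goal))
            if normal == "fail":
                return "fail"
            result.append(normal)
    return result

# Toy demo with stubbed helpers, mirroring the shape of example (49):
demo = [("p", "X", "Y"), ("q", "Y", "Z"), ("p", "A", "B"), ("r", "A"), ("r", "C")]
print(project(demo, ["X", "C"],
              satisfiable=lambda g: True,
              normalize=lambda g, v: ("c", *v)))
# [('c', 'X'), ('c', 'C')]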
We can ex- ecute CP by calling project. The algorithm of constraint projection is shown in Figure 14. We explain the algorithm of CP through the execution of (49). (49) project( [p(X,Y) ,q(Y ,Z) ,p(A,B) ,r(A) ,r (C)], Ix,c]) The predicates are defined in the same way as (15) to (20). This execution is illustrated in Figure 2. First, the input constraint is divided into (50), (51) and (52) according to dependency. (50) [p(x,Y),q(~,z)] (51) [p(A,B) ,r(h)] (52) [r(C)] Constraints (50) and (52) are goal-relevant be- cause they include X and C, respectively. Since 4Since the current version of CP does not have an operation corresponding to folding, it cannot handle recursively-defined predicates. normalize( I'p (X, Y) ,q(Y ,Z)], [X]) / ¢o(x)o ~(x Y) (Y Z) PJ " ,q • exploit ~ p(f(l),C):-r(l),r(C). l unify cO(I(A))o r(A),r(C),q(C,Z) t project([rlll.,rlCl,qlC,g)],[l]) [rlJll a$$erf cO(f(l)):-r(l). I cO(I) e~loit ~ p(a,b). [ uniJ~ cO(a)CO, q(b,g) t r~ojea(ta(b, .z)], tl) t O a~sert t cO(a). I Figure 3: A Sample Execution of normalize (51) is goal-irrelevant, only its satisfiability is ex- amined and confirmed. If some goal-irrelevant constraints were proved not satisfiable, the pro- jection would fail. Constraint (52) is already nor- mal, so it is not processed. Then (53) is called to transform (50). (53) normalize ( [p(X, Y), q(¥, Z) ], [X]) The second argument (goal) is the list of variables that appear in both (50) and the goal of (49). Since this normalization must return a constraint like [c0(X)], (49) returns (54). (54) [c0(X) ,r(C)] This includes only variables in the goal. This con- straint has a tighter structure than (26). Next, we explain the function normalize through the execution of (53). This execution is illustrated in Figure 3. First, a new term c0(X) is made so that we can suppose (55). Its arguments are all the variables in the goal. (55) c0 (x)c=~p(x,Y) ,q(Y,Z) The normal definition of cO should be found. Since a target constraint must include a variable in the goal, p(X,Y) is chosen. The definition clauses of p are (15) and (16). (15) pCfCA) ,C) :-rCA),r(C). (16) p(a,b). The clause (15) is exploited at first. Its head is unified with p(X,Y) in (55) so that (55) becomes (56). (If this unification failed, the next definition clause would be exploited.) (56) c0 (f CA)) ¢=:¢,r (A) ,r (C), q(C, Z) Tlm right-hand side includes some variables which 312 do not appear in the left-hand side. Therefore, (57) is called. (57) project([r(h),r(C),q(C,Z)], [AJ) This returns r(A), and (58) is created. (58) c0(f(a)):-r(A). Second, (16) is exploited and (59) is created in the same way. (59) c0(a). Consequently, (53) returns c0(X) because some definition clauses of cO have been created. All the Horn clauses created in this CP are shown in (60). (60) c0(f(A)) :-r(A). cO(a). Comparing (60) with (36), we see that CP not only is efficient but also needs less memory space than CU. 4.3 Parsing with Constraint Projection We can construct a CYK parser by using CP as in (61). (61) cat_n_m(M) "- project( [cat_ n_k (L), cat_k_m(R), psr(L,R,M)], [.] ). (2<m<l, 0<n<m - 2, n + l<k<m - 1, where l is the sentence length.) For a simple example, let us consider parsing the sentence "Japanese work." by the following projection. (62) project([cat_of_japanese(L), cat_of_work (R). psr(L,R,M)], [M] ) The rules and leyScon are defined as follows: (63) psr(n(Num,Per), v(Num,Per, Tense), s (Tense)). (64) cat_of_j apanes e (n (Num, third) ). (65) cat_of_work (v (Num, Per, present) ) : -not_3s (Num, Per). 
(66) not_3s(plural, _).
(67) not_3s(singular, Per) :- first_or_second(Per).
(68) first_or_second(first).
(69) first_or_second(second).

Since the constraint cannot be divided, (70) is called.

(70) normalize([cat_of_japanese(L), cat_of_work(R), psr(L,R,M)], [M])

The new term c0(M) is made, and (63) is exploited. Then (71) is to be created if its right-hand side succeeds.

(71) c0(s(Tense)) :- project([cat_of_japanese(n(Num,Per)), cat_of_work(v(Num,Per,Tense))], [Tense]).

This projection calls (72).

(72) normalize([cat_of_japanese(n(Num,Per)), cat_of_work(v(Num,Per,Tense))], [Tense])

A new term c1(Tense) is made, and (65) is exploited. Then (73) is to be created if the right-hand side succeeds.

(73) c1(present) :- project([cat_of_japanese(n(Num,Per)), not_3s(Num,Per)], []).

Since the first argument of the projection is satisfiable, it returns NIL. Therefore, (74) is created, and (75) is created since the right-hand side of (71) returns c1(Tense).

(74) c1(present).
(75) c0(s(Tense)) :- c1(Tense).

When asserted, (75) is reduced to (76).

(76) c0(s(present)).

Consequently, [c0(M)] is returned. Thus CP can be applied to CYK parsing, but needless to say, CP can also be applied to parsing algorithms other than CYK, such as active chart parsing.

5 Implementation

Both CU and CP have been implemented in Sun Common Lisp 3.0 on a Sun 4 SPARCstation 1. They are based on a small Prolog interpreter written in Lisp, so that they use the same non-disjunctive unification mechanism. We also implemented three CYK parsers that adopt Prolog, CU, and CP as the disjunctive unification mechanism. Grammar and lexicon are based on HPSG (Pollard and Sag 1987). Each lexical item has about three disjuncts on average. Table 1 shows a comparison of the computation time of the three parsers. It indicates that CU is not as efficient as CP when the input sentences are long.

Table 1: Computation Time (CPU time in sec.)

Input sentence                                     Prolog            CU       CP
He wanted to be a doctor.                          3.88              6.88     5.64
You were a doctor when you were young.             29.84             19.54    12.49
I saw a man with a telescope on the hill.          (out of memory)   245.34   17.32
He wanted to be a doctor when he was a student.    65.27             19.34    14.66

6 Related Work

In the context of graph unification, Carter (1990) proposed a bottom-up parsing method which abandons information irrelevant to the mother structures. His method, however, fails to check the inconsistency of the abandoned information. Furthermore, it abandons irrelevant information after the application of the rule is completed, while CP abandons goal-irrelevant constraints dynamically in its processes. This is another reason why our method is better. Another advantage of CP is that it does not need much copying. CP copies only the Horn clauses which are to be exploited. This is why CP is expected to be more efficient and need less memory space than other disjunctive unification methods.

Hasida (1990) proposed another method, called dependency propagation, for overcoming the problem explained in Section 3.3. It uses transclausal variables for efficient detection of dependencies. Under the assumption that information about daughter categories can be abandoned, however, CP should be more efficient because of its simplicity.

7 Concluding Remarks

We have presented constraint projection, a new operation for efficient disjunctive unification. The important feature of CP is that it returns constraints only on the specified variables. CP can be considered not only as a disjunctive unification method but also as a logical inference system. Therefore, it is expected to play an important role in synthesizing linguistic analyses, such as parsing and semantic analysis, and linguistic and non-linguistic inferences.

Acknowledgments

I would like to thank Kiyoshi Kogure and Akira Shimazu for their helpful comments. I had precious discussions with Kôichi Hasida and Hiroshi Tuda concerning constraint unification.

References

Aho, A. V. and Ullman, J. D. (1972) The Theory of Parsing, Translation, and Compiling, Volume I: Parsing. Prentice-Hall.
Carter, D. (1990) Efficient Disjunctive Unification for Bottom-Up Parsing. In Proceedings of the 13th International Conference on Computational Linguistics, Volume 3, pages 70-75.
Eisele, A. and Dörre, J. (1988) Unification of Disjunctive Feature Descriptions. In Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics.
Hasida, K. (1986) Conditioned Unification for Natural Language Processing. In Proceedings of the 11th International Conference on Computational Linguistics, pages 85-87.
Hasida, K. (1990) Sentence Processing as Constraint Transformation. In Proceedings of the 9th European Conference on Artificial Intelligence, pages 339-344.
Kasper, R. T. (1987) A Unification Method for Disjunctive Feature Descriptions. In Proceedings of the 25th Annual Meeting of the Association for Computational Linguistics, pages 235-242.
Kay, M. (1985) Parsing in Functional Unification Grammar. In Natural Language Parsing: Psychological, Computational and Theoretical Perspectives, pages 251-278. Cambridge University Press.
Lloyd, J. W. (1984) Foundations of Logic Programming. Springer-Verlag.
Pereira, F. C. N. and Warren, D. H. D. (1980) Definite Clause Grammar for Language Analysis--A Survey of the Formalism and a Comparison with Augmented Transition Networks. Artificial Intelligence, 13:231-278.
Pollard, C. J. and Sag, I. A. (1987) Information-Based Syntax and Semantics, Volume 1: Fundamentals. CSLI Lecture Notes Series No. 13. Stanford: CSLI.
Tuda, H., Hasida, K., and Sirai, H. (1989) JPSG Parser on Constraint Logic Programming. In Proceedings of the 4th Conference of the European Chapter of the Association for Computational Linguistics, pages 95-102.
1991
40
Quasi-Destructive Graph Unification Hideto Tomabechi Carnegie Mellon University ATR Interpreting Telephony 109 EDSH, Pittsburgh, PA 15213-3890 Research Laboratories* [email protected] Seika-cho, Sorakugun, Kyoto 619-02 JAPAN ABSTRACT Graph unification is the most expensive part of unification-based grammar parsing. It of- ten takes over 90% of the total parsing time of a sentence. We focus on two speed-up elements in the design of unification algo- rithms: 1) elimination of excessive copying by only copying successful unifications, 2) Finding unification failures as soon as possi- ble. We have developed a scheme to attain these two elements without expensive over- head through temporarily modifying graphs during unification to eliminate copying dur- ing unification. We found that parsing rel- atively long sentences (requiring about 500 top-level unifications during a parse) using our algorithm is approximately twice as fast as parsing the same sentences using Wrob- lewski's algorithm. 1. Motivation Graph unification is the most expensive part of unification-based grammar parsing systems. For ex- ample, in the three types of parsing systems currently used at ATR ], all of which use graph unification algo- rithms based on [Wroblewski, 1987], unification oper- ations consume 85 to 90 percent of the total cpu time devoted to a parse. 2 The number of unification opera- tions per sentence tends to grow as the grammar gets larger and more complicated. An unavoidable paradox is that when the natural language system gets larger and the coverage of linguistic phenomena increases the writers of natural language grammars tend to rely more on deeper and more complex path equations (cy- cles and frequent reentrancy) to lessen the complexity of writing the grammar. As a result, we have seen that the number of unification operations increases rapidly as the coverage of the grammar grows in contrast to the parsing algorithm itself which does not seem to *Visiting Research Scientist. Local email address: tomabech%al~-la.al~.co.jp@ uunet.UU.NET. 1The three parsing systems are based on: 1. Earley's algorithm, 2. active chartparsing, 3. generalized LR parsing. 2In the large-scale HPSG-based spoken Japanese analy- sis system developed at ATR, sometimes 98 percent of the elapsed time is devoted to graph unification ([Kogure, 1990]). grow so quickly. Thus, it makes sense to speed up the unification operations to improve the total speed performance of the natural language systems. Our original unification algorithm was based on [Wroblewskl, 1987] which was chosen in 1988 as the then fastest algorithm available for our applica- tion (HPSG based unification grammar, three types of parsers (Earley, Tomita-LR, and active chart), unifica- tion with variables and cycles 3 combined with Kasper's ([Kasper, 1987]) scheme for handling disjunctions. In designing the graph unification algorithm, we have made the following observation which influenced the basic design of the new algorithm described in this paper: Unification does not always succeed. As we will see from the data presented in a later section, when our parsing system operates with a relatively small grammar, about 60 percent of the unifications attempted during a successful parse result in failure. If a unification falls, any computation performed and memory consumed during the unification is wasted. As the grammar size increases, the number of unification failures for each successful parse increases 4. 
Without completely rewriting the grammar and the parser, it seems difficult to shift any significant amount of the computational burden to the parser in order to reduce the number of unification failures 5. Another problem that we would like to address in our design, which seems to be well documented in the existing literature is that: Copying is an expensive operation. The copying of a node is a heavy burden to the pars- ing system. [Wroblewski, 1987] calls it a "computa- tional sink". Copying is expensive in two ways: 1) it takes time; 2) it takes space. Copying takes time and space essentially because the area in the random access memory needs to be dynamically allocated which is an expensive operation. [Godden, 1990] calculates the computation time cost of copying to be about 67 per- 3Please refer to [Kogure, 1989] for trivial time modifica- tion of Wroblewski's algorithm to handle cycles. 4We estimate over 80% of unifications to be failures in our large-scale speech-to-speech translation system under development. 5Of course, whether that will improve the overall perfor- mance is another question. 315 cent of the total parsing time in his TIME parsing sys- tem. This time/space burden of copying is non-trivial when we consider the fact that creation of unneces- sary copies will eventually trigger garbage collections more often (in a Lisp environment) which will also slow down the overall performance of the parsing sys- tem. In general, parsing systems are always short of memory space (such as large LR tables of Tomita-LR parsers and expan~ng tables and charts of Farley and active chart parsers"), and the marginal addition or sub- traction of the amount of memory space consumed by other parts of the system often has critical effects on the performance of these systems. Considering the aforementioned problems, we pro- pose the following principles to be the desirable con- ditions for a fast graph unification algorithm: • Copying should be performed only for success- ful unifications. • Unification failures should be found as soon as possible. By way of definition we would like to categorize ex- cessive copying of dags into Over Copying and Early Copying. Our definition of over copying is the same as Wroblewski's; however, our definition of early copying is slightly different. • Over Copying: Two dags are created in order to create one new dag. - This typically happens when copies of two input dags are created prior to a destructive unification operation to build one new dag. ([Godden, 1990] calls such a unifica- tion: Eager Unification.). When two arcs point to the same node, over copying is often unavoidable with incremental copying schemes. • Early Copying: Copies are created prior to the failure of unification so that copies created since the beginning of the unification up to the point of failure are wasted. Wroblewski defines Early Copying as follows: "The argument dags are copied before unification started. If the unification falls then some of the copying is wasted effort" and restricts early copying to cases that only apply to copies that are created prior to a unification. Restricting early copying to copies that are made prior to a unification leaves a number of wasted copies that are created during a unification up to the point of failure to be uncovered by either of the above definitions for excessive copying. 
We would like Early Copying to mean all copies that are wasted due to a unification fail- ure whether these copies are created before or during the actual unification operations. Incremental copying has been accepted as an effec- tive method of minimizing over copying and eliminat- 6For example, our phoneme-based generalized LR parser for speech input is always running on a swapping space be- cause the LR table is too big. ing early copying as defined by Wroblewski. How- ever, while being effective in minimizing over copying (it over copies only in some cases of convergent arcs into one node), incremental copying is ineffective in eliminating early copying as we define it. 7 Incremen- tal copying is ineffective in eliminating early copying because when a gra_ph unification algorithm recurses for shared arcs (i.e. the arcs with labels that exist in both input graphs), each created unification operation recursing into each shared arc is independent of other recursive calls into other arcs. In other words, the re- cursive calls into shared arcs are non-deterministic and there is no way for one particular recursion into a shared arc to know the result of future recursions into other shared arcs. Thus even if a particular recursion into one arc succeeds (with minimum over copying and no early copying in Wroblewski's sense), other arcs may eventually fail and thus the copies that are created in the successful arcs are all wasted. We consider it a drawback of incremental copying schemes that copies that are incrementally created up to the point of fail- ure get wasted. This problem will be particularly felt when we consider parallel implementations of incre- mental copying algorithms. Because each recursion into shared arcs is non-deterministic,parallel processes can be created to work concurrently on all arcs. In each of the parallelly created processes for each shared arc, another recursion may take place creating more paral- lel processes. While some parallel recursive call into some arc may take time (due to a large number of sub- arcs, etc.) another non-deterministic call to other arcs may proceed deeper and deeper creating a large num- ber of parallel processes. In the meantime, copies are incrementally created at different depths of subgraphs as long as the subgraphs of each of them are unified successfully. This way, when a failure is finally de- tected at some deep location in some subgraph, other numerous processes may have created a large number of copies that are wasted. Thus, early copying will be a significant problem when we consider the possibility of parallelizing the unification algorithms as well. 2. Our Scheme We would like to introduce an algorithm which ad- dresses the criteria for fast unification discussed in the previous sections. It also handles cycles without over copying (without any additional schemes such as those introduced by [Kogure, 1989]). As a data structure, a node is represented with eight fields: type, arc-list, comp-arc-list, forward, copy, comp-arc-mark, forward-mark, and copy-mark. Al- though this number may seem high for a graph node data structure, the amount of memory consumed is not significantly different from that consumed by other 7'Early copying' will henceforth be used to refer to early copying as defined by us. 316 algorithms. Type can be represented by three bits; comp-arc-mark, forward-mark, and copy-mark can be represented by short integers (i.e. 
fixnums); and comp- arc-list (just like arc-lis0 is a mere collection of pointers to memory locations. Thus this additional information is trivial in terms of memory cells consumed and be- cause of this dam structure the unification algorithm itself can remain simple. NODE type + ............... + arc-list + ............... + comp-arc-list + ............... + forward + ............... + copy + ............... + comp-arc-mark + ............... + forward-mark + ............... + copy-mark ARC I label I + ............... + I value I + ............... + Figure 1: Node and Arc Structures The representation for an arc is no different from that of other unification algorithms. Each arc has two fields for 'label' and 'value'. 'Label' is an atomic symbol which labels the arc, and 'value' is a pointer to a node. The central notion of our algorithm is the depen- dency of the representational content on the global timing clock (or the global counter for the current generation of unification algorithms). This scheme was used in [Wroblewski, 1987] to invalidate the copy field of a node after one unification by incrementing a global counter. This is an extremely cheap operation but has the power to invalidate the copy fields of all nodes in the system simultaneously. In our algorithm, this dependency of the content of fields on global tim- ing is adopted for arc lists, forwarding pointers, and copy pointers. Thus any modification made, such as adding forwarding links, copy links or arcs during one top-level unification (unify0) to any node in memory can be invalidated by one increment operation on the global timing counter. During unification (in unifyl) and copying after a successful unification, the global timing ID for a specific field can be checked by compar- ing the content of mark fields with the global counter value and if they match then the content is respected; if not it is simply ignored. Thus the whole operation is a trivial addition to the original destructive unification algorithm (Pereira's and Wroblewski's unifyl). We have two kinds of arc lists 1) arc-list and comp- arc-list. Arc-list contains the arcs that are permanent (i.e., usual graph arcs) and compare-list contains arcs that are only valid during one graph unification oper- ation. We also have two kinds of forwarding links, i.e., permanent and temporary. A permanent forward- ing link is the usual forwarding link found in other algorithms ([Pereira, 1985], [Wroblewski, 1987], etc). Temporary forwarding links are links that are only valid during one unification. The currency of the temporary links is determined by matching the content of the mark field for the links with the global counter and if they match then the content of this field is respected 8. As in [Pereira, 1985], we have three types of nodes: 1) :atomic, 2) :bottom 9, and 3) :complex. :atomic type nodes represent atomic symbol values (such as Noun), :bottom type nodes are variables and :complex type nodes are nodes that have arcs coming out of them. Arcs are stored in the arc-list field. The atomic value is also stored in the arc-list if the node type is :atomic. :bottom nodes succeed in unifying with any nodes and the result of unification takes the type and the value of the node that the :bottom node was unified with. :atomic nodes succeed in unifying with :bottom nodes or :atomic nodes with the same value (stored in the arc-lis0. Unification of an :atomic node with a :com- plex node immediately fails. 
:complex nodes succeed in unifying with :bottom nodes or with :complex nodes whose subgraphs all unify. Arc values are always nodes and never symbolic values because the :atomic and :bottom nodes may be pointed to by multiple arcs (just as in structure sharing of :complex nodes) depending on grammar constraints, and we do not want arcs to contain terminal atomic values. Figure 2 is the cen- tral quasi-destructive graph unification algorithm and Figure 3 shows the algorithm for copying nodes and arcs (called by unify0) while respecting the contents of comp-arc-lists. The functions Complementarcs(dg 1,dg2) and Inter- sectarcs(dgl,dg2) are similar to Wroblewski's algo- rithm and return the set-difference (the arcs with la- bels that exist in dgl but not in rig2) and intersec- tion (the arcs with labels that exist both in dgl and dg2) respectively. During the set-difference and set- intersection operations, the content of comp-arc-lists are respected as parts of arc lists if the comp-arc- marks match the current value of the global timing counter. Dereference-dg(dg) recursively traverses the forwarding link to return the forwarded node. In do- ing so, it checks the forward-mark of the node and if the forward-mark value is 9 (9 represents a perma- nent forwarding link) or its value matches the current 8We do not have a separate field for temporary forwarding links; instead, we designate the integer value 9 to represent a permanent forwarding link. We start incrementing the global counter from 10 so whenever the forward-mark is not 9 the integer value must equal the global counter value to respect the forwarding link. 9Bottom is called leaf in Pereira's algorithm. 317 value of *unify-global-counter*, then the function re- turns the forwarded node; otherwise it simply returns the input node. Forward(dgl, dg2, :forward-type) puts (the pointer to) dg2 in the forward field of dgl. If the keyword in the function call is :temporary, the cur- rent value of the *unify-global-counter* is written in the forward-mark field of dgl. If the keyword is :per- manent, 9 is written in the forward-mark field of dgl. Our algorithm itself does not require any permanent forwarding; however, the functionality is added be- cause the grammar reader module that reads the path equation specifications into dg feature-structures uses permanent forwarding to merge the additional gram- matical specifications into a graph structure 1°. The temporary forwarding links are necessary to handle reentrancy and cycles. As soon as unification (at any level of recursion through shared arcs) succeeds, a tem- porary forwarding link is made from dg2 to dgl (dgl to dg2 if dgl is of type :bottom). Thus, during unifi- cation, a node already unified by other recursive calls to unifyl within the same unify0 call has a temporary forwarding link from dg2 to dgl (or dgl to dg2). As a result, if this node becomes an input argument node, dereferencing the node causes dgl and dg2 to become the same node and unification immediately succeeds. Thus a subgraph below an already unified node will not be checked more than once even if an argument graph has a cycle. Also, during copying done subsequently to a successful unification, two ares converging into the same node will not cause over copying simply because if a node already has a copy then the copy is returned. 
For example, as a case that may cause over copies in other schemes for dg2 convergent arcs, let us consider the case when the destination node has a corresponding node in dg1 and only one of the convergent arcs has a corresponding arc in dg1. This destination node is already temporarily forwarded to the node in dg1 (since the unification check was successful prior to copying). Once a copy is created for the corresponding dg1 node and recorded in the copy field of dg1, every time a convergent arc in dg2 that needs to be copied points to its destination node, dereferencing the node returns the corresponding node in dg1, and since a copy of it already exists, this copy is returned. Thus no duplicate copy is created.[11]

QUASI-DESTRUCTIVE GRAPH UNIFICATION

FUNCTION unify-dg(dg1, dg2);
  result <- catch with tag 'unify-fail calling unify0(dg1, dg2);
  increment *unify-global-counter*;   ;; starts from 10 [12]
  return(result);
END;

FUNCTION unify0(dg1, dg2);
  IF '*T* = unify1(dg1, dg2)
  THEN copy <- copy-dg-with-comp-arcs(dg1);
       return(copy);
END;

FUNCTION unify1(dg1-underef, dg2-underef);
  dg1 <- dereference-dg(dg1-underef);
  dg2 <- dereference-dg(dg2-underef);
  IF (dg1 = dg2) [13] THEN return('*T*);
  ELSE IF (dg1.type = :bottom) THEN
    forward-dg(dg1, dg2, :temporary); return('*T*);
  ELSE IF (dg2.type = :bottom) THEN
    forward-dg(dg2, dg1, :temporary); return('*T*);
  ELSE IF (dg1.type = :atomic AND dg2.type = :atomic) THEN
    IF (dg1.arc-list = dg2.arc-list) [14] THEN
      forward-dg(dg2, dg1, :temporary); return('*T*);
    ELSE throw [15] with keyword 'unify-fail;
  ELSE IF (dg1.type = :atomic OR dg2.type = :atomic) THEN
    throw with keyword 'unify-fail;
  ELSE
    new <- complementarcs(dg2, dg1);
    shared <- intersectarcs(dg1, dg2);
    FOR EACH arc IN shared DO
      unify1(destination of the shared arc for dg1,
             destination of the shared arc for dg2);
    forward-dg(dg2, dg1, :temporary);   [16]
    dg1.comp-arc-mark <- *unify-global-counter*;
    dg1.comp-arc-list <- new;
    return('*T*);
END;

Figure 2: The Q-D. Unification Functions

[10] We have been using Wroblewski's algorithm for the unification part of the parser and thus usage of (permanent) forwarding links is adopted by the grammar reader module to convert path equations to graphs. For example, permanent forwarding is done when a :bottom node is to be merged with other nodes.
[11] Copying of dg2 arcs happens for arcs that exist in dg2 but not in dg1 (i.e., Complementarcs(dg2,dg1)). Such arcs are pushed onto the comp-arc-list of dg1 during unify1 and are copied into the arc-list of the copy during subsequent copying. If there is a cycle or a convergence in arcs in dg1, or in arcs in dg2 that do not have corresponding arcs in dg1, then the mechanism is even simpler than the one discussed here. A copy is made once, and the same copy is simply returned every time another convergent arc points to the original node. This is because arcs are copied only from either dg1 or dg2.
[12] 9 indicates a permanent forwarding link.
[13] Equal in the 'eq' sense. Because of forwarding and cycles, it is possible that dg1 and dg2 are 'eq'.
[14] Arc-list contains the atomic value if the node is of type :atomic.
[15] Catch/throw construct; i.e., immediately return to unify-dg.
[16] This will be executed only when all recursive calls into unify1 succeeded. Otherwise, a failure would have caused an immediate return to unify-dg.
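For readers who prefer executable code to the pseudocode of Figure 2, the same control structure might be transcribed into Common Lisp roughly as below, building on the structure sketch given earlier. This is an illustrative transcription, not the authors' code: the versions of complementarcs and intersectarcs here are simplified set operations over labels, failure is signalled with catch/throw as in the figure, and copy-dg-with-comp-arcs is sketched after Figure 3 below.

  ;; Illustrative Common Lisp transcription of Figure 2 (not the authors' code).
  (defun all-arcs (dg)
    "Arc-list plus comp-arc-list, the latter only while its timestamp is current."
    (append (node-arc-list dg)
            (when (current-p (node-comp-arc-mark dg))
              (node-comp-arc-list dg))))

  (defun complementarcs (dg1 dg2)
    "Arcs whose labels occur in dg1 but not in dg2 (set difference)."
    (remove-if (lambda (a) (find (arc-label a) (all-arcs dg2) :key #'arc-label))
               (all-arcs dg1)))

  (defun intersectarcs (dg1 dg2)
    "Pairs of arcs whose labels occur both in dg1 and in dg2 (intersection)."
    (loop for a1 in (all-arcs dg1)
          for a2 = (find (arc-label a1) (all-arcs dg2) :key #'arc-label)
          when a2 collect (cons a1 a2)))

  (defun unify-dg (dg1 dg2)
    (let ((result (catch 'unify-fail (unify0 dg1 dg2))))
      (incf *unify-global-counter*)     ; one increment undoes all temporary changes
      result))

  (defun unify0 (dg1 dg2)
    (when (unify1 dg1 dg2)
      (copy-dg-with-comp-arcs dg1)))

  (defun unify1 (dg1-underef dg2-underef)
    (let ((dg1 (dereference-dg dg1-underef))
          (dg2 (dereference-dg dg2-underef)))
      (cond ((eq dg1 dg2) t)
            ((eq (node-type dg1) :bottom) (forward-dg dg1 dg2 :temporary) t)
            ((eq (node-type dg2) :bottom) (forward-dg dg2 dg1 :temporary) t)
            ((and (eq (node-type dg1) :atomic) (eq (node-type dg2) :atomic))
             (if (equal (node-arc-list dg1) (node-arc-list dg2))
                 (progn (forward-dg dg2 dg1 :temporary) t)
                 (throw 'unify-fail nil)))
            ((or (eq (node-type dg1) :atomic) (eq (node-type dg2) :atomic))
             (throw 'unify-fail nil))
            (t (let ((new    (complementarcs dg2 dg1))
                     (shared (intersectarcs dg1 dg2)))
                 (dolist (pair shared)
                   (unify1 (arc-value (car pair)) (arc-value (cdr pair))))
                 (forward-dg dg2 dg1 :temporary)
                 (setf (node-comp-arc-mark dg1) *unify-global-counter*
                       (node-comp-arc-list dg1) new)
                 t)))))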
QUASI-DESTRUCTIVE COPYING

FUNCTION copy-dg-with-comp-arcs(dg-underef);
  dg <- dereference-dg(dg-underef);
  IF (dg.copy is non-empty AND dg.copy-mark = *unify-global-counter*)
  THEN return(dg.copy);   [17]
  ELSE IF (dg.type = :atomic) THEN
    copy <- create-node();   [18]
    copy.type <- :atomic;
    copy.arc-list <- dg.arc-list;
    dg.copy <- copy;
    dg.copy-mark <- *unify-global-counter*;
    return(copy);
  ELSE IF (dg.type = :bottom) THEN
    copy <- create-node();
    copy.type <- :bottom;
    dg.copy <- copy;
    dg.copy-mark <- *unify-global-counter*;
    return(copy);
  ELSE
    copy <- create-node();
    copy.type <- :complex;
    FOR ALL arc IN dg.arc-list DO
      newarc <- copy-arc-and-comp-arcs(arc);
      push newarc into copy.arc-list;
    IF (dg.comp-arc-list is non-empty AND dg.comp-arc-mark = *unify-global-counter*) THEN
      FOR ALL comp-arc IN dg.comp-arc-list DO
        newarc <- copy-arc-and-comp-arcs(comp-arc);
        push newarc into copy.arc-list;
    dg.copy <- copy;
    dg.copy-mark <- *unify-global-counter*;
    return(copy);
END;

FUNCTION copy-arc-and-comp-arcs(input-arc);
  label <- input-arc.label;
  value <- copy-dg-with-comp-arcs(input-arc.value);
  return a new arc with label and value;
END;

Figure 3: Node and Arc Copying Functions

[17] I.e., the existing copy of the node.
[18] Creates an empty node structure.

Figure 4 shows a simple example of quasi-destructive graph unification with dg2 convergent arcs. The round nodes indicate atomic nodes and the rectangular nodes indicate bottom (variable) nodes. First, top-level unify1 finds that each of the input graphs has arc-a and arc-b (shared). Then unify1 is recursively called. At step two, the recursion into arc-a locally succeeds, and a temporary forwarding link with time-stamp(n) is made from node []2 to node s. At the third step (recursion into arc-b), by the previous forwarding, node []2 already has the value s (by dereferencing). Then this unification returns a success and a temporary forwarding link with time-stamp(n) is created from node []1 to node s. At the fourth step, since all recursive unifications (unify1s) into shared arcs succeeded, top-level unify1 creates a temporary forwarding link with time-stamp(n) from dag2's root node to dag1's root node, sets arc-c (new) into the comp-arc-list of dag1, and returns success ('*T*). At the fifth step, a copy of dag1 is created respecting the content of the comp-arc-list and dereferencing the valid forward links. This copy is returned as the result of unification. At the last step (step six), the global timing counter is incremented (n => n+1). After this operation, temporary forwarding links and comp-arc-lists with time-stamp (< n+1) will be ignored. Therefore, the original dag1 and dag2 are recovered in constant time without a costly reversing operation. (Also, note that the recursions into shared arcs can be done in any order, producing the same result.)

[Figure 4: A Simple Example of Quasi-Destructive Graph Unification -- the step-by-step graph diagrams are omitted here.]

As we just saw, the algorithm itself is simple. The basic control structure of the unification is similar to Pereira's and Wroblewski's unify1.
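The copying functions of Figure 3 can be transcribed in the same style. Again, this is only an illustrative sketch on top of the structures above, not the ATR code, and the trailing usage comment shows the intended calling pattern; dag1 and dag2 there are hypothetical argument graphs built elsewhere.

  ;; Illustrative transcription of Figure 3 (not the authors' code).
  (defun copy-arc-and-comp-arcs (input-arc)
    (make-arc :label (arc-label input-arc)
              :value (copy-dg-with-comp-arcs (arc-value input-arc))))

  (defun copy-dg-with-comp-arcs (dg-underef)
    (let ((dg (dereference-dg dg-underef)))
      (if (and (node-copy dg)
               (eql (node-copy-mark dg) *unify-global-counter*))
          (node-copy dg)                          ; the existing copy of the node
          (let ((copy (make-node :type (node-type dg))))
            (case (node-type dg)
              (:atomic (setf (node-arc-list copy) (node-arc-list dg)))
              (:bottom)                           ; nothing but the type to copy
              (t                                  ; :complex
               (dolist (arc (node-arc-list dg))
                 (push (copy-arc-and-comp-arcs arc) (node-arc-list copy)))
               (when (and (node-comp-arc-list dg)
                          (eql (node-comp-arc-mark dg) *unify-global-counter*))
                 (dolist (comp-arc (node-comp-arc-list dg))
                   (push (copy-arc-and-comp-arcs comp-arc) (node-arc-list copy))))))
            (setf (node-copy dg) copy
                  (node-copy-mark dg) *unify-global-counter*)
            copy))))

  ;; Hypothetical usage: dag1 and dag2 are argument graphs built elsewhere.
  ;; (let ((result (unify-dg dag1 dag2)))
  ;;   ;; unify-dg has already incremented the counter, so dag1 and dag2 are
  ;;   ;; back to their original contents; RESULT is the fresh copy, or NIL
  ;;   ;; if the unification failed (in which case no copying was done).
  ;;   ...)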
The essential dif- ference between our unifyl and the previous ones is that our unifyl is non-destructive. It is because the complementarcs(dg2,dgl) are set to the comp-arc-list of dgl and not into the are-list of dgl. Thus, as soon as we increment the global counter, the changes made to dgl (i.e., addition of complement arcs into comp- are-list) vanish. As long as the comp-arc-mark value matches that of the global counter the content of the comp-arc-list can be considered a part of arc-list and therefore, dgl is the result of unification. Hence the name quasi-destructive graph unification. In order to create a copy for subsequent use we only need to make a copy of dgl before we increment the global counter while respecting the content of the comp-arc-list of dgl. Thus instead of calling other unification functions (such as unify2 of Wroblewski) for incrementally ere- ating a copy node during a unification, we only need to create a copy after unification. Thus, if unifica- tion fails no copies are made at all (as in [Karttunen, 1986]'s scheme). Because unification that recurses into shared ares carries no burden of incremental copy- ing (i.e., it simply checks if nodes are compatible), as the depth of unification increases (i.e., the graph gets larger) the speed-up of our method should get conspic- uous if a unification eventually fails. If all unifica- tions during a parse are going to be successful, our algorithm should be as fast as or slightly slower than Wroblewski's algorithm 19. Since a parse that does not fail on a single unification is unrealistic, the gain from our scheme should depend on the amount of unification failures that occur during a unification. As the number of failures per parse increases and the graphs that failed get larger, the speed-up from our algorithm should be- come more apparent. Therefore, the characteristics of our algorithm seem desirable. In the next section, we will see the actual results of experiments which com- pare our unification algorithm to Wroblewski's algo- rithm (slightly modified to handle variables and cycles that are required by our HPSG based grammar). 3. Experiments Table 1 shows the results of our experiments using an HPSG-based Japanese grammar developed at ATR for a conference registration telephone dialogue domain. 19h may be slightly slower becauseour unification recurses twice on a graph: once to unify and once to copy, whereas in incremental unification schemes copying is performed dur- ing the same recursion as unifying. Additional bookkeeping for incremental copying and an additional set-difference op- eration (i.e, complementarcs(dgl,dg2)) during unify2 may offset this, however. 'Unifs' represents the total number of unifications dur- ing a parse (the number of calls to the top-level 'unify- dg', and not 'unifyl'). 'USrate' represents the ratio of successful unifications to the total number of uni- fications. We parsed each sentence three times on a Symbolics 3620 using both unification methods and took the shortest elapsed time for both methods ('T' represents our scheme, 'W' represents Wroblewski's algorithm with a modification to handle cycles and variables2°). Data structures are the same for both uni- fication algorithms (except for additional fields for a node in our algorithm, i.e., comp-arc-list, comp-arc- mark, and forward-mark). 
Same functions are used to interface with Earley's parser and the same subfunctions are used wherever possible (such as creation and access of arcs) to minimize the differences that are not purely algorithmic. 'Number of copies' represents the number of nodes created during each parse (and does not include the number of arc structures that are created during a parse). 'Number of conses' represents the amount of structure words consed during a parse. This number represents the real comparison of the amount of space being consumed by each unification algorithm (including added fields for nodes in our algorithm and arcs that are created in both algorithms).

We used Earley's parsing algorithm for the experiment. The Japanese grammar is based on HPSG analysis ([Pollard and Sag, 1987]) covering phenomena such as coordination, case adjunction, adjuncts, control, slash categories, zero-pronouns, interrogatives, WH constructs, and some pragmatics (speaker, hearer relations, politeness, etc.) ([Yoshimoto and Kogure, 1989]). The grammar covers many of the important linguistic phenomena in conversational Japanese. The grammar graphs which are converted from the path equations contain 2324 nodes. We used 16 sentences from a sample telephone conversation dialog which range from very short sentences (one word, i.e., iie 'no') to relatively long ones (such as soredehakochirakarasochiranitourokuyoushiwoookuriitashimasu "In that case, we [speaker] will send you [hearer] the registration form."). Thus, the number of (top-level) unifications per sentence varied widely (from 6 to over 500).

[20] Cycles can be handled in Wroblewski's algorithm by checking whether an arc with the same label already exists when arcs are added to a node. And if such an arc already exists, we destructively unify the node which is the destination of the existing arc with the node which is the destination of the arc being added. If such an arc does not exist, we simply add the arc ([Kogure, 1989]). Thus, cycles can be handled very cheaply in Wroblewski's algorithm. Handling variables in Wroblewski's algorithm is basically the same as in our algorithm (i.e., Pereira's scheme), and the addition of this functionality can be ignored in terms of comparison to our algorithm. Our algorithm does not require any additional scheme to handle cycles in input dgs.

  sent#  Unifs  USrate   Elapsed time (sec)    Num of Copies      Num of Conses
                            T        W            T       W          T        W
    1       6   0.5       1.066    1.113         85     107       1231     1451
    2     101   0.35      1.897    2.899       1418    2285      15166    23836
    3      24   0.33      1.206    1.290        129     220       1734     2644
    4      71   0.41      3.349    4.102       1635    2151      17133    22943
    5     305   0.39     12.151   17.309       5529    9092      57405    93035
    6      59   0.38      1.254    1.601        608     997       6873    10763
    7       6   0.38      1.016    1.030         85     107       1175     1395
    8      81   0.39      3.499    4.452       1780    2406      18718    24978
    9     480   0.38     18.402   34.653       9466   15756      96985   167211
   10     555   0.39     26.933   47.224      11789   18822     119629   189997
   11     109   0.40      4.592    5.433       2047    2913      21871    30531
   12     428   0.38     13.728   24.350       7933   13363      81536   135808
   13     559   0.38     15.480   42.357       9976   17741     102489   180169
   14      52   0.38      1.977    2.410        745     941       8272    10292
   15      77   0.39      3.574    4.688       1590    2137      16946    22416
   16      77   0.39      3.658    4.431       1590    2137      16943    22413

Table 1: Comparison of our algorithm with Wroblewski's

4. Discussion: Comparison to Other Approaches

The control structure of our algorithm is identical to that of [Pereira, 1985]. However, instead of storing changes to the argument dags in the environment, we store the changes in the dags themselves non-destructively.
Because we do not use the environment, the log(d) overhead (where d is the number of nodes in a dag) associated with Pereira's scheme that is re- quired during node access (to assemble the whole dag from the skeleton and the updates in the environment) is avoided in our scheme. We share the principle of storing changes in a restorable way with [Karttunen, 1986]'s reversible unification and copy graphs only after a successful unification. Karttunen originally introduced this scheme in order to replace the less efficient structure-sharing implementations ([Pereira, 1985], [Karttunen and Kay, 1985]). In Karttunen's method 21, whenever a destructive change is about to be made, the attribute value pairs 22 stored in the body of the node are saved into an array. The dag node struc- ture itself is also saved in another array. These values are restored after the top level unification is completed. (A copy is made prior to the restoration operation if the unification was a successful one.) The difference between Karttunen's method and ours is that in our al- gorithm, one increment to the global counter can invali- date all the changes made to nodes, while in Karttunen's algorithm each node in the entire argument graph that has been destructively modified must be restored sep- arately by retrieving the attribute-values saved in an 21The discussion ofKartunnen's method is based on the D- PATR implementation on Xerox 1100 machines ([Karttunen, 1986]). ~'Le., arc structures: 'label' and 'value' pairs in our vocabulary. array and resetting the values into the dag structure skeletons saved in another array. In both Karttunen's and our algorithm, there will be a non-destructive (re- versible, and quasi-destructive) saving of intersection arcs that may be wasted when a subgraph of a partic- ular node successfully unifies but the final unification fails due to a failure in some other part of the argument graphs. This is not a problem in our method because the temporary change made to a node is performed as push- ing pointers into already existing structures (nodes) and it does not require entirely new structures to be created and dynamically allocated memory (which was neces- sary for the copy (create-node) operation), z3 [Godden, 1990] presents a method of using lazy evaluation in unification which seems to be one SUCC~sful actual- ization of [Karttunen and Kay, 1985]'s lazy evaluation idea. One question about lazy evaluation is that the ef- ficiency of lazy evaluation varies depending upon the particular hardware and programming language envi- ronment. For example, in CommonLisp, to attain a lazy evalaa_tion, as soon as a function is delayed, a clo- sure (or a structure) needs to be created receiving a dy- namic allocation of memory Oust as in creating a copy node). Thus, there is a shift of memory and associated computation consumed from making copies to making closures. In terms of memory cells saved, although the lazy scheme may reduce the total number of copies created, if we consider the memory consumed to create closures, the saving may be significantly canceled. In terms of speed, since delayed evaluation requires addi- tional bookkeeping, how schemes such as the one in- troduced by [Godden, 1990] would compare with non- lazy incremental copying schemes is an open question. Unfortunately Godden offers a comparison of his algo- Z3Although, in Karttunen's method it may become rather expensive ff the arrays require resizing during the saving operation of the subgraphs. 
321 rithm with one that uses a full copying method (i.e. his Eager Copying) which is already significantly slower than Wroblewski's algorithm. However, no compari- son is offered with prevailing unification schemes such as Wroblewski's. With the complexity for lazy evalu- ation and the memory consumed for delayed closures added, it is hard to estimate whether lazy unification runs considerably faster than Wroblewski's incremen- tal copying scheme, ~ 5. Conclusion The algorithm introduced in this paper runs signifi- cantly faster than Wroblewski's algorithm using Ear- ley's parser and an HPSG based grammar developed at ATR. The gain comes from the fact that our algo- rithm does not create any over copies or early copies. In Wroblewski's algorithm, although over copies are essentially avoided, early copies (by our definition) are a significant problem because about 60 percent of unifications result in failure in a successful parse in our sample parses. The additional set-difference oper- ation required for incremental copying during unify2 may also be contributing to the slower speed of Wrob- lewski's algorithm. Given that our sample grammar is relatively small, we would expect that the difference in the performance between the incremental copying schemes and ours will expand as the grammar size increases and both the number of failures ~ and the size of the wasted subgraphs of failed unifications be- come larger. Since our algorithm is essentially paral- lel, patallelization is one logical choice to pursue fur- ther speedup. Parallel processes can be continuously created as unifyl reeurses deeper and deeper without creating any copies by simply looking for a possible failure of the unification (and preparing for successive copying in ease unification succeeds). So far, we have completed a preliminary implementation on a shared memory parallel hardware with about 75 percent of effective parallelization rate. With the simplicity of our algorithm and the ease of implementing it (com- pared to both incremental copying schemes and lazy schemes), combined with the demonstrated speed of the algorithm, the algorithm could be a viable alterna- tive to existing unification algorithms used in current ~That is, unless some new scheme for reducing exces- sive copying is introduced such as scucture-sharing of an unchanged shared-forest ([Kogure, 1990]). Even then, our criticism of the cost of delaying evaluation would still be valid. Also, although different in methodology from the way suggested by Kogure for Wroblewski's algorithm, it is possi- ble to at~in structure-sharing of an unchanged forest in our scheme as well. We have already developed a preliminary version of such a scheme which is not discussed in this paper. Z~For example, in our large-scale speech-to-speech trans- lation system under development, the USrate is estimated to be under 20%, i.e., over 80% of unifications are estimated to be failures. natural language systems. ACKNOWLEDGMENTS The author would like to thank Akira Kurematsu, Tsuyoshi Morimoto, Hitoshi Iida, Osamu Furuse, Masaaki Nagata, Toshiyuki Takezawa and other mem- bers of ATR and Masaru Tomita and Jaime Carbonell at CMU. Thanks are also due to Margalit Zabludowski and Hiroaki Kitano for comments on the final version of this paper and Takako Fujioka for assistance in im- plementing the parallel version of the algorithm. 
Appendix: Implementation The unification algorithms, Farley parser and the HPSG path equation to graph converter programs are implemented in CommonLisp on a Symbolics ma- chine. The preliminary parallel version of our uni- fication algorithm is currently implemented on a Se- quent/Symmetry closely-coupled shared-memory par- allel machine running Allegro CLiP parallel Common- Lisp. References [Godden, 1990] Godden, K. "Lazy Unification" In Proceed- ings of ACL-90, 1990. [Karttunen, 1986] Karttunen, L. "D-PATR: A Development Environment for Unificadon-Based Grammars". In Pro- ceedingsofCOLING-86, 1986. (Also, Report CSLL86-61 Stanford University). [Karttunen and Kay, 1985] Karttunen, L. and M. Kay "Structure Sharing with Binary Trees". In Proceedings of ACL-85, 1985. [Kasper, 1987] Kasper, R. "A Unification Method for Dis- junctive Feature Descriptions'. In ProceedingsofACL-87, 1987. [Kogure, 1989] Kogure, K. A Study on Feature Structures and Unification. ATR Technical Report. TR-1-0032,1988. [Kogure, 1990] Kogure, K. "Strategic Lazy Incremental Copy Graph Unification". In Proceedings of COLING-90, 1990. [Pereira, 1985] Pereira, E "A Structure-Sharing Represen- tation for Unificadon-Based Grammar Formalisms". In Proceedings of ACL-85, 1985. [Pollard and Sag, 1987] Pollard, C. and I. Sag Information- based Syntax and Semantics. Vol 1, CSLI, 1987. [Yoshimoto and Kogure, 1989] Yoshimoto, K. and K. Kogure Japanese Sentence Analysis by means of Phrase Structure Grammar. ATR Technical Report. TR-1-0049, 1989. [Wroblewski, 1987] Wroblewski, D."Nondestrucdve Graph Unification" In Proceedings of AAAI87, 1987. 322
1991
41
UNIFICATION WITH LAZY NON-REDUNDANT COPYING Martin C. Emele* Project Polygloss University of Stuttgart IMS-CL/IfLAIS, KeplerstraBe 17 D 7000 Stuttgart 1, FRG emele~informatik.uni-stut tgaxt .de Abstract This paper presents a unification pro- cedure which eliminates the redundant copying of structures by using a lazy in- cremental copying appr0a~:h to achieve structure sharing. Copying of structures accounts for a considerable amount of the total processing time. Several meth- ods have been proposed to minimize the amount of necessary copying. Lazy In- cremental Copying (LIC) is presented as a new solution to the copying problem. It synthesizes ideas of lazy copying with the notion of chronological dereferencing for achieving a high amount of structure sharing. Introduction Many modern linguistic theories are using fea- ture structures (FS) to describe linguistic objects representing phonological, syntactic, and semantic properties. These feature structures are specified in terms of constraints which they must satisfy. It seems to be useful to maintain the distinction between the constraint language in which feature structure constraints are expressed, and the struc- tures that satisfy these constraints. Unification is the primary operation for determining the satisfia- bility of conjunctions of equality constraints. The efficiency of this operation is thus crucial for the overall efficiency of any system that uses feature structures. Typed Feature Structure Unification In unification-based grammar formalisms, unifica- tion is the meet operation on the meet semi-lattice formed by partially ordering the set of feature structures by a subsumption relation [Shieber 86]. Following ideas presented by [Ait-Kaci 84] and introduced, for example, in the unification-based formMism underlying HPSG [Pollard and Sag 87], first-order unification is extended to the sorted case using an order-sorted signature instead of a flat one. In most existing implementations, descriptions of feature structure constraints are not directly used as models that satisfy these constraints; in- stead, they are represented by directed graphs (DG) serving as satisfying models. In particular, in the case where we are dealing only with con- junctions of equality constraints, efficient graph unification algorithms exist. The graph unifica- tion algorithm presented by Ait-Kaci is a node • merging process using the UNION/FIND method (originMly used for testing the equivalence of fi- nite automata [Hopcroft/Karp 71]). It has its analogue in the unification algorithm for rational terms based on a fast procedure for congruence closure [Huet 76]. Node merging is a destructive operation Since actual merging of nodes to build new node equivalence classes modifies the argument DGs, they must be copied before unification is in- voked if the argument DGs need to be preserved. For example, during parsing there are two kinds of representations that must be preserved: first, lexi- cal entries and rules must be preserved. They need to be copied first before a destructive unification operation can be applied to combine categories to form new ones; and second, nondeterminism in parsing requires the preservation of intermediate representations that might be used later when the parser comes back to a choice point to try some yet unexplored options. *Research reported in this paper is partly supported by the German Ministry of Research and Technology (BMFT, Bun- desmlnister filr Forschung und Technologie), under grant No. 08 B3116 3. 
The views and conclusions contained herein are those of the authors and should not be interpreted as representing official policies. 323 DG copying as a source of inefficiency Previous research on unification, in partic- ular on graph unification [Karttunen/Kay 85, Pereira 85], and others, identified DG copying as the main source of inefficiency. The high cost in terms of time spent for copying and in terms of space required for the copies themselves accounts for a significant amount of the total processing time. Actually, more time is spent for copying than for unification itself. Hence, it is crucial to reduce the amount of copying, both in terms of the number and of the size of copies, in order to improve the efficiency of unification. A naive implementation of unification would copy the arguments even before unification starts. That is what [Wroblewski 87] calls early copying. Early copying is wasted effort in cases of fail- ure. He also introduced the notion of over copy- ing, which results from copying both arguments in their entirety. Since unification produces its result by merging the two arguments, the result usually contains significantly fewer nodes than the sum of the nodes of the argument DGs. Incremental Copying Wroblewski's nondestructive graph unification with incremental copying eliminates early copy- ing and avoids over copying. His method pro- duces the resulting DG by incrementally copying the argument DGs. An additional copy field in the DG structure is used to associate temporary forwarding relationships to copied nodes. Only those copies are destructively modified. Finally, the copy of the newly constructed root will be re- turned in case of success, and all the copy pointers will be invalidated in constant time by increment- ing a global generation counter without traversing the arguments again, leaving the arguments un- changed. Redundant Copying A problem arises with Wroblewski's account, because the resulting DG consists only of newly created structures even if parts of the input DGs that are not changed could be shared with the re- sultant DG. A better method would avoid (elim- inate) such redundant copying as it is called by [Kogure 90]. Structure Sharing The concept of structure sharing has been intro- duced to minimize the amount of copying by allow- ing DGs to share common parts of their structure. The Boyer and Moore approach uses a skeleton/environment representation for structure sharing. The basic idea of structure sharing pre- sented by [Pereira 85], namely that an initial ob- ject together with a list of updates contains the same information as the object that results from applying the updates destructively to the initial object, uses a variant of Boyer and Moore's ap- proach for structure sharing of term structures [Boyer/Moore 72]. The method uses a skeleton for representing the initial DG that will never change and an environment for representing updates to the skeleton. There are two kinds of updates: reroutings that forward one DG node to another; arc bindings that add to a node a new arc. Lazy Copying as another method to achieve structure sharing is based on the idea of lazy evaluation. Copying is delayed until a destruc- tive change is about to happen. Lazy copy- ing to achieve structure sharing has been sug- gested by [Karttunen/Kay 85], and lately again by [Godden 90] and [Uogure 90]. Neither of these methods fully avoids redun- dant copying in cases when we have to copy a node that is not the root. 
In general, all nodes along the path leading from the root to the site of an update need to be copied as well, even if they are not affected by this particular unifica- tion step, and hence could be shared with the re- sultant DG. Such cases seem to be ubiquitous in unification-based parsing since the equality con- straints of phrase structure rules lead to the unifi- cation of substructures associated with the imme- diate daughter and mother categories. With re- spect to the overall structure that serves as the re- sult of a parse, these unifications of substructures are even further embedded, yielding a considerable amount of copying that should be avoided. All of these methods require the copying of arcs to a certain extent, either in the form of new arc bindings or by copying arcs for the resultant DG. Lazy Incremental Copying We now present Lazy Incremental Copying (LIC) as a new approach to the copying problem. The method is based on Wroblewski's idea of incremen- tally producing the resultant DG while unification proceeds, making changes only to copies and leav- ing argument DGs untouched. Copies are associ- ated with nodes of the argument DGs by means 324 s b ¢ d • | Figure 1: Chronological dereferencing. of an additional copy field for the data structures representing nodes. But instead of creating copies for all of the nodes of the resultant DG, copying is done lazily. Copying is required only in cases where an update to an initial node leads to a de- structive change. The Lazy Incremental Copying method con- stitutes a synthesis of Pereira's structure sharing approach and Wroblewski's incremental copying procedure combining the advantages and avoid- ing disadvantages of both methods. The struc- ture sharing scheme is imported into Wroblewski's method eliminating redundant copying. Instead of using a global branch environment as in Pereira's approach, each node records it's own updates by means of the copy field and a generation counter. The advantages are a uniform unification proce- dure which makes complex merging of environ- ments obsolete and which can be furthermore eas- ily extended to handle disjunctive constraints. Data Structures CopyNode structure type: arcs: copy: generation: <symbol> <a list of ARCs> <a pointer to a CopyNode> <an integer> ARC structure label: <symbol> dest: <a CopyNode> Dereferencing The main difference between standard unification algorithms and LIC is the treatment of dereference pointers for representing node equivalence classes. The usual dereferencing operation follows a possi- ble pointer chain until the class representative is found, whereas in LIC dereferencing is performed according to the current environment. Each copy- node carries a generation counter that indicates to which generation it belongs. This means that every node is connected with its derivational con- text. A branch environment is represented as a se- quence of valid generation counters (which could be extended to trees of generations for represent- ing local disjunctions). The current generation is defined by the last element in this sequence. A copynode is said to be an active node if it was created within the current generation. Nondeterminism during parsing or during the processs of checking the satisfiability of constraints is handled through chronological backtracking, i.e. in case of failure the latest remaining choice is re- examined first. Whenever we encounter a choice point, the environment will be extended. 
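The CopyNode and ARC records and the environment-relative dereferencing that LIC relies on might look roughly as follows in Common Lisp. This is an illustrative sketch of one possible reading of the description, not the author's TFS implementation, and it is independent of the sketches given for the previous paper; the globals *environment* and *generation-counter*, the helper names, and the renaming of the arc structure to lic-arc (only to avoid clashing with the earlier sketch) are all assumptions made for the example.

  ;; Illustrative LIC sketch (not the author's TFS code).
  (defvar *environment* (list 0))   ; the sequence of currently valid generations
  (defvar *generation-counter* 0)   ; source of fresh generation numbers

  (defstruct copynode
    type            ; <symbol>
    arcs            ; <a list of ARCs>
    copy            ; <a pointer to a CopyNode>
    generation)     ; <an integer>: the generation this node was created in

  (defstruct lic-arc
    label           ; <symbol>
    dest)           ; <a CopyNode>

  (defun current-generation ()
    "The current generation is the last element of the environment."
    (car (last *environment*)))

  (defun push-choice-point ()
    "Entering a choice point extends the environment with a fresh generation."
    (setf *environment*
          (append *environment* (list (incf *generation-counter*)))))

  (defun pop-choice-point ()
    "Backtracking reinstates the previous environment."
    (setf *environment* (butlast *environment*)))

  (defun active-p (node)
    "A copynode is active if it was created within the current generation."
    (eql (copynode-generation node) (current-generation)))

  (defun lic-deref (node)
    "Follow copy pointers, but ignore pointers to nodes whose generation does
     not belong to the current branch environment."
    (let ((next (copynode-copy node)))
      (if (and next (member (copynode-generation next) *environment*))
          (lic-deref next)
          node)))

With dereferencing defined this way, undoing the effects of a unification after backtracking requires no trail: reverting to the environment associated with an earlier choice point automatically makes the copy pointers added under later generations invisible.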
The length of the environment corresponds to the num- ber of stacked choice points. For every choice point with n possible continuations, n- 1 new gener- ation counters are created. The last alternative pops the last element from the environment, con- tinues with the old environment and produces n DG representations, one for each alternative. By going back to previous generations, already exist- ing nodes become active nodes, and are thus mod- ified destructively. This technique resembles the last call optimization technique of some Prolog ira- • plementations, e.g. for the WAM [Warren83]. The history of making choices is reflected by the deref- erencing chain for each node which participated in different unifications. Figure 1 is an example which illustrates how dereferencing works with respect to the environ- ment: node b is the class representative for envi- ronment <0>, node c is the result of dereferenc- ing for environments <0 1> and <0 1 2>, and fi- nally node f corresponds to the representative for the environment <0 I 2 3> and all further exten- sions that did not add a new forwarding pointer to newly created copynodes. 325 Q~I, .Q Cam) 1: ck~ructive merge Cmo 2: tJmvm~JJng to tho ~tJvo nodo ® -q)-- .... --q) Cme S: i~wemen~ ~ i by cn~lng a nm t.:tlve mtde Figure 2: Node merging. Advantages ale: of this new dereferencing scheme • It is very easy to undo the effects of uni- fication upon backtracking. Instead of us- ing trail information which records the nodes that must be restored in case of returning to a previous choice point, the state of com- putation at that choice point is recovered in constant time by activating the environment which is associated with that choice point. Dereferencing with respect to the environ- ment will assure that the valid class repre- sentative will always be found. Pointers to nodes that do not belong to the current en- vironment are ignored. • It is no longer necessary to distinguish be- tween the forward and copy slot for repre- senting permanent and temporary relation- ships as it was needed in Wroblewski's algo- rithm. One copy field for storing the copy pointer is sufficient, thus reducing the size of node structures. Whether a unification leads to a destructive change by performing a rerouting that can not be undone, or to a nondestructive update by rerouting to a copynode that belongs to a higher genera- tion, is reexpressed by means of the environ- ment. Lazy Non-redundant Copying Unification itself proceeds roughly like a standard destructive graph unification algorithm that has been adapted to the order-sorted case. The dis- tinction between active and non-active nodes al- lows us to perform copying lazily and to eliminate redundant copying completely. Recall that a node is an active one if it belongs to the current generation. We distinguish between three cases when we merge two nodes by unifying them: (i) both are active nodes, (ii) either one of them is active, or (iii) they are both non-active. In the first case, we yield a destructive merge ac- cording to the current generation. No copying has to be done. If either one of the two nodes is ac- tive, the non-active node is forwarded to the ac- tive one. Again, no copying is required. When we reset computation to a previous state where the non-active node is reactivated, this pointer is ig- nored. In the third case, if there is no active node yet, we know that a destructive change to an en- vironment that must be preserved could occur by building the new equivalence class. 
Instead, a new copynode will be created under the current active generation and both nodes will be forwarded to the new copy. (For illustration cf. Figure 2.) Notice that it is not necessary to copy arcs for the method presented here. Instead of collecting all arcs while dereferencing nodes, they are just carried over to new copynodes without any modification. That is done as an optimization to speed up the compu- tation of arcs that occur in both argument nodes to be unified (Sharedhrcs) and the arcs that are unique with respect to each other (Un±queArcs). 326 "1 v \ \ - . . . . 0; . rein (lender Figure 3: A unification example. The unification algorithm is shown in Fig- ure 4 and Figure 3 illustrates its application to a concrete example of two successive unifications. Copying the nodes that have been created by the first unification do not need to be copied again for the second unification that is applied at the node appearing as the value of the path pred.verb, saving five Copies in comparison to the other lazy copying methods. Another advantage of the new approach is based on the ability to switch easily between de- structive and non-destructive unification. During parsing or during the process of checking the satis- fiability of constraints via backtracking, there are in general several choice points. For every choice point with n possible continuations, n - 1 lazy incremental copies of the DG are made using non- destructive unification. The last alternative con- tinues destructively, resembling the last cMl op- timization technique of Prolog implemelitations, yielding n DG representations, one for each al- ternative. Since each node reflects its own up- date history for each continuation path, all un- changed parts of the DG are shared. To sum up, derived DG instances are shared with input DG representations and updates to certain nodes by means of copy nodes are shared by different branches of the search space. Each new update corresponds to a new choice point in chronological order. The proposed environment representation facilitates memory management for allocating and deallocating copy node structures which is very important for the algorithm to be efficient. This holds, in particular, if it takes much more time to create new structures than to update old reclaimed structures. Comparison with other Approaches Karttunen's Reversible Unification [Karttunen 86] does not use structure sharing at M1. A new DG is copied from the modified arguments after success- ful unification, and the argument DGs are then restored to their original state by undoing all the changes made during unification hence requiring a second pass through the DG to assemble the result and adding a constant time for the save operation before each modification. As it has been noticed by [Godden 90] and [Kogure 90], the key idea of avoiding "redundant copying" is to do copying lazily. Copying of nodes will be delayed until a destructive change is about to take place. Godden uses active data structures (Lisp clo- sures) to implement lazy evaluation of copying, and Kogure uses a revised copynode procedure which maintains copy dependency information in order to avoid immediate copying. 327 ~. 
procedure unify(nodel,node2 : CopyNode) nodel *-- deref(nodel) node2 ~- deter(node2) IF node1 = node2 THEN return(nodel) ELSE newtype ~- nodel.type A node2.type IF newtype = I THEN return(l) ELSE <SharedArcsl, SharedArcs2> ~- SharedArcs(nodel,node2) <UniqueArcsl, UniqueArcs2> ~- UniqueArcs(nodel,node2) IF ActiveP(nodel) THEN node ~- nodel node.arcs ~- node.arcs U UniqueArcs2 node2.copy ~- node ELSE IF ActiveP(node2) THEN node ~- node2 node.arcs ~- node.arcs LJ UniqueArcsl nodel,copy *- node ELSE node ~- CreateCopyNode nodel.copy *- node node2.copy ~- node node.arcs ~- UniqueArcsl U SharedArcsl U UniqueArcs2 ENDIF ENDIF node.type ~- newtype FOR EACH <SharedArcl, SharedArc2> IN <SharedArcsl, SharedArcs2> DO unify(SharedArcl.dest,SharedArc2.dest) return(node) ENDIF ENDIF END unify Figure 4: The unification procedure 328 approach early copying over copying methods yes lazy copying redundant incr. copying copying yes no no no yes no yes no yes yes yes no yes no no yes structure sharing yes naive yes yes no no Pereira 85 no no no yes Karttunen/Kay 85 no no yes yes Karttunen 86 no no no no Wroblewski 87 no yes no no Godden 90 no no yes yes Kogure 90 no yes yes yes LIC no Figure 5: Comparison of unification approaches yes Both of these approaches suffer from difficul- ties of their own. In Godden's case, part of the copying is substituted/traded for by the creation of active data structures (Lisp closures), a poten- tially very costly operation, even where it would turn out that those closures remain unchanged in the final result; hence their creation is unneces- sary. In addition, the search for already existing instances of active data structures in the copy en- vironment and merging of environments for suc- cessive unifications causes an additional overhead. Similarly, in Kogure's approach, not all redun- dant copying is avoided in cases where there exists a feature path (a sequence of nodes connected by arcs) to a node that needs to be copied. All the nodes along such a path must be copied, even if they are not affected by the unification procedure. Furthermore, special copy dependency informa- tion has to be maintained while copying nodes in order to trigger copying of such arc sequences lead- ing to a node where copying is needed later in the process of unification. In addition to the overhead of storing copy dependency information, a second traversal of the set of dependent nodes is required for actually performing the copying. This copying itself might eventually trigger further copying of new dependent nodes. The table of Figure 5 summarizes the different unification approaches that have been discussed and compares them according to the concepts and methods they use. Conclusion The lazy-incremental copying (LIC) method used for the unification algorithm combines incremen- tal copying with lazy copying to achieve structure sharing. It eliminates redundant copying in all cases even where other methods still copy over. The price to be paid is counted in terms of the time spent for dereferencing but is licensed for by the gain of speed we get through the reduction both in terms of the number of copies to be made and in terms of the space required for the copies themselves. The algorithm has been implemented in Com- mon Lisp and runs on various workstation ar- chitectures. It is used as the essential oper- ation in the implementation of the interpreter for the Typed Features Structure System (TFS [gmele/Zajac 90a, Emele/Zajac 90b]). 
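As a supplement to the pseudocode of Figure 4, the LIC unification procedure might be transcribed into Common Lisp roughly as follows, on top of the copynode sketch given earlier. This is an illustrative sketch, not the author's TFS implementation: the type meet is reduced to a stub, SharedArcs and UniqueArcs are computed by a single invented helper, and explicit failure propagation has been added where the figure leaves it implicit.

  ;; Illustrative transcription of Figure 4 (not the author's code).
  (defun meet-type (t1 t2)
    "Stub for the meet in the sort lattice; a real system consults the signature."
    (cond ((eq t1 t2) t1)
          ((eq t1 'top) t2)
          ((eq t2 'top) t1)
          (t :fail)))                     ; :fail stands in for the inconsistent sort

  (defun shared-and-unique-arcs (node1 node2)
    "Pair up arcs with the same label; also return the arcs unique to each node."
    (let (shared unique1 (rest2 (copy-list (copynode-arcs node2))))
      (dolist (a1 (copynode-arcs node1))
        (let ((a2 (find (lic-arc-label a1) rest2 :key #'lic-arc-label)))
          (cond (a2 (push (cons a1 a2) shared)
                    (setf rest2 (remove a2 rest2)))
                (t  (push a1 unique1)))))
      (values shared unique1 rest2)))     ; rest2 now holds node2's unique arcs

  (defun lic-unify (node1 node2)
    (let ((n1 (lic-deref node1))
          (n2 (lic-deref node2)))
      (if (eq n1 n2)
          n1
          (let ((newtype (meet-type (copynode-type n1) (copynode-type n2))))
            (if (eq newtype :fail)
                nil                       ; sort clash: unification fails
                (multiple-value-bind (shared unique1 unique2)
                    (shared-and-unique-arcs n1 n2)
                  (let ((node
                          (cond ((active-p n1)   ; an active node: merge destructively
                                 (setf (copynode-arcs n1)
                                       (append (copynode-arcs n1) unique2)
                                       (copynode-copy n2) n1)
                                 n1)
                                ((active-p n2)
                                 (setf (copynode-arcs n2)
                                       (append (copynode-arcs n2) unique1)
                                       (copynode-copy n1) n2)
                                 n2)
                                (t               ; both non-active: create a copy lazily
                                 (let ((new (make-copynode
                                             :generation (current-generation))))
                                   (setf (copynode-copy n1) new
                                         (copynode-copy n2) new
                                         (copynode-arcs new)
                                         (append unique1 (mapcar #'car shared) unique2))
                                   new)))))
                    (setf (copynode-type node) newtype)
                    (dolist (pair shared node)
                      (unless (lic-unify (lic-arc-dest (car pair))
                                         (lic-arc-dest (cdr pair)))
                        (return nil))))))))))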
The for- malism of TFS is based on the notion of inher- itance and sets of constraints that categories of the sort signature must satisfy. The formalism supports to express directly principles and gen- eralizations of linguistic theories as they are for- mulated for example in the framework of HPSG [Pollard and Sag 87]. The efficiency of the LIC ap- proach has been tested and compared with Wrob- lewski's method on a sample grammar of HPSG using a few test sentences for parsing and gener- ation. The overall processing time is reduced by 60% - 70% of the original processing time. See [Emele 91] for further discussion of optimizations available for specialized applications of this gen- eral unification algorithm. This paper also pro- vides a detailed metering statistics for all of the other unification algorithms that have been com- pared. References [A'ft-Kaci 84] Hassan Ait-Kaci. A Lattice Theo- retic Approach to Computation based on a Cal- culus of Partially Ordered Types Structures. Ph.D Dissertation, University of Pennsylvania, 1984. [A'/t-Kaci 86] Hassan A'/t-Kaci. An algebraic se- mantics approach to the effective resolution of type equations. Theoretical Computer Science 45, pp. 293-351, 1986 [Boyer/Moore 72] R. S. Boyer and J. S. Moore. The sharing of structures in theorem-proving programs. In B. Meltzer and D. Mitchie (eds.), Machine Intelligence 7, pp. 101-116, John Wi- ley and Sons, New York, New York, 1972. [Emele/Zajac 90a] Martin C. Emele and R~mi Za- jac. A fix-point semantics for feature type sys- tems. In Proceedings of the 2 nd International Workshop on Conditional and Typed Rewriting Systems, CTRS'90, Montreal, June 1990. [Emele/Zajac 90hi Martin C. Emele and R~mi Zajac. Typed unification grammars. In Proceed- ings of 13 th International Conference on Com- putational Linguistics, COLING-90, Helsinki, August 1990. [Emele 91] Martin C. Emele. Graph Unification using Lazy Non-Redundant Copying. Techni- cal Report AIMS 04-91, Institut fiir maschinelle Sprachverarbeitung, University of Stuttgart, 1991. [Godden 90] Kurt Godden. Lazy unification. In Proceedings of the 28 th, Annual Meeting of the Association for Computational Linguistics, ACL, pp. 180-187, Pittsburgh, PA, 1990. [Huet 76] G~rard Huet. Rdsolution d'Equations dons des Langages d'Ordre 1, 2, ..., w. Th~se de Doctorat d'Etat, Universit~ de Paris VII, France. 1976. [Hopcroft/Karp 71] J. E. Hopcroft and R. M. Karl). An Algorithm for testing the Equivalence of Finite Automata. Technical report TR-71- 114, Dept. of Computer Science, Corneil Uni- versity, Ithaca, NY, 1971. [Karttunen 86] Lauri Karttunen. D-PATR: A De- velopment Environment for Unification-Based Grammars. Technical Report CSLI-86-61, Cen- ter for the Study of Language and Information, Stanford, August, 1986. [Karttunen/Kay 85] Lauri Karttunen and Mar- tin Kay. Structure sharing with binary trees. In Proceedings of the 23 rd Annual Meeting of the Association for Computational Linguistics, ACL, pp. 133-136a, Chicago, IL, 1985. [Kogure 90] Kiyoshi Kogure. Strategic lazy in- cremental copy graph unification. In Proceed- ings of the 13 th Intl. Conference on Compu- tational Linguistics, COLING-90, pp. 223-228, Helsinki, 1990. [Pereira 85] Fernando C.N. Pereira. A structure- sharing representation for unification-based grammar formalisms. In Proceedings of the 23 rd Annual Meeting of the Association for Computational Linguistics, ACL, pp. 137-144, Chicago, IL, 1985. [Pereira/Shieber 84] Fernando C.N. Pereira and Stuart Shieber. 
The semantics of grammar for- malisms seen as computer languages. In Pro- ceedings of the lOth Intl. Conference on Com- putational Linguistics, COLING-84, Stanford, 1984. [Pollard and Sag 87] Carl Pollard and Ivan Sag. b~formation-Based Syntax and Semantics, Vol- ume L CSLI Lecture Notes No 13. Chicago Uni- versity Press, Chicago, 1987. [Rounds/Kasper 86] Williams C. Rounds and R. Kasper. A complete logical calculus for record structures representing linguistic information. In IEEE Symposium on Logic in Computer Sci- ence, 1986. [Shieber 86] Stuart M. Shieber. An Introduction to Unification-based Approaches to Grammar. CSLI Lecture Notes No 4. Chicago University Press, Chicago, 1986. [Warren 83] David H. D. Warren. An Abstract Prolog Instruction Set. Technical Note 309, SRI International, Menlo Park, CA, 1983. [Wroblewski 87] David A. Wroblewski. Nonde- structive graph unification. In Proceedings of the 6 th National Conference on Artificial Intel- ligence, AAAI, pp. 582-587, Seattle, WA, 1987. 330
1991
42
Logical Form of Complex Sentences in Task-Oriented Dialogues* Cecile T. Balkanski Harvard University, Aiken Computation Lab Cambridge, MA 02138 Introduction Although most NLP researchers agree that a level of "logical form" is a necessary step toward the goal of rep- resenting the meaning of a sentence, few people agree on the content and form of this level of representation. An even smaller number of people have considered the com- plex action sentences that are often expressed in task- oriented dialogues. Most existing logical form represen- tations have been developed for single-clause sentences that express assertions about properties or actual actions and in which time is not a main concern. In contrast, utterances in task-oriented dialogues often express unre- alized actions, e.g., (la), multiple actions and relations between them, e.g., (lb), and temporal information, e.g., (lc): (1) a. What about rereading the Operations manual? b. By getting the key and unlocking the gate, you get ten points. c. When the red fight goes off, push the handle. In the following sections, I discuss the issues that arise in defining the logical form of these three types of sen- tences. The Davidsonian treatment of action sentences is the most appropriate for my purposes because it treats actions as individuals [7]. For example, the logical form of "Jones buttered the toast" is a three place predicate, including an argument position for the action being de- scribed, i.e., 3x butter(jones, toast, x). The presence of the action variable makes it possible to represent op- tional modifiers as predications of actions and to refer to actions in subsequent discourse. Furthermore, and more importantly for the present purpose, it facilitates the representation of sentences about multiple actions and relations between them. Unrealized-action sentences A Davidsonian logical form of sentence (la), namely 3x reread(us, manual, x), makes the claim that there exists a particular action x. But this is not the intended meaning of the sentence. Instead, this sentence con- cerns a hypothetical action. The same problem arises with sentences (lb) and (lc) which state how typical actions are related or when to perform a future action. Apparently, Davidson did not have these types of action in mind when suggesting his theory of logical form. In fact, a closer look at the literature shows that the problem of representing action sentences that do *This research has been supported by U S West Advanced Technologies, by the Air Force Office of Scientific Research under Contract No.AFOSR-89-0273, and by an IBM Grad- uate Fellowship. 331 not make claims about actions that have or are oc- curring (i.e., actual actions) has been virtually ignored. Hobbs, who also adopts a Davidsonian treatment of ac- tion sentences, is one notable exception [11]. His "Pla- tonic universe" contains everything that can be spoken of and the predicate Exist is used to make statements about the existence in the actual universe of individu- als in the Platonic universe. For example, the formula Exists(x) Arun'(x, john) says that the action of John's running exists in the actual universe, or, more simply, that John runs. The approach I am currently investi- gating is to extend Itobbs' representation by introduc- ing predicates stating the existence of actions in future, hypothetical or typical worlds as well as in the actual world. 
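The existential quantifier was lost in the transcription of the formulas above ("3x" should be read as an existential, and the "A" in the Hobbs formula as a conjunction). In standard notation the logical forms cited in this section read roughly as follows; this is a reconstruction assuming the usual Davidsonian conventions, not an addition to the analysis.

  % Reconstructed notation for the formulas cited above.
  \begin{align*}
  \text{Jones buttered the toast}    &\;\leadsto\; \exists x\, \mathit{butter}(\mathit{jones}, \mathit{toast}, x)\\
  \text{(1a), Davidsonian reading}   &\;\leadsto\; \exists x\, \mathit{reread}(\mathit{us}, \mathit{manual}, x)\\
  \text{John runs (Hobbs)}           &\;\leadsto\; \mathit{Exist}(x) \land \mathit{run}'(x, \mathit{john})
  \end{align*}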
Another possibility is to adopt the standard philo- sophical approach to the representation of properties, and for that matter, of actions, that are not actually instantiated, namely possible worlds (cf. [13, 2]). Fur- thermore, and independently of the approach that is adopted, there is a need to identify the different types of unrealized actions and determine whether they should be distinguished in the logical form. Multi-clause sentences Another area of logical form that has not received much attention is the representation of sentences about multi- ple actions and relations between them. I have been investigating sentences including by- and to- purpose clauses because they are used to communicate two ac- tion relations, namely generation and enablement, which I have defined elsewhere [3]. In a Davidsonian logical form, the connectives "by" and "to" can be represented as two-place predicates ranging over action tokens1; e.g.: (2) To learn how to use the system, read the manual. learn(you, system, xa ) A read(you, manual, x2)A inorderto(x2, xa) Clauses may also be joined with coordination con- junctions, e.g., (3a), and the resulting constituent may participate in another action relation, as in (lb) and repeated below in (3b). I therefore represent these con- neetives by a three place predicate, e.g., and(xl, x2, x4) which is true if action x4 is the conjunction of actions xl and x2. In (3a), the action token x4 might seem superfluous, but note that it becomes necessary if that action is referred to in subsequent discourse (e.g., "Do aAlthough this problem interacts with the one discussed in the previous section, for the purpose of this presentation, I call Davidson's action variables action tokens and represent them as constants in the logical form. it fast!"); in (3b), the action token z4 can then be used as the first argument to the by predicate: (3) a. Get the key and unlock the gate. get(you, key, xl ) ^ unZock(yo,,, gate, x2 )^ and(xl, x2, x4) b. By getting the key and unlocking the gate, you get ten points. get(you, key, xl) ^ unlock(you, gate, z2)^ get(you, lOpoints, z3) ^ and(z1, z2, z4) ^ by(x4, za) In the above logical forms, I assume that the by and inorderto predicates denote a two place relation express- ing the "ideal meaning" of the corresponding English connective [9]. There is not necessarily a one-to-one mapping between particular linguistic expressions and action relations, and subsequent pragmatic processing of the logical forms will further interpret these relations. Representing the embedded clause as an additional ar- gument to the predicate representing the matrix clause (e.g., [5]), or representing the relation as a binary sen- tential operator (e.g., [16]) are alternative representa- tions, both of which suffer from problems discussed by Davidson because action tokens become irrelevant. Fur- thermore, the first does not capture the intuitive notion that these sentences express action relations, and the second introduces a lack of homogeneity between logical forms of sentences involving action relations and those that do not. Time Still another feature that has been overlooked in the study of logical form is time. Although a number of pa- pers include time in their logical forms, most do not dis- cuss their treatment of time and consider primarily past and present tense examples about actual actions (e.g., [1, 5, 1412). The lack of concern for temporal issues is also characteristic of the literature on semantic interpre- tation (e.g., [10, 15, 16]). 
On the other hand, there is a vast literature on the interpretation and representation of tense, aspect and temporal modifiers, but these pa- pers do not describe the logical forms from which their representations are generated (e.g., [4, 6, 8, 12, 17, 18]). Clearly, there is a missing link between the literature on logical form and that on tense and aspect. Providing such a link is one of the goals of this research. David- son's treatment of action sentences does not provide a fully satisfying starting point. Although his initial pa- per does not include any example of temporal modi- fiers, he would probably represent them as predicates over action tokens, e.g., next_week(x), a representation that does not make explicit reference to time (to which anaphors might refer). Introducing a time predicate, e.g., time(z, next_week), solves this particular problem, but introduces other complexities because this predicate would not be adequate for all temporal modifiers (e.g., compare Sue will leave in two hours and Sue reached the top in two hours). Given that the aspectual type and tense of the verb, along with the presence of adverbials and common sense knowledge all interact in the inter- pretation of the temporal information in a sentence [12], it might be preferable for such reasoning to be performed with the logical form as input rather than as output. 2Moore [14] addresses time issues, but omits future tense sentences and acknowledges problematic interactions be- tween his event abstraction operator and time. Conclusion Although many researchers have proposed formalisms for simple action sentences, very few of them have ad- dressed the issues that arise when extending those for- malisms to the more complex sentences that occur in task-oriented dialogues. There has been work in each of the above areas, but this research has been fragmentary and still needs to be integrated with that on the logi- cal form of action sentences. Ironically, the conclusion that Moore arrived at, ten years ago, is still valid today [14]: "If real progress is to be made on understanding the logical form of natural-language utterances, it must be studied in a unified way and treated as an important research problem in its own right." In ray talk, I will present an initial attempt to do so. References [1] H. Alshawi & J. van Eijck. Logical form in the core language engine. Proceedings of the ACL, 1989. [2] D. Appelt. Planning English referring expressions. Artificial Intelligence 26, 1985. [3] C. Balkanski. Modelling act-type relations in col- laborative activity. Technical Report TR-23-90, Harvard University, 1990. [4] M. Brent. A simplified theory of tense represen- tations and constraints on their composition. Pro- ceedings of the ACL, 1990. [5] L. Creary. NFLT: A language of thought for rea- soning about actions, 1983. working paper. [6] M. Dalrymple. The interpretation of tense and as- pect in English. Proceedings of the ACL, 1988. [7] D. Davidson. The logical form of action sentences. In N. Rescher (ed), The Logic of Decision and Ac- tion. University Pittsburgh Press, 1967. [8] M. Harper & E. Charniak. Time and tense in en- glish. Proceedings of the ACL, 1986. [9] A. Herskovits. Language and Spatial Cognition. Cambridge University Press, 1986. [10] G. Hirst. Semantic interpretation and ambiguity. Artificial Intelligence, 34, 1988. [11] J. Hobbs. OntologicM promiscuity. Proceedings of the ACL, 1985. [12] M. Moens & M. Steedman. Temporal ontology and temporal reference. Computational Linguistics, 14(2), 1988. 
[13] R. Moore. A formal theory of knowledge and action. In J. Hobbs & R. Moore (eds), Formal Theories of the Commonsense World. Ablex, 1985.
[14] R. Moore. Problems in logical form. Proceedings of the ACL, 1981.
[15] M. Pollack & F. Pereira. An integrated framework for semantic and pragmatic interpretation. Proceedings of the ACL, 1988.
[16] L. Schubert & F. Pelletier. From English to logic: Context-free computation of 'conventional' logical translations. Computational Linguistics, 10, 1984.
[17] B. Webber. Tense as discourse anaphor. Computational Linguistics, 14(2), 1988.
[18] K. Yip. Tense, aspect, and the cognitive representation of time. Proceedings of IJCAI, 1985.
Action representation for NL instructions Barbara Di Eugenio* Department of Computer and Information Science University of Pennsylvania Philadelphia, PA dieugeni~linc.cis.upenn.edu 1 Introduction The need to represent actions arises in many differ- ent areas of investigation, such as philosophy [5], se- mantics [10], and planning. In the first two areas, representations are generally developed without any computational concerns. The third area sees action representation mainly as functional to the more gen- eral task of reaching a certain goal: actions have of- ten been represented by a predicate with some argu- ments, such as move(John, block1, room1, room2), augmented with a description of its effects and of what has to be true in the world for the action to be executable [8]. Temporal relations between ac- tions [1], and the generation relation [12], [2] have also been explored. However, if we ever want to be able to give in- structions in NL to active agents, such as robots and animated figures, we should start looking at the char- acteristics of action descriptions in NL, and devising formalisms that should be able to represent these characteristics, at least in principle. NL action de- scriptions axe complex, and so are the inferences the agent interpreting them is expected to draw. As far as the complexity of action descriptions goes, consider: Ex. 1 Using a paint roller or brush, apply paste to the wall, starting at the ceiling line and pasting down a few feet and covering an area a few inches wider than the width of the fabric. The basic description apply paste to the wall is augmented with the instrument to be used and with direction and eztent modifiers. The richness of the possible modifications argues against representing actions as predicates having a fixed number of ar- guments. Among the many complex inferences that an agent interpreting instructions is assumed to be able to draw, one type is of particular interest to me, namely, the interaction between the intentional description of an action - which I'll call the goal or the why- and *This research was supported by DARPA grant no. N0014- 85 -K0018. 333 its executable counterpart - the how 1. Consider: Ex. 2 a) Place a plank between two ladders to create a simple scaffold. b) Place a plank between two ladders. In both a) and b), the action to be executed is aplace a plank between two ladders ~. However, Ex. 2.b would be correctly interpreted by placing the plank anywhere between the two ladders: this shows that in a) the agent must be inferring the proper po- sition for the plank from the expressed why "to create a simple scaffoldL My concern is with representations that allow specification of both bow's and why's, and with rea- soning that allows inferences such as the above to be made. In the rest of the paper, I will argue that a hybrid representation formalism is best suited for the knowledge I need to represent. 2 A hybrid action representa- tion formalism As I have argued elsewhere based on analysis of nat- urally occurring data [14], [7], actions - action types, to be precise - must be part of the underlying ontol- ogy of the representation formalism; partial action descriptions must be taken as basic; not only must the usual participants in an action such as agent or patient be represented, but also means, manner, di- rection, extent etc. Given these basic assumptions, it seems that knowledge about actions falls into the following two categories: 1. 
Terminological knowledge about an action- type: its participants and its relation to other action-types that it either specializes or ab- stracts - e.g. slice specializes cut, loosen a screw carefully specializes loosen a screw. 2. Non-terminological knowledge. First of all, knowledge about the effects expected to occur 1V~ta.t executable means is debatable: see for example [12], p. 63ff. when an action of a given type is performed. Because effects may occur during the perfor- mance of an action, the basic aspectua] profile of the action-type [11] should also be included. Clearly, this knowledge is not terminological; in Ex. 3 Turn the screw counterclockwise but don't loosen it completely. the modifier not ... completely does not affect the fact that don't loosen it completely is a loos- ening action: only its default culmination con- dition is affected. Also, non-terminological knowledge must in- clude information about relations between action-types: temporal, generation, enablement, and testing, where by testing I refer to the rela- tion between two actions, one of which is a test on the outcome or execution of the other. The generation relation was introduced by Gold- man in [9], and then used in planning by [1], [12], [2]: it is particularly interesting with respect to the representation of how's and why's, because it appears to be the relation holding between an intentional description of an action and its executable counterpart - see [12]. This knowledge can be seen as common.sense planning knowledge, which includes facts such as to loosen a screw, you have to turn it coun- terelockwise, but not recipes to achieve a certain goal [2], such as how to assemble a piece of fur- niture. The distinction between terminological and non- terminological knowledge was put forward in the past as the basis of hybrid KR system, such as those that stemmed from the KL-ONE formalism, for example KRYPTON [3], KL-TWO [13], and more recently CLASSIC [4]. Such systems provide an assertional part, or A-Box, used to assert facts or beliefs, and a terminological part, or T-Box, that accounts for the meaning of the complex terms used in these asser- tions. In the past however, it has been the case that terms defined in the T-box have been taken to cor- respond to noun phrases in Natural Language, while verbs are mapped onto the predicates used in the as- sertions stored in the A-box. What I am proposing here is that, to represent action-types, verb phrases too have to map to concepts in the T-Box. I am advo- cating a 1:1 mapping between verbs and action-type names. This is a reasonable position, given that the entities in the underlying ontology come from NL. The knowledge I am encoding in the T-box is at the linguistic level: an action description is composed of a verb, i.e. an action-type name, its arguments and possibly, some modifiers. The A-Box contains the non-terminological knowledge delineated above. I have started using CLASSIC to represent actions: it is clear that I need to tailor it to my needs, because 334 it has limited assertional capacities. I also want to explore the feasibility of adopting techniques similar to those used in CLASP [6] to represent what I called common-sense planning knowledge: CLASP builds on top of CLASSIC to represent actions, plans and scenarios. However, in CLASP actions are still tra- ditionally seen as STRIPS-like operators, with pre- and post-conditions: as I hope to have shown, there is much more to action descriptions than that. References [1] J. Allen. 
Towards a general theory of action and time. Artificial Intelligence, 23:123-154, 1984. [2] C. Balkanski. Modelling act-type relations in collab- orative activity. Technical Report TR-23-90, Cen- ter for Research in Computing Technology, Harvard University, 1990. [3] R. Brachman, R.Fikes, and H. Levesque. KRYP- TON: A Functional Approach to Knowledge Repre- sentation. Technical Report FLAIR 16, Fairchild Laboratories for Artificial Intelligence, Palo Alto, California, 1983. [4] R. Bra~hman, D. McGninness, P. Patel-Schneider, L. Alperin Resnick, and A. Borgida. Living with CLASSIC: when and how to use a KL-ONE-IIke lan- guage. In J. Sowa, editor, Principles of Semantic Networks, Morgan Kaufmann Publishers, Inc., 1990. [5] D. Davidson. Essays on Actions and Events. Oxford University Press, 1982. [6] P. Devanbu and D. Litman. Plan-Based Termino- logical Reasoning. 1991. To appear in Proceedings of KR 91, Boston. [7] B. Di Eugenio. A language for representing action descriptions. Preliminary Thesis Proposal, Univer- sity of Pennsylvania, 1990. Manuscript. [8] R. Fikes and N. Nilsson. A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2:189-208, 1971. [9] A. Goldman. A Theory of Human Action. Princeton University Press, 1970. [10] R. Jackendoff. Semantics and Cognition. Current Studies in Linguistics Series, The MIT Press, 1983. [11] M. Moens and M. Steedman. Temporal Ontology and Temporal Reference. Computational Linguis- tics, 14(2):15-28, 1988. [12] M. Pollack. Inferring domain plans in question- answering. PhD thesis, University of Pennsylvania, 1986. [13] M. VilMn. The Restricted Language Architecture of a Hybrid Representation System. In IJCAI-85, 1985. [14] B. Webber and B. Di Eugenio. Free Adjuncts in Natural Language Instructions. In Proceedings Thir- teen& International Conference on Computational Linguistics, COLING 90, pages 395-400, 1990.
Extracting Semantic Roles from a Model of Eventualities Sylvie Ratt6 Universit6 du Qu6bec fi MontrSal / Linguistics Department C.P. 8888, Succ. "A" / Montreal, QC / H3C 3P8 e-mail: [email protected] The notion of semantic roles is usually at- tributed to Fillmore [8], however its history can be traced back through TesniSre [16] to Panini. Following this tradition, many researchers rec- ognize their usefulness in the description of language -- even if they do not agree on their significance [7]. However, a weak or strong commitment to this notion does not elude the fact that it proves to be very difficult to settle on a finite set of labels along with their formal def- initions. The dilemma resulting from this challenge is well known: to require a univocal identification by each role results in an increase in their number while to abstract their semantic content gives rise to an inconsistent set. If a fi- nite set is possible, one has to find a proper balance between these two extremes. As a result, every flavor of roles have been used from time to time in linguistics (e.g., GB, in the spirit of Fillmore, HPSG, in the line of situation seman- tics), and also in AI [10, see also 4]. Between the total refusal to use those labels (as in GPSG) and the acceptance of individual roles (as in HPSG) there is a wide range of pro- posals on what constitute a good set of L(inguistic)-Roles [7] and, as a consequence, on the way to differentiate between them and define them. Most of the definitions have been based on the referential properties that can be associated with each role bearer (e.g. an AGENT is a volitional animate entity). Even if this approach is necessary at one time or another, this kind of definition inevitably leads to either the "let's create another role" or the "let's abstract its definition" syndromes. Properties are not always of the static kind though. Sometimes, dynamic properties are also used (e.g. an AGENT is the perceived instigator of the action). Since one of the desired characteristic of a roles system is the power to discriminate events [5] (another "desired" property being to offer an easier selection of grammatical functions), the recognition of semantic roles should be linked to the interpretation of the event, that is to their dy- namic properties. In a study on locative verbs in French, Boons [3] has convincingly shown the importance of taking into account aspectual cri- teria in the description of a process, suggesting that GOAL and SOURCE roles should be reinvesti- gated in the light of those criteria. It is our hypothesis that proliferation of roles is a natural phenomenon caused by the specialized proper- ties required by the interpretation of a predicate within a specific semantic field: to overlook these properties yields the over-generalization already mentionned. The best way to approach the expansion/contraction dilemma is to search for the minimal relations required for a dynamic interpretation of events (in terms of their aspec- tual criteria and through an identification of all the participants in i0. Our first step toward this abstraction was to consider each participant (individuals or properties) either as a localized entity (a token) or a location (a place), and to determine its role in the realization of the process expressed by the predicate. The model exhibits some common points with a localist approach [1,11] since it recognizes (in an abstract sense) the importance of spatio-temporal "regions" in the process of individuation of events [14]. 
To express the change of localization (again in an abstract sense), the notion of transitions is used. The entire construction is inspired by Petri net theory [15]: a set S of places, a set T of transitions, a flow relation F ⊆ (S × T) ∪ (T × S), and markers are the categories used to define the structure of a process (and, as a consequence, of the events composing it). For example, the dynamic representation of Max embarque la caisse sur le cargo [3] / Max embarks the crate on the cargo boat can be analyzed in two steps. First there is a transition from an initial state IS where the crate is not on the cargo boat to a final state FS where the crate is on the cargo boat. The final state can be expressed by the static passive, la caisse est embarquée sur le cargo / the crate was embarked on the cargo boat, and is schematized in (2). One of the arguments (cargo boat) is used as a localization while the other argument is used as a localized entity (crate), the THEME according to Gruber [9]. The initial state can be expressed (in this case) by the negation of the final state and is schematized in (1). The realization of the entire process is then represented by the firing of the net, which can be illustrated by the snapshots (1) and (2).

[Schematizations (1) and (2) are Petri-net snapshots: in (1) the marker sits in the place IS, connected through a transition to FS; in (2) the marker has moved to FS.]

To integrate the participation of "Max" in the model, we recognize the importance of causality in the discrimination of events [13, 14]. Since the cause is understood to be the first entity responsible for the realization of events [6], the obvious schematization is (3).

[Schematizations (3) and (4) are diagrams, not reproduced here.]

It is possible that a recursive definition (places and transitions) will be necessary to express "properly" the causation, the localization of events and processes, or the concept of dynamic states [2, 14]. In that case, the schematization could then be (4). But we can achieve the same result through a proper type definition of the transition expressing the cause: (s × t) → (t × ((s × t) → (t × s))), where "s" is a place and "t" a transition.

This approach to semantic role determination is close to the one undertaken by Jackendoff [12]. His identification of each role with a particular argument position in a conceptual relation is given here by the way it participates in the firing of the net. (It is our guess that most of the conceptual relations used by Jackendoff can be expressed within this model, giving them an operational interpretation.) The model has the advantage of giving an explicit and simple definition of relations that do not have the same semantic range (e.g. CAUSE vs FROM vs AT).

The analysis of locative processes using abstract regions instead of the traditional roles is better because it is, we think, the real basis of those interpretations. Abstracting away referential properties gives the basic interactions expressed by the predicate. Specifying those properties within a specific semantic field gives rise to the set of roles we are used to (e.g. within the spatial field, schematizations (1) and (2) express SOURCE and GOAL roles). With this model we were able to give an operational description of the difference between Max charge des briques dans le camion / Max loads bricks in the truck and Max charge le camion de briques / Max loads the truck with bricks. The schematizations take into account which participant is responsible for each transition firing and can thus lead us to the "final" place.
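The place/transition machinery just described can be given a minimal computational rendering. The following Python sketch is purely illustrative: the class names, the cause attribute, and the firing routine are assumptions made here for exposition and do not correspond to an existing implementation of the model.

```python
# Minimal sketch of the place/transition vocabulary applied to
# "Max embarque la caisse sur le cargo" (schematizations (1)-(3)).

class Place:
    def __init__(self, name, marked=False):
        self.name = name
        self.marked = marked          # a marker: the localized entity is "here"

class Transition:
    """Firing moves markers from the input places to the output places."""
    def __init__(self, name, inputs, outputs, cause=None):
        self.name, self.inputs, self.outputs = name, inputs, outputs
        self.cause = cause            # first entity responsible for the event

    def enabled(self):
        return all(p.marked for p in self.inputs)

    def fire(self):
        if not self.enabled():
            raise RuntimeError(f"{self.name} is not enabled")
        for p in self.inputs:
            p.marked = False
        for p in self.outputs:
            p.marked = True

initial = Place("IS: crate not on the cargo boat", marked=True)   # snapshot (1)
final   = Place("FS: crate on the cargo boat")
embark  = Transition("embark", [initial], [final], cause="Max")   # cf. (3)

embark.fire()
print(final.marked)   # True: the final state of snapshot (2) is reached
```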
As a first approximation of these continuous processes, (5) and (6) are proposed (the direct contribution of the instrument is also introduced). But recognition, as participants, of the quantity of bricks in (5) and the capacity of the truck in (6) results in the schematizations (7) and (8) (both display a specialization of their direct object in order to complete the semantic interpretation).

[Schematizations (5)-(8) are diagrams labelled with participants such as Max, brick, truck, IS (initial) and FS (final); they are not reproduced here.]

By its simplicity, the model can thus give rise to "confusion" over some roles, in accordance with the general tendency to observe "role clusters". The resulting notation also seems an interesting way to explore the differences between static and dynamic processes, differences that are not very "visual" if one is using a static notation. Our research is now directed toward the analysis of the system when more semantic content is used. We are testing whether these add-ons have impacts on its behaviour, while analyzing whether the partial semantic interpretation gives rise to the predicted syntactic forms (that is, how each potential participant is grammaticalized).

References
[1] Anderson, J.M., 1971. The grammar of case, Towards a localistic theory. CUP: Cambridge.
[2] Bach, E., 1986. The Algebra of Events. Linguistics and Philosophy 9: 5-16.
[3] Boons, J.-P., 1987. La notion sémantique de déplacement dans une classification syntaxique des verbes locatifs. Langue française 76, Dec: 5-40.
[4] Bruce, B., 1975. Case Systems for Natural Language. Artificial Intelligence 6, 327-360.
[5] Carlson, G., 1984. Thematic roles and their role in semantic interpretation. Linguistics 22: 259-279.
[6] Delancey, S., 1984. Notes on Agentivity and Causation. Studies in Language, 8.2: 181-213.
[7] Dowty, D. R., 1989. On the Semantic Content of the Notion of "Thematic Role", in Properties, Types and Meaning, II. G. Chierchia, B. H. Partee, & R. Turner (eds), Kluwer: Boston, 69-129.
[8] Fillmore, C. J., 1968. The Case for Case, in Universals in Linguistic Theory. Bach & Harms (eds), Holt, Rinehart & Winston: New York, 1-88.
[9] Gruber, J., 1976. Lexical structures in syntax and semantics. North-Holland: New York.
[10] Hirst, G., 1987. Semantic interpretation and the resolution of ambiguity. CUP: New York.
[11] Hjelmslev, L., 1972. La catégorie des cas. Wilhelm Fink Verlag München: Band, (1935-1937).
[12] Jackendoff, R. S., 1990. Semantic Structures. MIT Press: Cambridge MA.
[13] Michotte, A. E., 1954. La perception de la causalité. Pub. univ. de Louvain, Erasme S.A.: Paris.
[14] Miller, G. A. and P. N. Johnson-Laird, 1976. Language and Perception. Belknap Press of Harvard University Press: Cambridge MA.
[15] Reisig, W., 1985. Petri Nets, An Introduction. Springer-Verlag: New York.
[16] Tesnière, L., 1959. Éléments de Syntaxe Structurale. Klincksieck: Paris.
Case Revisited: In the Shadow of Automatic Processing of Machine-Readable Dictionaries
Fuliang Weng
Computing Research Lab, New Mexico State University, Las Cruces, NM 88003

[Footnote *: I would like to express thanks to Dr. L. Guthrie, Dr. D. Farwell and Prof. Y. Wilks for comments and encouragement. This project is supported in part by CRL. Some of the ideas were developed during my stay in CS/Fudan and CMT/CMU.]

This paper discusses the work of automatically extracting Case Frames from Machine-Readable Dictionaries based on a three layer a posteriori Case Theory [5]. The theory is intended to deal with two problems:

1. To dynamically adjust grains of Cases. This is where a posteriori comes from.
2. To provide a procedure to determine Cases. This is where three layer comes from.

The three layers are:

1. base layer: This layer is intended to accomplish transformations of words to concepts by explicating language and word specific implicants, e.g., for the verb eat in the intransitive case, its subject is the eater, while for the verb break in the intransitive case, its subject is the broken.

2. default layer: in this layer, implicit assumptions of naive theories are made explicit, e.g., for the concept see, there are two different views towards its subject: if a person who uses this concept believes that seeing is just a process of passive perception, then this person will assign to its subject a passive¹ Case such as experiencer; if a person who uses this concept believes that seeing is a process of active selection, then this person will assign to its subject an active Case such as agent. [Footnote 1: The words passive/active are used to indicate different levels of activeness. In what follows, Cases such as agent and instrument have somewhat different meanings than the conventional ones. We use them just for referring to a group of phenomena which are related to their names.]

3. context layer: in this layer, Cases are further clarified upon any requests from current tasks, associated context and personal belief systems (knowledge), e.g., in the sentence The commander forced the soldier to break the door, whether the soldier should be assigned agent, instrument, active, or something else, should be decided by both contextual information and needs.

Arguments for the three layer theory can be found in [5]. Relevant knowledge sources for arriving at different layers are:

1. Formation of the base layer: the formation is based on knowledge sources which mainly come from syntactic codes and definitions in LDOCE (Longman Dictionary of Contemporary English). Examples in LDOCE also contribute to this process [1].

2. Formation of the default layer: the formation is based on the assumption that naive theories are weakly consistent, which implies that certain semantic classifications may be consistent with certain naive theories: verb, noun, preposition and adjective classifications based on semantic and pragmatic codes in LDOCE, and examples in LDOCE, can help to obtain such theories.

3. Formation of the context layer: the unification of the base layer and the default layer forms an initial representation of the context layer; its further development mainly depends on task, contextual needs and personal belief systems. The initial representation is a tuple with three components: entity-role, environment and endurance.
An example of an initial representation for break is: ((+) (u-) (0)) break ((-) (u-) (0)), where (+) stands for active, (-) for passive, (u-) for indexing of the internal environment, and (0) for duration. If the task is MT, the requirement for understanding could be shallow, as pointed out by Wilks [7], although he did not discuss any dynamic grain adjustment. Contextual information can be conveyed by active features.

Following the boot-strapping principle, we are starting with 750 genus verbs in the defining word list of LDOCE, then gradually expanding them to all the verbs defined in LDOCE. There are various subtasks associated with this work:

1. Dynamically adjusting classifications of relational concepts (mainly reflected by verbs): we are trying to get a set of core verbs as prototypes of classes based on certain statistics and genus verb sense nets (the latter is being constructed by G. Stein). A primary set of core verbs has been chosen; functional verbs are carefully excluded. The criterion for dynamically adjusting verb classes is: Cj(d) = {y : ||y − x|| < d, x ∈ Cj}, where the Cj are core classes and || · || is defined as: ||y − x|| = min{ i : i is the number of links on P, P is any path connecting x and y }. We can select a reasonable distance for Cj(d) by detecting slopes with points in the distribution of members. Classification can also be done within connectionist models.

2. From the prototypes, naive theories may be formed, and then converted into representations in the default layer.

3. Dynamic creation of Cases. Initial representations in the context layer may be adjusted and new Cases created according to a set of contextual conditions (mainly when mismatches happen).

4. A set of rules can be constructed to get the conventional Cases for typical situations.

Many Case Theories are focused on verbs. In our situation, all four major categories (verb, noun, adjective and preposition) must be paid enough attention to, since there are many verbs defined by verb phrases in LDOCE, e.g., a definition entry of the verb take in LDOCE contains get possession of. In order to select a right Case frame and verb class for each verb, we need something beyond what we have presented, although it does not conflict with what we have proposed, and it is very plausible that the procedure used here may be adapted to establish Case frames for nouns, adjectives and prepositions. This task may benefit from [2].

References
[1] B. Atkins et al., Explicit and Implicit Information in Dictionaries, CSL Report 5, Princeton University, 1986.
[2] R. Bruce and L. Guthrie, Genus Disambiguation: A Study in Weighted Preference, MCCS-91-207, CRL/NMSU, 1991.
[3] C. Fillmore, The Case for Case, in Universals in Linguistic Theory, E. Bach and R. Harms (eds.), Holt, Rinehart, and Winston, 1968.
[4] R. Schank, Conceptual Information Processing, North-Holland Publishing Co., 1975.
[5] F. Weng, A Three-Layer a posteriori Case Theory, in preparation, 1991.
[6] W. Wilkins, Syntax and Semantics, Academic Press, Inc., California, 1988.
[7] Y. Wilks, An Artificial Intelligence Approach to Machine Translation, in Computer Models of Thought and Language, R. Schank and K. Colby (eds.), W.H. Freeman Co., 1973.
Discovering the Lexical Features of a Language Eric Brill * Department of Computer and Information Science University of Pennsylvania Philadelphia, PA 19104 emaih [email protected] 1 Introduction This paper examines the possibility of automatically discovering the lexieal features of a language. There is strong evidence that the set of possible lexical fea- tures which can be used in a language is unbounded, and thus not innate. Lakoff [Lakoff 87] describes a language in which the feature -I-woman-or-fire-or- dangerons-thing exists. This feature is based upon ancient folklore of the society in which it is used. If the set of possible lexieal features is indeed unbounded, then it cannot be part of the innate Universal Gram- mar and must be learned. Even if the set is not un- bounded, the child is still left with the challenging task of determining which features are used in her language. If a child does not know a priori what lexical fea- tures are used in her language, there are two sources for acquiring this information: semantic and syntactic cues. A learner using semantic cues could recognize that words often refer to objects, actions, and proper- ties, and from this deduce the lexical features: noun, verb and adjective. Pinker [Pinker 89] proposes that a combination of semantic cues and innate semantic primitives could account for the acquisition of verb fea- tures. He believes that the child can discover semantic properties of a verb by noticing the types of actions typically taking place when the verb is uttered. Once these properties are known, says Pinker, they can be used to reliably predict the distributional behavior of the verb. However, Gleitman [Gleitman 90] presents evidence that semantic cues axe not sufficient for a child to acquire verb features and believes that the use of this semantic information in conjunction with information about the subcategorization properties of the verb may be sufficient for learning verb features. This paper takes Gleitman's suggestion to the ex- treme, in hope of determining whether syntactic cues may not just aid in feature discovery, but may be all that is necessary. We present evidence for the suffi- ciency of a strictly syntax-based model for discovering *The author would like to thank Mitch Marcus for valuable help. This work was supported by AFOSR jointly under grant No. AFOSR-90-0066, and by ARO grant No. DAAL 03-89- C0031 PRI. the lexical features of a language. The work is based upon the hypothesis that whenever two words are se- mantically dissimilar, this difference will manifest it- self in the syntax via playing out the notion 51]). Most, if not all, For instance, there is lexical distribution (in a sense, of distributional analysis [Harris features have a semantic basis. a clear semantic difference be- tween most count and mass nouns. But while meaning specifies the core of a word class, it does not specify precisely what can and cannot be a member of a class. For instance, furniture is a mass noun in English, but is a count noun in French. While the meaning of fur- niture cannot be sufficient for determining whether it is a count or mass noun, the distribution of the word Call. Described below is a fully implemented program which takes a corpus of text as input and outputs a fairly accurate word class list for the language in ques- tion. Each word class corresponds to a lexical feature. The program runs in O(n 3) time and O(n 2) space, where n is the number of words in the lexicon. 
2 Discovering Lexical Features

The program is based upon a Markov model. A Markov model is defined as:

1. A set of states
2. Initial state probabilities init(x)
3. Transition probabilities trans(x, y)

An important property of Markov models is that they have no memory other than that stored in the current state. In other words, where X(t) is the value given by the model at time t,

Pr(X(t) = a_t | X(t-1) = a_{t-1}, ..., X(0) = a_0) = Pr(X(t) = a_t | X(t-1) = a_{t-1})

In the model we use, there is a unique state for each word in the lexicon. We are not concerned with initial state probabilities. Transition probabilities represent the probability that word b will follow a and are estimated by examining a large corpus of text. To estimate the transition probability from state a to state b:

1. Count the number of times b follows a in the corpus.
2. Divide this value by the number of times a occurs in the corpus.

Such a model is clearly insufficient for expressing the grammar of a natural language. However, there is a great deal of information encoded in such a model about the distributional behavior of words with respect to a very local context, namely the context of immediately adjacent words. For a particular word, this information is captured in the set of transitions and transition probabilities going into and out of the state representing the word in the Markov model.

Once the transition probabilities of the model have been estimated, it is possible to discover word classes. If two states are sufficiently similar with respect to the transitions into and out of them, then it is assumed that the states are equivalent. The set of all sufficiently similar states forms a word class. By varying the level considered to be sufficiently similar, different levels of word classes can be discovered. For instance, when only highly similar states are considered equivalent, one might expect animate nouns to form a class. When the similarity requirement is relaxed, this class may expand into the class of all nouns. Once word classes are found, lexical features can be extracted by assuming that there is a feature of the language which accounts for each word class. Below is an example actually generated by the program:

With very strict state similarity requirements, HE and SHE form a class. As the similarity requirement is relaxed, the class grows to include I, forming the class of singular nominative pronouns. Upon further relaxation, THEY and WE form a class. Next, (HE, SHE, I) and (THEY, WE) collapse into a single class, the class of nominative pronouns. YOU and IT collapse into the class of pronouns which are both nominative and accusative. Note that next, YOU and IT merge with the class of nominative pronouns. This is because the program currently deals with bimodals by eventually assigning them to the class whose characteristics they exhibit most strongly. For another example of this, see HER below.
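The estimation and clustering steps just described can be rendered as a short sketch. The following Python fragment is illustrative only: it uses outgoing transitions alone, and the similarity measure, threshold and toy corpus are assumptions introduced here rather than details of the actual program.

```python
from collections import Counter, defaultdict

def transition_probs(tokens):
    """Estimate trans(a, b) = count(a b) / count(a) from a token list."""
    unigrams, bigrams = Counter(tokens), Counter(zip(tokens, tokens[1:]))
    probs = defaultdict(dict)
    for (a, b), n in bigrams.items():
        probs[a][b] = n / unigrams[a]
    return probs

def similarity(p, q):
    """Overlap of two outgoing-transition distributions (one possible measure)."""
    keys = set(p) | set(q)
    return sum(min(p.get(k, 0.0), q.get(k, 0.0)) for k in keys)

def word_classes(probs, threshold):
    """Group words whose transition profiles are pairwise above the threshold."""
    classes = []
    for w in probs:
        for cls in classes:
            if all(similarity(probs[w], probs[v]) >= threshold for v in cls):
                cls.append(w)
                break
        else:
            classes.append([w])
    return classes

corpus = "he went home she went home they went away we went away".split()
print(word_classes(transition_probs(corpus), threshold=0.9))
# [['he', 'she', 'they', 'we'], ['went'], ['home'], ['away']]
```

Even on this toy corpus the nominative pronouns fall into a single class, mirroring the behavior described above; lowering the threshold merges classes further.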
We are currently attempting to automatically acquire the suffixes of a language, and then trying to class words based upon how they distribute with respect to suffixes. One problem with this work is that it is difficult to judge results. One can eye the results and see that the lexical features found seem to be correct, but how can we judge that the features are indeed the correct ones? How can one set of hypothesized features mean- ingfully be compared to another set? We are currently working on an information-theoretic metric, similar to that proposed by Jelinek [Jelinek 90] for scoring prob- abilistic context-free grammars, to score the quality of hypothesized lexical feature sets. References [Francis 82] Francis, W. and H. Kucera. (1982) Frequency Anal- ysis o.f English Usage: Le~c.icon and Grammar. Houghton Mifflin Co. [G|eitman 90] G|eitman, Lila. (1990) "The Structural Sources of Verb Meanings." Language Acquisition, Voltmae 1, pp. 3-55. [Harris 51] Harris, Zeli 8. (1951) Structural Lingulstics. Chicago: University of Chicago Press. [Jelinek 90] Jellnek, F., J.D. Lafferty & R.L. Mercer. (1990) "Basic Methods of Probahilistic Context Free Grvannmrs." I.B.M. Technical Report, RC 16374. [Lakoff87] Lakoff, G. (1987) Women, Fire and Dangerous Things: What Categories Reveal About the Mind. Chicago: University of Chicago Press. [Pinker 89] Pinker, S. Learnability and Cognition. Cambridge: MIT Press. 340
Lexical Disambiguation: Sources of Information and their Statistical Realization Ido Dagan * Computer Science Department, Technion, Haifa, Israel and IBM Scientific Center, Technion City, Haifa, Israel Abstract Lexieal disambiguation can be achieved using differ- ent sources of information. Aiming at high perfor- mance of automatic disambiguation it is important to know the relative importance and applicability of the various sources. In this paper we classify sev- eral sources of information and show how some of them can be achieved using statistical data. First evaluations indicate the extreme importance of local information, which mainly represents lexical associ- ations and seleetional restrictions for syntactically related words. 1 Disambiguation Sources The resolution of lexical ambiguities in unrestricted text is one of the most difficult tasks of natural lan- guage processing. In machine translation we are confronted with the related task of target word se- lection - the task of deciding which target language word is the most appropriate equivalent of a source language word in context. In contrast to compu- tational systems, humans seem to select the correct sense of an ambiguous word without much effort and usually without even being aware to the existence of an ambiguous situation. This fact naturally led researches to point out various sources of informa- tion which may provide the necessary cues for dis- ambiguation, either for humans or machines. The following paragraphs classify these sources into two major types, based on either understanding of the text or frequency characteristics of it. One kind of information relates to the under- standing of the meaning of the text, using semantic and pragmatic knowledge and applying reasoning mechanisms. The following sentences, taken from foreign news sections in the Israeli Hebrew press, demonstrate how different levels of understanding can provide the disambiguating information. (1) hayer ha-bayit ha-'elyon shel ha-parlament ha- *This research was partially supported by grant number 120-7'41 of the Israel Council for Research and Development 341 sovieti zaka be-monitin ke-hoker shel ha-sh_hitut be-kazahstan. This sentence translates into English as: (2) The member of the upper house of the soviet parliament acquired a reputation as an investi- gator of the corruption in Kazakhstan. The two most frequent senses of the ambiguous noun '_hayer' correspond to the English words 'friend' and 'member'. In the above example, the information for selecting the correct sense is provided by the seman- tic knowledge that 'a house of parliament' typically has members but not friends. Computationally this kind of information is usually captured by a shal- low semantic model of selectional restrictions. In other cases, such as example (3), it is necessary to use deeper understanding of the text, which involves some level of reasoning: (3) be-het'em le-hoq ha-hagira ha-hadash tihye le- kol ezrah.h_ sovieti ha-zkut ha-otomatit lekabel darkon bar tokef le-hamesh shanim. This sentence translates into English as: (4) According to the new emigration bill every so- viet citizen will have the automatic right to re- ceive a passport valid for five years. The Hebrew word 'hagira' is used for the two sub- senses 'emigration' and 'immigration'. In order to make the correct selection it is necessary to reason that since the soviet bill relates to soviet citizens then it concerns with leaving the country rather than entering it. 
Another kind of information source, which was originally raised in the psycholinguistic literature, relates to the relative frequencies of word senses and associations between word senses. These fac- tors were shown to play an important role in lexical retrieval, and were suggested as relevant for lexical disambiguation [4, 3]. Hanks [1], for example, lists different words associated with the two senses of the word 'bank', such as money, notes, account, invest- ment etc. versus river, swim, boat etc. Aiming for high performance in automatic disam- biguation, it is important to know (a) what is the portion of ambiguous cases in running text which can be resolved by each source of information and (b) how to set preferences among these sources when they provide contradicting evidence. 2 Statistical Information A tempting starting point for answering the above questions is to use various types of statistical data about word senses and evaluate their contribution to disambiguation. In recent years, statistical data were used successfully for other linguistic tasks. The process of acquiring statistical data is usually faster and more standard and objective than manual construction of knowledge. This makes such data suitable for the evaluation task we are confronted with. The following paragraphs describe the kinds of statistics we use and explain how they reflect dif- ferent types of disambiguating information. In another paper [2] we describe a new multilin- gual approach in which we gather statistics about senses of amhiguous words of one language using a corpus of a different language. For example, the different word associations for the two senses of 'bank' will be identified in a Hebrew corpus, where a distinct word is used for each of the senses. This method enabled us to collect statistics from very large corpora without manually tagging the oc- currences of the ambiguous words with their word senses. In our first experiment we have examined about one hundred examples of ambiguous Hebrew words which were selected randomly from foreign news sections in the Israeli press. For each sense of a Hebrew word we have collected statistics (in an English corpus) on its absolute frequency and its cooccurrences with other words that were syntacti- cally related with it in the example sentence. Two kinds of statistics were maintained. One statistic was the number of times in which the re- lated words were identified in the corpus having the same syntactic relation as in the example sentence. This kind of statistic reflects both sehctional restric- tions, like the relation between 'member' (versus 'friend') and 'a house of parliament', and also word associations, like the association between 'member' and 'reputation', which is stronger than the associ- ation between 'friend' and 'reputation'. In the first case we expect a null frequency for the semantically illegal alternative, while in the second case we ex- pect the difference in frequencies to represent the different degrees of association between the compet- ing alternatives and their surrounding context. In getting this syntactically based statistic we are of course limited by the coverage and the accuracy of the parser, thus getting smaller and somewhat noisy counts relative to the real counts in the corpus. A second and more robust statistic is obtained by counting the number of times in which the two words cooccurred within a limited distance [1]. 
For instance, the words 'member' and 'acquire' cooc- curred 81 times in the corpus within a maximal dis- tance of 7 words. This statistic is partly correlated with the first statistic, capturing also cases that were missed by the parser, but it also reflects lexical as- sociations between words that tend to cooccur ad- jacently without having a specific syntactic relation between them. For instance, in one of our examples the word 'hatsba'ah', which means either 'voting' or 'indication', cooccurred in the same sentence with the word 'bhirot' (elections). We expect that the adjacency statistic will indicate the strong associa- tion between 'voting' and 'elections', and thus would prefer 'voting' as the appropriate sense. The results reported in [2] together with further examination of our data have clearly indicated some interesting facts. In the vast majority of cases enough disambiguating information is provided by the immediate context, especially by syntactically related words. The absolute frequency of a word sense does not seem very useful, since it usually can he overridden successfully by the local context. An encouraging fact is that deep understanding of the text is rarely necessary, and seems to be required only for very delicate distinctions such as in exam- ple (3). In future work we intend to further analyze our data and test more examples, so that we can reach more decisive and quantitive conclusions. We believe that such conclusions will contribute to im- prove lexical disambiguation in broad coverage sys- tems. References [1] [2] Church, K. W., and Hanks, P., Word associa- tion norms, mutual information, and Lexicog- raphy, Computational Linguistics, vol. 16(1), 22-29 (1990). Dagan, Ido, Alon Ital and Ulrike Schwall, Two languages are more informative than one, sub- mitted to ACL-91. [3] [4] Meyer, D., Schvaneveldt, R. and Ruddy, M., Loci of contextual effects on visual word- recognition, in P. Rabbitt and S. Dornic (eds.), Attention and Performance V, Academic Press, New-York, 1975. Simpson, Greg B. and Curt Burgess, Implica- tions of lexical ambiguity resolution for word recognition, in Small, S. L., G. W. Cotrell and M. K. Tanenhaus, (eds.) Lexicai Ambiguity Res. olution, Morgan Kaufman Publishers, 1988. 342
Non-Literal Word Sense Identification Through Semantic Network Path Schemata
Eric Iverson, Stephen Helmreich
Computing Research Lab and Computer Science Department
Box 30001/3CRL, New Mexico State University, Las Cruces, NM 88003-0001

When computer programs disambiguate words in a sentence, they often encounter non-literal or novel usages not included in their lexicon. In a recent study, Georgia Green (personal communication) estimated that 17% to 20% of the content word senses encountered in various types of normal English text are not listed in the dictionary. While these novel word senses are generally valid, they occur in such great numbers, and with such little individual frequency, that it is impractical to explicitly include them all within the lexicon. Instead, mechanisms are needed which can derive novel senses from existing ones, thus allowing a program to recognize a significant set of potential word senses while keeping its lexicon within a reasonable size.

Spreading activation is a mechanism that allows us to do this. Here the program follows paths from existing word senses stored in a semantic network to other closely associated word senses. By examining the shape of the resultant path, we can determine the relationship between the senses contained in the path, thus deriving novel composite meanings not contained within any of the original lexical entries. This process is similar to the spreading activation and marker passing techniques of Hirst [1988], Charniak [1986], and Norvig [1989] and is embodied in the Prolog program metallel, based on Fass' program meta5 (Fass [1988]).

Metallel's lexicon is written as a series of sense frames, each containing information about a particular word sense. A sense frame can be broken into two main parts: genera and differentiae. Genera are the genus terms that function as the ancestors of a word sense. Differentiae denote the qualities that distinguish a particular sense from other senses of the same genus. Differentiae can be broken down into source and target, which hold, respectively, the preferences¹ and properties of a sense. Source contains differentiae information concerning another word sense. Target information concerns the sense itself. [Footnote 1: Preferences indicate the semantic category of the word sense that fills a specific semantic role with respect to the word sense being defined. For example, the transitive sense of the verb eat prefers an animate subject and an edible object. Violations of these preferences are indications of non-literal usage. (See Wilks and Fass [1990].)]

Connections can be found to other word senses in one of two ways: through an ancestor relationship (genus) or through a preference or property relationship (differentia). In the case of differentiae, it is necessary to extract the word senses from a higher order structure. For example, [it (n, 1), contain (v, 1), music (n, 1)] is not a word sense that is listed in the lexicon, while music (n, 1) is listed. It is therefore necessary to extract music (n, 1) from the larger differentia structure in which it occurs and add it to the path.

Not all paths are valid, indicating that some criteria of acceptability are needed during analysis. In addition, paths that are superficially different often end up being quite similar upon further analysis. Keeping this in mind, we have attempted to identify path schemata and associate them with types of non-literal usage. Specifically, we have concentrated on identifying instances of metaphor and metonymy.

A metaphorical path schema is one in which the preference of a verb and the actual target of the preference both reference different 3 place differentiae² which can be said to be related. [Footnote 2: A 3 place differentia is a list of senses following a [Subject, Verb, Object] format in which either the Subject or the Object consists of the sense it (n, 1).] Two 3 place differentiae are related if both their respective subjects and objects are identical or form a "sister" relationship³. Additionally, the two verbs of the differentiae, as well as the verb which generated the preference, must have a similar relationship.

The ship ploughed the waves.
  ship (n, 1) -anc-> watercraft (n, 1) -prop-> [it (n, 1), sail (v, 2), water (n, 2)]
  -link-> water (n, 2) -anc-> environment (n, 1) <-anc- soil (n, 1)
  <-link- [it (n, 1), plough (v, 2), soil (n, 1)] <-prop- plough (n, 1)
  <-inst- plough (v, 1) -obj-> soil (n, 1) -anc-> environment (n, 1)
  <-anc- water (n, 2) <-part- wave (n, 1)

For example, in the path for the sentence The ship ploughed the waves, [it (n, 1), sail (v, 2), water (n, 2)] and [it (n, 1), plough (v, 2), soil (n, 1)] are related because plough (v, 1), plough (v, 2) and sail (v, 2) are children of transfer (v, 1), and water (n, 2) and soil (n, 1) are children of environment (n, 1). Also, the pivot nodes⁴ for the instrument and object preferences of plough (v, 1) are both environment (n, 1), thereby indicating an even stronger relationship between the instrument and the object of the sentence. Thus, an analogy exists between ploughing soil and sailing water, suggesting a new sense of plough that combines aspects of both.

Denise drank the bottle.
  denise (n, 1) -anc-> woman (n, 1) -prop-> [sex (n, 1), [female (aj, 1)]]
  -link-> female (aj, 1) -obj-> animal (n, 1) <-agent- drink (v, 1)
  -obj-> drink (n, 1) -anc-> liquid (n, 1)
  <-link- [it (n, 1), contain (v, 1), liquid (n, 1)] <-prop- bottle (n, 1)

A metonymic path is indicated when a path is found from a target sense through one of its inherited differentiae, thus linking the original sense to a related sense through a property or preference relationship. For example, in the sentence Denise drank the bottle, one of the properties of bottle (n, 1) is [it (n, 1), contain (v, 1), liquid (n, 1)]. This differentia allows us to derive a novel metonymic word sense for bottle in which the bottle's contents are denoted rather than the bottle itself. Under metallel, any differentia can act as a conduit for a metonymy, thus facilitating the generation of novel metonymies as well as novel word senses.

By using semantic network path schemata to identify instances of non-literal usage, we have expanded the power of our program without doing so at the expense of a larger lexicon. In addition, by keeping our semantic relationship and path schema criteria at a general level, we hope to be able to cover a wide variety of different semantic taxonomies.

References
Charniak, E. 1986. A neat theory of marker passing. Procs. AAAI-86. Philadelphia, PA.
Fass, D. 1988. Collative Semantics: A Semantics for Natural Language Processing. Memoranda in Computer and Cognitive Science, MCCS-88-118. Computing Research Laboratory, New Mexico State University.
Hirst, G. 1988. Resolving lexical ambiguity computationally with spreading activation and polaroid words. In Small and Cottrell (eds.), Lexical Ambiguity Resolution, pp. 73-107. Morgan Kaufmann: San Mateo.
Norvig, P. 1989. Marker passing as a weak method for text inferencing. Cognitive Science 13(4): 569-620.
Wilks, Y., and D. Fass. 1990. Preference Semantics.
Memoranda in Computer and Cognitive Science, MCCS-90-194. Computing Research Laboratory, New Mexico State University.

[Footnote 4: A pivot node is a node with two incoming edges.]
AN ALGORITHM FOR PLAN RECOGNITION IN COLLABORATIVE DISCOURSE* Karen E. Lochbaum Aiken Computation Lab Harvard University 33 Oxford Street Cambridge, MA 02138 kel~harvard.harvard.edu ABSTRACT A model of plan recognition in discourse must be based on intended recognition, distinguish each agent's be- liefs and intentions from the other's, and avoid as- sumptions about the correctness or completeness of the agents' beliefs. In this paper, we present an algo- rithm for plan recognition that is based on the Shared- Plan model of collaboration (Grosz and Sidner, 1990; Lochbaum et al., 1990) and that satisfies these con- straints. INTRODUCTION To make sense of each other's utterances, conversa- tional participants must recognize the intentions be- hind those utterances. Thus, a model of intended plan recognition is an important component of a theory of discourse understanding. The model must distinguish each agent's beliefs and intentions from the other's and avoid assumptions about the correctness or complete- ness of the agents' beliefs. Early work on plan recognition in discourse, e.g. Allen & Perrault (1980); Sidner & Israel (1981), was based on work in AI planning systems, in particu- lar the STRIPS formalism (Fikes and Nilsson, 1971). However, as Pollack (1986) has argued, because these systems do not differentiate between the beliefs and intentions of the different conversational participants, they are insufficient for modelling discourse. Although Pollack proposes a model that does make this distinc- tion, her model has other shortcomings. In particular, it assumes a master/slave relationship between agents (Grosz and Sidner, 1990) and that the inferring agent has complete and accurate knowledge of domain ac- tions. In addition, like many earlier systems, it relies upon a set of heuristics to control the application of plan inference rules. In contrast, Kautz (1987; 1990) presented a theo- retical formalization of the plan recognition problem, *This research has been supported by U S WEST Ad- vanced Technologies and by a Bellcore Graduate Fellow- ship. and a corresponding algorithm, in which the only con- clusions that are drawn are those that are "absolutely justified." Although Kautz's work is quite elegant, it too has several deficiencies as a model of plan recogni- tion for discourse. In particular, it is a model of keyhole recognition m the inferring agent observes the actions of another agent without that second agent's knowl- edge -- rather than a model of intended recognition. Furthermore, both the inferring and performing agents are assumed to have complete and correct knowledge of the domain. In this paper, we present an algorithm for intended recognition that is based on the SharedPlan model of collaboration (Grosz and Sidner, 1990; Lochbaum et al., 1990) and that, as a result, overcomes the limita- tions of these previous models. We begin by briefly presenting the action representation used by the algo- rithm and then discussing the type of plan recogni- tion necessary for the construction of a SharedPlan. Next, we present the algorithm itself, and discuss an initial implementation. Finally, because Kautz's plan recognition Mgorithms are not necessarily tied to the assumptions made by his formal model, we directly compare our algorithm to his. ACTION P~EPRESENTATION We use the action representation formally defined by Balkanski (1990) for modelling collaborative actions. We use the term act-type to refer to a type of action; e.g. 
boiling water is an act-type that will be repre- sented by boil(water). In addition to types of actions, we also need to refer to the agents who will perform those actions and the time interval over which they will do so. We use the term activity to refer to this type of information1; e.g. Carol's boiling water over some time interval (tl) is an activity that will be represented by (boil(water),carol,tl). Throughout the rest of this paper, we will follow the convention of denoting ar- bitrary activities using uppercase Greek letters, while using lowercase Greek letters to denote act-types. In 1This terminology supersedes that used in (Lochbaum et al., 1990). 33 Relations Constructors Act-type Activity CGEN(71,72,C) CENABLES(7~,~f2,C) sequence(v1 ,...,Tn) simult(71 .... ,7-) conjoined(v1 ,.-.,7n) iteration(AX.v[XJ,{X1,...Xn}) GEN(r,,r~) ENABLES(FI,r2) g(rl ..... r,) I(Ax.rixl,iX~,...x,}) Table 1: Act-type/Activity Relations and Constructors defined by Balkanski (1990) addition, lowercase letters denote the act-type of the activity represented by the corresponding uppercase letter, e.g. 7 -- act-type(F). Balkanski also defines act-type and activity con- structors and relations; e.g. sequence(boil(water), add(noodles,water)) represents the sequence of doing an act of type boil(water) followed by an act of type add(noodles,water), while CGEN(mix(sauce,noodles), make(pasta_dish),C) represents that the first act-type conditionally generates the second (Goldman, 1970; Pollack, 1986). Table 1 lists the act-type and corre- sponding activity relations and constructors that will be used in this paper. Act-type constructors and relations are used in specifying recipes. Following Pollack (1990), we use the term recipe to refer to what an agent knows when the agent knows a way of doing something. As an example, a particular agent's recipe for lift- ing a piano might be CGEN(simult(lift(foot(piano)), lift(keyboard(piano))), lift(piano), AG.[IGI=2]); this recipe encodes that simultaneously lifting the foot- and keyboard ends of a piano results in lifting the piano, provided that there are two agents doing the lifting. For ease of presentation, we will sometimes represent recipes graphicMly using different types of arrows to represent specific act-type relations and constructors. Figure 1 contains the graphical presentation of the pi- ano lifting recipe. lift(pi~o) ]" AG.[IGI-= 2] simult (lift (foot (piano)),lift (keyboaxd(piano))) c, / \c2 lift(foot(piano)) lift (keyboaxd (piano)) TC indicates generation subject to the condition C c~/indicates constituent i of a complex act-type Figure 1: A recipe for lifting a piano THE SHAREDPLAN AUGMENTATION ALGORITHM A previous paper (Lochbaum et hi., 1990) describes an augmentation algorithm based on Grosz and Sid- ner's SharedPlan model of collaboration (Grosz and Sidner, 1990) that delineates the ways in which an agent's beliefs are affected by utterances made in the context of collaboration. A portion of that algorithm is repeated in Figure 2. In the discussion that follows, we will assume the context specified by the algorithm. SharedPlan*(G1,G2,A,T1,T2) represents that G1 and G2 have a partial SharedPlan at time T1 to perform act-type A at time T2 (Grosz and Sidner, 1990). Assume: Act is an action of type 7, G~ designates the agent who communicates Prop(Act), Gj designates the agent being modelled i, j E {1,2}, i ~ j, SharedPlan*(G1 ,G~,A,T1,T2). 4. 
Search own beliefs for Contributes(7,A) and where pos- sible, more specific information as to how 7 contributes to A. Figure 2: The SharedPlan Augmentation Algorithm Step (4) of this algorithm is closely related to the standard plan recognition problem. In this step, agent Gj is trying to determine why agent G~ has mentioned an act of type 7, i.e. Gj is trying to identify the role Gi believes 7 will play in their SharedPlan. In our previous work, we did not specify the details of how this reasoning was modelled. In this paper, we present an algorithm that does so. The algorithm uses a new construct: augmented rgraphs. AUGMENTED RGRAPH CONSTRUCTION Agents Gi and Gj each bring to their collaboration pri- vate beliefs about how to perform types of actions, i.e. recipes for those actions. As they collaborate, a signifi- cant portion of their communication is concerned with deciding upon the types of actions that need to be per- formed and how those actions are related. Thus, they establish mutual belief in a recipe for action s. In ad- dition, however, the agents must also determine which 2Agents do not necessarily discuss actions in a fixed or- der (e.g. the order in which they appear in a recipe). Con- sequently, our algorithm is not constrained to reasoning about actions in a fixed order. 34 agents will perform each action and the time inter- val over which they will do so, in accordance with the agency and timing constraints specified by their evolv- ing jointly-held recipe. To model an agent's reasoning in this collaborative situation, we introduce a dynamic representation called an augmented recipe graph. The construction of an augmented recipe graph corresponds to the reasoning that an agent performs to determine whether or not the performance of a particular activ- ity makes sense in terms of the agent's recipes and the evolving SharedPlan. Augmented recipe graphs are comprised of two parts, a recipe graph or rgraph, representing activities and relations among them, and a set of constraints, representing conditions on the agents and times of those activities. An rgraph corresponds to a partic- ular specification of a recipe. Whereas a recipe rep- resents information about the performance, in the ab- stract, of act-types, an rgraph represents more spe- cialized information by including act-type performance agents and times. An rgraph is a tree-like representa- tion comprised of (1) nodes, representing activities and (2) links between nodes, representing activity relations. The structure of an rgraph mirrors the structure of the recipe to which it corresponds: each activity and ac- tivity relation in an rgraph is derived from the corre- sponding act-type and act-type relation in its associ- ated recipe, based on the correspondences in Table 1. Because the constructors and relations used in specify- ing recipes may impose agency and timing constraints on the successful performance of act-types, the rgraph representation is augmented by a set of constraints. Following Kautz, we will use the term explaining to refer to the process of creating an augmented rgraph. AUGMENTED RGRAPH SCHEMAS To describe the explanation process, we will assume that agents Gi and Gj are collaborating to achieve an act-type A and Gi communicates a proposition from which an activity F can be derived 3 (cf. the assump- tions of Figure 2). Gj's reasoning in this context is modelled by building an augmented rgraph that ex- plains how F might be related to A. 
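To make the action representation concrete, the following Python sketch encodes act-types, activities, and the two recipe constructs used above, together with the piano-lifting recipe of Figure 1. The class and field names are my own, not Balkanski's definitions, and only the constructors needed for this example are shown; the comments note the act-type/activity correspondences of Table 1 (CGEN/GEN, CENABLES/ENABLES, the complex constructors mapping to K, iteration mapping to I).

```python
from dataclasses import dataclass
from typing import Optional

# Act-types are abstract descriptions of actions, e.g. lift(piano).
@dataclass(frozen=True)
class ActType:
    name: str                 # e.g. "lift"
    args: tuple = ()          # e.g. ("piano",)

# An activity pairs an act-type with performance agents and a time interval;
# either slot may still be an unbound variable during recognition.
@dataclass
class Activity:
    act_type: ActType
    agents: Optional[frozenset] = None    # e.g. frozenset({"joe"})
    time: Optional[str] = None            # e.g. "t1"

# Recipe constructors/relations over act-types.  Their activity-level
# counterparts (cf. Table 1) are GEN for CGEN, ENABLES for CENABLES,
# K for sequence/simult/conjoined, and I for iteration.
@dataclass
class Simult:
    constituents: tuple       # act-types that must share one time interval

@dataclass
class CGen:
    source: object            # an act-type or a complex constructor
    result: ActType
    condition: str            # constraint under which generation holds

# The piano-lifting recipe of Figure 1.
lift_piano = ActType("lift", ("piano",))
lift_foot = ActType("lift", ("foot(piano)",))
lift_keyboard = ActType("lift", ("keyboard(piano)",))

piano_recipe = CGen(
    source=Simult((lift_foot, lift_keyboard)),
    result=lift_piano,
    condition="|G| = 2",      # only a two-agent group generates lift(piano)
)
```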
This representa- tion is constructed by searching each of Gj's recipes for A to find a sequence of relations and constructors link- ing 7 to A. Augmented rgraphs are constructed during this search by creating appropriate nodes and links as each act-type and relation in a recipe is encountered. By considering each type of relation and construc- tor that may appear in a recipe, we can specify gen- eral schemas expressing the form that the correspond- ing augmented rgraph must take. Table 2 contains the schemas for each of the act-type relations and 3F need not include a complete agent or time specifica- tion. constructors 4. The algorithm for explaining an activity F according to a particular recipe for A thus consists of consider- ing in turn each relation and constructor in the recipe linking 7 and A and using the appropriate schema to incrementally build an augmented rgraph.. Each schema specifies an rgraph portion to create and the constraints to associate with that rgraph. If agent G/ knows multiple recipes for A, then the algorithm attempts to create an augmented rgraph from each recipe. Those augmented rgraphs that are successfully created are maintained as possible explanations for F until more information becomes available; they repre- sent Gj's current beliefs about Gi's possible beliefs. If at any time the set of constraints associated with an augmented rgraph becomes unsatisfiable, a failure occurs: the constraints stipulated by the recipe are not met by the activities in the corresponding rgraph. This failure corresponds to a discrepancy between agent Gj's beliefs and those Gj has attributed to agent G~. On the basis of such a discrepancy, agent G i might query Gi, or might first consider the other recipes that she knows for A (i.e. in an attempt to produce a suc- cessful explanation using another recipe). The algo- rithm follows the latter course of action. When a recipe does not provide an explanation for F, it is eliminated from consideration and the algorithm continues look- ing for "valid" recipes. To illustrate the algorithm, we will consider the reasoning done by agent Pare in the dialogue in Figure 3; we assume that Pam knows the recipe given in Figure 1. To begin, we consider the ac- tivity derived from utterance (3) of this discourse: F1 =(lift(foot(piano)), {joe},tl), where tl is the time in- terval over which the agents will lift the piano. To ex- plain F1, the algorithm creates the augmented rgraph shown in Figure 4. It begins by considering the other act-types in the recipe to which 7x=lift(foot(piano))is related. Because 71 is a component of a simultaneous act-type, the simult schema is used to create nodes N1, N2, and the link between them. A constraint of this schema is that the constituents of the complex activ- ity represented by node N2 have the same time. This constraint is modelled directly in the rgraph by creat- ing the activity corresponding to lift(keyboard(piano)) to have the same time as F1. No information about the agent of this activity is known, however, so a vari- able, G1, is used to represent the agent. Next, because the simultaneous act-type is related by a CGEN rela- tion to lift(piano), the CGEN schema is used to create node N3 and the link between N2 and N3. The first two constraints of the schema are satisfied by creating node N3 such that its activity's agent and time are the 4The technicM report (Lochbaum, 1991) contains a more detailed discussion of the derivation of these schemas from the definitions given by Balkanski (1990). 
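The search just described, over each of Gj's recipes for A, can be sketched as a simple loop. The helpers build_rgraph and satisfiable are assumptions: the former stands in for applying the schemas of Table 2 to one recipe, the latter for the constraint check, and neither name comes from the paper.

```python
def explain(activity, goal_act_type, recipes, build_rgraph, satisfiable):
    """Collect augmented rgraphs that explain `activity` as contributing to
    `goal_act_type`, one per recipe that yields a satisfiable explanation.

    `recipes` maps an act-type to the agent's known recipes for it;
    `build_rgraph` applies the rgraph schemas to one recipe and returns
    (rgraph, constraints) or (None, None); `satisfiable` tests a constraint
    set.  Both helpers are assumed, not defined here.
    """
    explanations = []
    for recipe in recipes.get(goal_act_type, []):
        # Follow the chain of relations/constructors linking the activity's
        # act-type to the goal, creating nodes and links and collecting the
        # constraints each schema imposes.
        rgraph, constraints = build_rgraph(recipe, activity)
        if rgraph is None:
            continue          # this recipe does not mention the act-type
        if not satisfiable(constraints):
            # The recipe's constraints are not met by the activity: a
            # discrepancy between Gj's beliefs and those ascribed to Gi,
            # so this recipe is set aside.
            continue
        explanations.append((rgraph, constraints))
    # Surviving augmented rgraphs are kept as candidate explanations until
    # later utterances disambiguate among them.
    return explanations
```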
35 Recipe Augmented Rgraph Rgraph Constraints CGEN(7, 6,C) CENABLES(7, 6,C) sequence(71,72,...7-) conjoined(71,72, ...7-) simult (71,72, ...7,) iteration(AX.7[X], {Xa, X2, ...X,}) (6,G,T) T GEN r (8, G,T) ~r ENABLES r K(rl, r2, ..., r,)=A I ci r~ K(rl, r2 .... r,)=A J ci ri K(ra, r2, :..r,)=A I cl r~ I(AX.r[x], {X~, ...X~})=A I ci [xx.rixllx~ G=agent(r) T=time(r) HOLDS'(C,G,T) HOLDS'(C,agent(r),time(r)) BEFORE(time(F),T) Yj BEFORE(time(r)),time(rj+l)) agent(A)=Ujagent(rj) time(A)=cover_interval({time(rj )})~. agent(A)=Ujagent(rj) time(A)=coverAnterval({ time(r) ) )) Yj time(r3)=time(rj+,) agent (A)=~jj agent (r,) time(A)=coverAnterval({time(rj )}) agent(A)=agent(r) time(A)=time(r) Table 2: Rgraph Schemas same as node N2's. The third constraint is instantiated and associated with the rgraph. (1) Joe: I want to lift the piano. (2) Pare: OK. (3) Joe: On the count of three, I'll pick up this [deictic to foot] end, (4) and you pick up that [deictic to keyboard] end. (5) Pam: OK. (6) Joe: One, two, three! Figure 3: A sample discourse Rgraph: NS:{lift(piano),{joe} v G 3,tl) 1" GEN N2:K({lift(foot(pitmo)),{joe},t 1},0ift(keyboard(piano)),G1 ,t 1}) I cl N 1: 0ift (foot (piano)),{joe } #1} ConBtrainta: {HOLDS'(AG.[[G I -- 2],{joe} u Gl,tl)} Figure 4: Augmented rgraph explaining (lift(foot(pi- ano)),{joe},tl) MERGING AUGMENTED RGRAPHS As discussed thus far, the construction algorithm pro- duces an explanation for how an activity r is related to a goal A. However, to properly model collaboration, one must also take into account the context of previ- ously discussed activities. Thus, we now address how the algorithm explains an activity r in this context. Because Gi and Gj are collaborating, it is appropri- ate for Gj to assume that any activity mentioned by Gi is part of doing A (or at least that Gi believes that it is). If this is not the case, then Gi must explicitly indicate that to Gj (Grosz and Sidner, 1990). Given this assumption, Gj's task is to produce a coherent ex- planation, based upon her recipes, for how all of the activities that she and Gi discuss are related to A. We incorporate this model of Gj's task into the algo- rithm by requiring that each recipe have at most one corresponding augmented rgraph, and implement this restriction as follows: whenever an rgraph node corre- sponding to a particular act-type in a recipe is created, the construction algorithm checks to see whether there is Mready another node (in a previously constructed rgraph) corresponding to that act-type. If so, the al- gorithm tries to merge the augmented rgraph currently under construction with the previous one, in part by merging these two nodes. In so doing, it combines the information contained in the separate explanations. The processing of utterance (4) in the sample di- Mogue illustrates this procedure. The activity de- rived from utterance (4) is r2=(lifl(keyboard(piano)), {pare}, tl). The initial augmented rgraph portion cre- ated in explaining this activity is shown in Figure 5. Node N5 of the rgraph corresponds to the act- type simult(lifl(foot(piano)),lift(keyboard(piano))) and includes information derived from r2. But the rgraph (in Figure 4) previously constructed in explaining rl also includes a node, N2, corresponding to this act-type (and containing information derived from rl). 
Rather than continuing with an independent explanation for r2, the algorithm attempts to combine the information 5The function cover_interval takes a set of time intervals as an argument and returns a time interval spanning the set (Balkanski, 1990). from the two activities by merging their augmented rgraphs. Rgraph: NS:K((lift(foot(piano)),G2,t 1),(lift(keyboard(piano)),{pam} ,tl)) I c2 N4:(lift (keyboard(piano)),{pam} ,tl) Constraints:{} Figure 5: Augmented rgraph partially explaining (lift(keyboard(piano)) ,{pain} ,tl) Two augmented rgraphs are merged by first merg- ing their rgraphs at the two nodes corresponding to the same act-type (e.g. nodes N5 and N2), and then merging their constraints. Two nodes are merged by unifying the activities they represent. If this unifica- tion is successful, then the two sets of constraints are merged by taking their union and adding to the result- ing set the equality constraints expressing the bindings used in the unification. If this new set of constraints is satisfiable, then the bindings used in the unification are applied to the remainder of the two rgraphs. Oth- erwise, the algorithm fails: the activities represented in the two rgraphs are not compatible. In this case, be- cause the recipe corresponding to the rgraphs does not provide an explanation for all of the activities discussed by the agents, it is removed from further consideration. The augmented rgraph resulting from merging the two augmented rgraphs in Figures 4 and Figure 5 is shown in Figure 6. Rgraph: N3:{lift (piano),{joe,pam} ,tl) T GEN N2:K((lift (foot (piano)),{joe} ,tl),(lift(keyboard(piano)),{pam} ,tl)) / ¢1 \ ¢2 N1 :(lift(foot(piano)),{joe},t 1) N4:(lift(keyboard(piano)),{pam},t 1 ) Constraints: {HOLDS'(AG.IlG I = 2],{joe} Lt Gl,tl), Gl={pam}} Figure 6: Augmented rgraph resulting from merging the augmented rgraphs in Figures 4 and 5 IMPLEMENTATION An implementation of the algorithm is currently un- derway using the constraint logic programming lan- guage, CLP(7~) (Jaffar and Lassez, 1987; Jaffar and Miehaylov, 1987). Syntactically, this language is very similar to Prolog, except that constraints on real- valued variables may be intermixed with literals in rules and goals. Semantically, CLP(~) is a generaliza- tion of Prolog in which unifiability is replaced by solv- ability of constraints. For example, in Prolog, the pred- icate X < 3 fails if X is uninstantiated. In CLP(~), however, X < 3 is a constraint, which is solvable if there exists a substitution for X that makes it true. Because many of the augmented rgraph constraints are relations over real-valued variables (e.g. the time of one activity must be before the time of another), CLP(T~) is a very appealing language in which to im- plement the augmented rgraph construction process. The algorithm for implementing this process in a logic programming language, however, differs markedly from the intuitive algorithm described in this paper. RGRAPHS AND CONSTRAINTS VS. EGRAPHS Kautz (1987) presented several graph-based algorithms derived from his formal model of plan recognition. In Kautz's algorithms, an explanation for an observation is represented in the form of an explanation graph or egraph. Although the term rgraph was chosen to par- allel Kautz's terminology, the two representations and algorithms are quite different in scope. 
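A minimal sketch of the merge step described above, under the assumptions noted in the comments, might look as follows. Here unify_activities and satisfiable stand in for the unification and constraint-solving machinery (the latter is what CLP(R) supplies in the implementation), and all names are mine rather than the system's.

```python
from collections import namedtuple

AugmentedRgraph = namedtuple("AugmentedRgraph", ["nodes", "constraints"])

def merge(rg_a, rg_b, node_a, node_b, unify_activities, satisfiable):
    """Merge two augmented rgraphs at nodes corresponding to the same
    act-type in the underlying recipe, or return None on failure.

    Nodes are assumed to expose an `.activity` field and a `.substitute`
    method; `unify_activities` returns a binding dict or None; `satisfiable`
    tests a constraint set.
    """
    bindings = unify_activities(node_a.activity, node_b.activity)
    if bindings is None:
        return None                      # the two activities are incompatible
    # Union the two constraint sets, adding equality constraints for the
    # bindings introduced by unification.
    constraints = set(rg_a.constraints) | set(rg_b.constraints)
    constraints |= {("=", var, val) for var, val in bindings.items()}
    if not satisfiable(constraints):
        # This recipe cannot explain all the activities discussed, so it is
        # removed from further consideration.
        return None
    # Apply the bindings to the remaining nodes of both rgraphs.
    nodes = [n.substitute(bindings)
             for n in list(rg_a.nodes) + list(rg_b.nodes)]
    return AugmentedRgraph(nodes=nodes, constraints=constraints)
```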
Two capabilities that an algorithm for plan recog- nition in collaborative discourse must possess are the abilities to represent joint actions of multiple agents and to reason about hypothetical actions. In addition, such an algorithm may, and for efficiency should, ex- ploit assumptions of the communicative situation. The augmented rgraph representation and algorithm meet these qualifications, whereas the egraph representation and algorithms do not. The underlying action representation used in r- graphs is capable of representing complex relations among acts, including simultaneity and sequentiality. In addition, relations among the agents and times of acts may also be expressed. The action representation used in egraphs is, like that in STRIPS, simple step de- composition. Though it is possible to represent simul- taneous or sequential actions, the egraph representa- tion can only model such actions if they are performed by the same agent. This restriction is in keeping with Kautz's model of keyhole recognition, but is insuffi- cient for modelling intended recognition in multiagent settings. Rgraphs are only a part of our representation. Aug- mented rgraphs also include constraints on the activ- ities represented in the rgraph. Kautz does not have such an extended representation. Although he uses constraints to guide egraph construction, because they are not part of his representation, his algorithm can only check their satisfaction locally. In contrast, by col- lecting together all of the constraints introduced by the different relations or constructors in a recipe, we can exploit interactions among them to determine unsat- isfiability earlier than an algorithm which checks con- straints locally. Kautz's algorithm checks each event's constraints independently and hence cannot determine satisfiability until a constraint is ground; it cannot, for example, reason that one constraint makes another un- satisfiable. Because agents involved in collaboration dedicate a significant portion of their time to discussing the ac- tions they need to perform, an algorithm for rood- 37 elling plan recognition in discourse must model rea- soning about hypothetical and only partially specified activities. Because the augmented rgraph representa- tion allows variables to stand for agents and times in both activities and constraints, it meets this criteria. Kautz's algorithm, however, models reasoning about actual event occurrences. Consequently, the egraph representation does not include a means of referring to indefinite specifications. In modelling collaboration, unless explicitly indi- cated otherwise, it is appropriate to assume that all acts are related. In the augmented rgraph construction algorithm, we exploit this by restricting the reasoning done by the algorithm to recipes for A, and by combin- ing explanations for acts as soon as possible. Kautz's algorithm, however, because it is based on a model of keyhole recognition, does not and cannot make use of this assumption. Upon each observation, an indepen- dent egraph must be created explaining all possible uses of the observed action. Various hypotheses are then drawn and maintained as to how the action might be related to other observed actions. CONCLUSIONS ~ FUTURE DIRECTIONS To achieve their joint goal, collaborating agents must have mutual beliefs about the types of actions they will perform to achieve that goal, the relations among those actions, the agents who will perform the actions, and the time interval over which they will do so. 
In this paper, we have presented a representation, augmented rgraphs, modelling this information and have provided an algorithm for constructing and reasoning with it. The steps of the construction algorithm parallel the reasoning that an agent performs in determining the relevance of an activity. The algorithm does not re- quire that activities be discussed in a fixed order and allows for reasoning about hypothetical or only par- tially specified activities. Future work includes: (1) adding other types of con- straints (e.g. restrictions on the parameters of actions) to the representation; (2) using the augmented rgraph representation in identifying, on the basis of unsatisfi- able constraints, particular discrepancies in the agents' beliefs; (3) identifying information conveyed in Gi's utterances as to how he believes two acts are related (Balkanski, 1991) and incorporating that information into our model of Gj's reasoning. ACKNOWLEDGMENTS I would like to thank Cecile Balkanski, Barbara Grosz, Stuart Shieber, and Candy Sidner for many helpful discussions and comments on the research presented in this paper. REFERENCES Allen, J. and Perrault, C. 1980. Analyzing intention in utterances. Artificial Intelligence, 15(3):143-178. Balkanski, C. T. 1990. Modelling act-type relations in collaborative activity. Technical Report TR-23- 90, Harvard University. Balkanski, C. T. 1991. Logical form of complex sen- tences in task-oriented dialogues. In Proceedings of the 29th Annual Meeting of the ACL, Student Ses- sion, Berkeley, CA. Fikes, R. E. and Nilsson, N. J. 1971. STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2:189-208. Goldman, A. I. 1970. A Theory Of Human Action. Princeton University Press. Grosz, B. and Sidner, C. 1990. Plans for discourse. In Cohen, P., Morgan, J., and Pollack, M., editors, Intentions in Communication. MIT Press. Jaffar, J. and Lassez, J.-L. 1987. Constraint logic programming. In Proceedings of the 14th ACM Symposium on the Principles of Programming Lan- guages, pages 111-119, Munich. Jaffar, J. and Michaylov, S. 1987. Methodology and implementation of a CLP system. In Proceedings of the .~th International Conference on Logic Program- ming, pages 196-218, Melbourne. MIT Press. Kautz, H. A. 1987. A Formal Theory of Plan Recog- nition. PhD thesis, University of Rochester. Kautz, H. A. 1990. A circumscriptive theory of plan recognition. In Cohen, P., Morgan, J., and Pollack, M., editors, Intentions in Communication. MIT Press. Lochbaum, K. E., Grosz, B. J., and Sidner, C. L. 1990. Models of plans to support communica- tion: An initial report. In Proceedings of AAAI-90, Boston, MA. Lochbaum, K. E. 1991. Plan recognition in collabo- rative discourse. Technical report, Harvard Univer- sity. Pollack, M. E. June 1986. A model of plan inference that distinguishes between the beliefs of actors and observers. In Proceedings of the 2~th Annual Meeting of the ACL. Pollack, M. E. 1990. Plans as complex mental at- titudes. In Cohen, P., Morgan, J., and Pollack, M., editors, Intentions in Communication. MIT Press. Sidner, C. and Israel, D. J. 1981. Recognizing in- tended meaning and speakers' plans. In Proceedings of IJCAI-81. 38
Collaborating on Referring Expressions Peter A. Heeman Department of Computer Science University of Toronto Toronto, Canada, M5S 1A4 [email protected] Abstract This paper presents a computational model of how conversational participants collaborate in making re- ferring expressions. The model is based on the plan- ning paradigm. It employs plans for constructing and recognizing referring expressions and meta-plans for constructing and recognizing clarifications. This al- lows the model to account for the generation and un- derstanding both of referring expressions and of their clarifications in a uniform framework using a single knowledge base. I, Introduction In the dialogue below 1, person A wants to refer to some object and have person B identify it. Person A does this by uttering a referring expression; however, A's expression fails to allow B to uniquely identify the object. Person B then tries to clarify A's referring expression by expanding it. A rejects B's clarification and replaces it, which al- lows B to identify the referent of the refashioned referring expression. A: 1 See the weird creature B: 2 In the corner? A: 3 No, on the television B: 4 Okay. This paper presents a computation model of Clark and Wilkes-Gibbs's work on how conversational partici- pants collaborate in forming referring expressions [2]. Our model takes the role of one of the participants, either the participant who initiates the referring expression, the ini- tiator, or the one who is trying to identify the referent, the responder. It accounts for how the initiator constructs the initial referring expressions and how she and the responder then collaborate in clarifying the referring expression until it is acceptable. Each step of the collaboration consists of a clarification of the referring expression and a subsequent understanding of the clarification. This work is based on the planning paradigm. The knowledge that is needed to choose the content of a refer- ring expression is encoded in plans. This allows an agent to use the same knowledge base for both constructing and recognizing initial referring expressions. Furthermore, the I This example is a simplified version of [6] S.2.4a (1-8). knowledge needed to clarify a referring expression is en- coded as plans. These are meta-plans that take an instan- tiated plan corresponding to a referring expression as a parameter. The meta-plans reason about the failed con- straints or effects of the instantiated plan in order to clarify it. These repairs can subsequently be understood by per- forming plan recognition. This approach allows the entire collaborative process to be expressed in a uniform frame- work with a single knowledge base. II. Referring as Action Plans encode a relationship between goals and the prim- itive actions that will accomplish these goals. Hence, a set of primitive actions is needed that is relevant in the domain of referring expressions [1]. We use the primitive actions s-refer and s-attr. S-refer is performed by the initiator to signal to the responder that she is referring to an object, and that she intends him to identify the object. S-attr ascribes some attribute to an object, for instance its category, color, or shape. III. Initial Referring Expression Constructing: When an initiator wants to refer to an object, she can do so by constructing a refer plan. This plan consists of two steps, the action s-refer, mentioned above, and the subplan describe. 
Describe, through its subplans headnoun and modifiers, constructs a descrip- tion of the object that is intended to allow the responder to identify the object. Headnoun decomposes into an s-attr action that ascribes to the object the head noun chosen by the constraints of the plan. The modifiers plan is more complicated. Through its constraints, it ensures that the referring expression is believed to allow the responder to uniquely identify the object. The modifiers plan achieves this by decomposing into the modifier plan a variable number of times (through recursion). Each instance of the modifier plan constructs an individual component of the description, such as the object's color, shape, or location (through an s-attr action). Recognizing: The responder, after hearing the initial referring expression, tries to recognize the intention behind the initiator's utterance. Starting with the set of primi- tive actions that he observed, the responder employs plan 345 recognition to determine a plan that accounts for them. This process will lead him to ascribe the refer plan to the initiator, including the intention for the responder to iden- tify the referent of the description. Plan recognition, by analyzing the constraints and effects of the inferred plan, lets the responder attempt to identify the referent of the description. There are two reasons why the responder might be unable to identify the referent. Either the responder is unable to find any objects that satisfy the referring ex- pression or he is able to find more than one that satisfies it. This situation might arise if the initiator and respon- der have different states of knowledge or belief about the world. For instance, in the dialogue above the responder might think that several objects are "weird". The con- straint or effect that was violated in the inferred plan is noted by the plan recognizer, and this knowledge is used to repair the plan. This approach is motivated by Pollack's treatment of ill-formed domain plans [5]. IV. Clarifications Constructing: If the responder was unsuccessful at in- ferring the referent of the referring expression, he will plan to inform the initiator that her referring expression was not successful. As Clark and Wilkes-Gibbs [2] point out, the responder will try to refashion the referring expression in order to minimize the collaborative effort, and hence he will prefer to replace or expand the referring expression rather than just rejecting it or postponing the decision. The responder has several different clarification plans [4] at his disposal and they take as a parameter the inferred plan corresponding to the referring expression. These plans correspond to Clark and Wilkes-Gibbs's analysis of the repair process. One of these plans is rej ect-replace. This plan rejects the step of the inferred referring expres- sion plan that has a constraint violation and replaces it by a similar step but with the violated constraint relaxed (relaxing a description is due to [3]). A second plan is postpone-expemd, which is used to further qualify a refer- ring expression that a participant found to match several objects. This plan is used by the responder in (2) in the dialogue above. Recognizing: If the responder clarifies the referring expression, the initiator will have to infer that the respon- der is unable to identify the referent of the expression. Furthermore, the initiator must determine how the clarifi- cation will affect the underlying referring expression. 
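One way to picture the plan library behind this model is as a set of plan schemas plus clarification meta-plans that take the inferred referring plan as a parameter. The sketch below is an illustrative rendering in Python, not the system's Prolog encoding; the slot names and predicate spellings are assumptions.

```python
# The refer plan: an s-refer action plus a describe subplan whose modifier
# steps must yield a description the responder can uniquely resolve.
refer_plan = {
    "action": "refer(Initiator, Responder, Object)",
    "steps": ["s-refer(Initiator, Responder, Object)",
              "describe(Initiator, Responder, Object)"],
}

describe_plan = {
    "action": "describe(Initiator, Responder, Object)",
    "steps": ["headnoun(Object)", "modifiers(Object)"],
    "constraint": "believe(Initiator, uniquely_identifies(Description, Object))",
}

# Clarification meta-plans take the inferred referring plan as a parameter.
reject_replace = {
    "action": "reject-replace(Agent, ReferPlan)",
    "applicability": "some step of ReferPlan has a violated constraint",
    "body": "replace that step by a similar one with the constraint relaxed",
}

postpone_expand = {
    "action": "postpone-expand(Agent, ReferPlan)",
    "applicability": "the description matches more than one candidate object",
    "body": "add a further qualifying modifier to the description",
}
```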
The responder might have rejected or postponed his decision, as well as proposed a correction to the underlying refer- ring expression by replacing or expanding it. Following Litman's work on understanding clarification subdialogues [4], this process is achieved through plan recognition. Continuing On: Clarification subdialogues might ex- tend beyond the responder's clarification of the initial re- ferring expression. For instance, in the above dialogue, af- ter the initiator inferred the responder's clarification, she found the resulting referring expression plan ill-formed. Hence, she constructed a subsequent clarification--"No, on the television". Then, the responder had to infer this clar- ification. In general, this process will continue until both participants accept the referring expression. The analysis involved with these subsequent turns of the dialogue is sim- ilar to the analysis given in the preceding two subsections. There may be differences between how the initiator and responder clarify a referring expression, since the initia- tor knows the identity of the referent. Also, there may be differences between a clarification following the initial re- ferring expression and one following another clarification, since, in the latter case, the referring expression may have already been partially accepted. V. Belief Revision As was mentioned earlier, the initiator and responder might have different states of knowledge or belief about the world, and these differences will be a cause of clarifica- tion subdialogues. In the process of collaborating to make referring expressions, these differences in belief will arise in the replacements and expansions that the two participants propose. Hence, they will need a way of resolving their differences in beliefs about the world if they are to both accept the referring expression. Hence the model proposed in this paper will need to incorporate belief revision. VI. Conclusion This paper has presented a computational model of how conversational participants collaborate in making referring expressions. However, it is hoped that the ideas presented in this paper are of relevance to a much larger range of collaborative processes. The work outlined in this paper is in progress. At present, a computer system has been implemented in Pro- log that can construct and recognize initial referring ex- pressions, and that can construct clarifications. In terms of the dialogue above, the system can model both the ini- tiator and responder for the first line and can model the responder for the second. References [1] D. E. Appelt. Planning English referring expressions. Ar~ificlal Intelligence, 26(1):1-33, April 1985. [2] H. H. Clark and D. Wilkes-Gibbs. Referring as a collaborative process. Cognition, 22:1-39, 1986. [3] B. A. Goodman. Repvh'ing reference identification failures by relaxation. In Proceedings o] the ~3 rd Annual Meeting o] the Association ]or Computational Linguistics, pages 204-217, 1985. [4] D. J. Litman and J. F. Allen. A plan recognition model for subdlalogues in conversations. Cognitive Science, 11(2):16,3-200, April-June 1987. [5] M. E. Pollack. Inferring domain plans in question-answerlng. Technical Note 403, SRI Interx~tional, 1986. [{3] J. Svartvik and R. Quirk. A Corpus o] English Conversation. Ltmd Studies in English. 56. C.W.K. Gleerup, Lund, 1980. 346
Conceptual Revision for Natural Language Generation Ben E. Cline Department of Computer Science Blacksburg, VA 24061 [email protected] Traditional natural language generation systems are based on a pipelined architecture. A conceptual compo- nent selects items from a knowledge base and orders them into a message to address some discourse goal. This mes- sage is passed to the stylistic component that makes lex- ical and syntactic choices to produce a natural language surface text. By contrast, humans producing formal text typically create drafts which they polish through revision [Hayes and Flower 1980]. One proposal for improving the quality of computer-generated multisentential text is to incorporate a draft-and-revision paradigm. Some researchers have suggested that revision in gener- ation systems should only affect stylistic elements of the text [Vaughan and McDonald 1986]. But human writers also engage in conceptual revision, and there is reason to believe that techniques for conceptual revision should also be useful for a generation system producing formal text. Yazdani [1987] argues that both stylistic and conceptual revisions are useful. This paper extends those arguments and provides further evidence for the usefulness of con- ceptual as well as stylistic revision. We present strategies for identifying situations applicable to conceptual revision and techniques for effecting the revision. Why is revision important for a natural language gener- ation system? First, Hayes and Flower suggest that revi- sion reduces the cognitive strain of an author by postpon- ing the need to make some decisions while concentrating on others. A generation system can reduce complexity in the same way. By using revision, generation modules can be simpler. Second, inspection of surface text is necessary to determine whether the generated text is ambiguous. Ambiguities result not only from the words used at the surface level but from their relationships to other words in the text. To detect ambiguities, the surface text must be read. If that process reveals ambiguity, the text can be regenerated using different words or syntax. A revision component is the ideal location for reading generated text and identifying ambiguities. The Kalos system being developed by the author is de- signed to perform both stylistic and conceptual revision. Kalos will generate portions of a draft user's guide for a microprocessor from an abstract architectural descrip- tion of the microprocessor. The system achieves concep- tual generation using a discourse schema system [McK- ewon 1985, Paris 1985]; stylistic generation will be rule- based. The revision component will review the generated text and produce recommendations to the conceptual and stylistic components as to how to improve the text. Kalos takes a knowledge intensive approach. Each com- ponent of the system, including conceptual generation, stylistic generation, and revision, has access to the full knowledge of the system, and they use the same infer- ence system. This use of a unified knowledge base lets the revision component identify easily both the concepts and schema slots from which the surface string was gen- erated. This type of association is crucial for a revision system. In systems where knowledge is localized, it is dif- ficult or impossible to determine the deep level knowledge responsible for a particular subtext. In Kalos, conceptual revision will he applied to at least three situations. 
First, the Kalos revision module will detect situations where a preferred word or phrase will improve the text. Second, it will detect the need for an example to produce clearer text. Third, it will attempt to identify paragraphs that are too short or too long. Kalos generates text aimed at engineers and others experienced with microprocessors, using preferred words and phrases common to user's guides covering various mi- croprocessors. The revision module will manage use of preferred words for two reasons. First, performing pre- ferred word processing in the revision component reduces the complexity of the generation components. Second, using preferred phraseology can affect both the concep- tual and the stylistic components, so placing the logic for handling preferred words and phrases in the revision component localizes the necessary knowledge structures for easier maintenance and expansion. For example, consider a description of the address bus of the Zilog Z-80 microprocessor: "The address bus of the Zilog Z-80 microprocessor is sixteen bits wide." Using the preferred phrase "address space", the same fact can be restated as follows: "The Zilog Z-80 has a sixty-four kilobyte address space." 347 The first sentence relates an attribute of the address bus, while the second sentence makes a statement di- rectly about the processor. The second sentence both uses a preferred way of describing the processor's maxi- mum memory size and gives an important feature of the microprocessor. It is thus desirable to include it in an overview paragraph of the microprocessor rather than in a following paragraph describing its buses. Kalos will contain rules indicating preferred phrases for the discourse goal of describing a microprocessor. In this example, the relevant rule states that if the size of the address bus is described, replace the sentence with a de- scription of the address space of the microprocessor. As noted above, Kalos will have a representation of the sur- face sentence which includes the surface representation and associations to the concepts and schemata from which the sentence was generated. By inspecting the underly- ing concept, Kalos can determine that the rule should be applied. It can then locate the schemata responsible for the text and make the revision. The revision component of Kalos will be used to sug- gest at which points in the text an example is appropriate. This processing is placed in the revision module to re- duce the complexity of the conceptual generation module. Examples will sometimes be included in the text in the description of individual instructions. Instructions that are straightforward do not require an example. Consider the add instruction of typical microprocessor. A typical reader of a microprocessor user's guide will gain little or no information from an example of the add instruction after reading the description of the register transfers in- volved. This is not the case, however, with more compli- cated instructions involving several registers and register transfers. In Kalos, the process schema selects the knowledge structures needed to describe the actions of an instruc- tion. This schema has an optional example slot which will initially be left empty by the conceptual generation mod- ule. The Kalos revision module inspects the underlying conceptual structures of instruction descriptions to deter- mine if an instruction is complicated, based on the num- ber of register transfers and the number of registers in- volved. 
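These two revision triggers can be thought of as rules over the concept structures that the revision module inspects. The fragment below is a hypothetical encoding: the concept fields, thresholds, and function names are invented for illustration and are not part of Kalos.

```python
def preferred_phrase_rule(concept):
    """If a sentence describes the width of the address bus, suggest
    regenerating it as a statement about the processor's address space."""
    if (concept["relation"] == "attribute"
            and concept["object"] == "address-bus"
            and concept["attribute"] == "width"):
        bits = concept["value"]                     # e.g. 16
        return {"revise": "replace-sentence",
                "new-concept": {"relation": "address-space",
                                "object": "processor",
                                "value": 2 ** bits}}  # e.g. 64K for 16 bits
    return None

def needs_example(instruction, max_transfers=2, max_registers=2):
    """Flag an instruction as complicated, and so worth an example, when it
    involves many register transfers or registers (thresholds are invented)."""
    return (len(instruction["register_transfers"]) > max_transfers or
            len(instruction["registers"]) > max_registers)
```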
When a complicated instruction is identified, the revision module will suggest that the generation module expand the text by filling the example slot of the process schema. It is then the task of the conceptual generation component to construct an example. Kalos's third type of conceptual revision relates to the size of generated paragraphs. Extremely short or long paragraphs are sometimes appropriate, but they are sus- pect and will be examined by the revision component for possible restructuring. Kalos will attempt to expand small paragraphs by sug- gesting revisions that fill optional schema slots when the text is regenerated. In Kalos, text can be expanded by adding an example or comparing and contrasting the ob- ject being described to another object. The suggestions to add text will be inspected by the generation module and implemented if they meet two criteria. First, the knowl- edge base must contain the information necessary to fill the optional schema slot. Second, the inclusion of the ad- ditional knowledge must pass a test for salience. Salience will be based in part on deviation from typicality [Cline and Nutter, 1989]. The revision module will also try to restructure long paragraphs. It will look at both the surface text and the underlying concepts from which the text was gener- ated in order to produce suggestions for the revision. To reduce the amount of text, the revision component will suggest that the generation component either remove an optional schema slot or take a different choice point in a schema. Targets for removal include embedded com- pare and contrast schemata and example slots in process schemata. The revision module may also select a different choice point in the constituency schema to list part cat- egories rather than parts. For example, an overview of a typical microprocessor would do better to list instruction categories than to list over a hundred instructions. In reducing long paragraphs, the revision module will have some simple characterizations as to how important the removed information is. Based on these measures, the re- vision component may decide to retain the lengthy para- graph. Cline, B. E. & Nutter, 5. T. (1989) Implications of natural categories for natural language generation. In: Proceed- ings of the First Annual SNePS Workshop. Gregg, L. W. & Steinberg, E. R. (Eds.) (1980) Cognitive Processes in Writing. Hillsdale, N J: Erlbaum. Hayes, J. R. • Flower, L. S. (1980) Identifying the Orga- nization of Writing Processes. In: Gregg & Steinberg. Kempen, G. (Ed.) (1987) Natural Language Generation. Dordrecht: Martinus Nijhoff Publishers. McKeown, K. R. (1985) Text Generation: Using Dis- course Strategies and Focus Constraints to Generate Nat- ural Language Text. Cambridge: Cambridge University Press. Paris, C. L. (1985) Description Strategies for Naive and Expert Users. In: Proceedings of the 23rd Annual Meeting of the Association of Computational Linguistics. Chicago, I11. Vaughan, M. M. & McDonald, D. D. (1986) A Model of Revision in Natural Language Generation. In: Proceed- ings of the 24th Annual Meeting of the Association for Computational Linguistics. New York. Yazdani, M. (1987) Reviewing as a Component of the Text Generation Process. In: Kempen 348
Modifying Beliefs in a Plan-Based Dialogue Model Lynn Lambert Department of Computer and Information Sciences University of Delaware Newark, Delaware 197161 1 Introduction Previous models of discourse have inadequately accounted for how beliefs change during a conversation. This paper outlines a model of dialogue which main- tains and updates a user's multi-level belief model as the discourse proceeds. This belief model is used in a plan-recognition framework to identify communicative goals such as expressing surprise. 2 Plans, Beliefs, and Processing My plan-based model of dialogue incrementally builds a structure of the discourse (a Dialogue Model, or DM) using a multi-level belief model updated after each utterance. The belief model contains the beliefs as- cribed to the user during the course of the conversation and how strongly each belief is held. Researchers [1, 3, 5] have noted that discourse understanding can be enhanced by recognizing a user's goals, and that this recognition process requires reason- ing about the agent's beliefs [7]. For example, in order to recognize from utterance IS2 in the following dia- logue that the speaker has the communicative goal of expressing surprise at the proposition that Dr. Smith is teaching CIS360 and not just asking if Dr. Smith is teaching CIS420, it is necessary for the system to be able to plausibly ascribe to IS the beliefs that 1) Dr. Smith is teaching CIS420; 2) that this somehow implies that Dr. Smith is not teaching CIS360; and 3) that IP believes that Dr. Smith is teaching CIS360. ISI: Who is teaching CIS 360? IPl: Dr. Smith. IS2: Dr. Smith is teaching CIS 420, isn't she? IP2: Yes, she is. Dr. Smith is teaching two courses. IS3: What time is CIS 360? My model ascribes these beliefs to IS as the discourse proceeds, anti uses the ascribed beliefs for recognizing utterances that involve negotiation dialogues. Without the ability to modify a belief model as a dialogue pro- gresses, it would not be possible to plausibly ascribe 1) or 3), so it is unclear how recognizing expressions of surprise would be accomplished in systems such as Litman's [5] that recognize discourse goals but do not maintain belief models. IS2 also exemplifies how people may have levels of belief and indicate those levels in the This material is based upon work supported by the National Science Foundation under Grant No. IRI-8909332. The Govern- ment has certain rights in this material. surface form of utterances. Here, IS uses a tag question to indicate that he thinks that Dr. Smith is teaching CIS420, but is not certain of it. My belief model main- tains three levels of belief, three levels of disbelief, and one level indicating no belief about a proposition. My process model begins with the semantic rep- resentation of an utterance. The effects of the surface speech act, such as a tag question, are used to suggest augmentations to the belief model. Plan inference rules are used to infer actions that might motivate the utter- ance; the belief ascription process during constraint sat- isfaction determines whether it is reasonable to ascribe the requisite beliefs to the agent of the action and, if not, the inference is rejected. Focusing heuristics allow expectations derived from the existing dialogue context to guide the recognition process by preferring those in- ferences that lead to the most coherent expansions of the existing dialogue model. 
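The belief model itself can be pictured as a small table of propositions and strengths. The sketch below uses a seven-point scale matching the description above (three levels of belief, three of disbelief, and one of no belief); the numeric encoding, class interface, and the particular strengths ascribed are my own simplification, not the system's representation.

```python
# Strengths: +3..+1 = strong/moderate/weak belief, 0 = no belief,
#            -1..-3 = weak/moderate/strong disbelief.
class BeliefModel:
    def __init__(self):
        self.beliefs = {}                    # proposition -> strength

    def strength(self, prop):
        return self.beliefs.get(prop, 0)

    def ascribe(self, prop, strength):
        """Record a belief ascribed to the user, e.g. from the surface form
        of an utterance (a tag question suggests a weak, uncertain belief)."""
        self.beliefs[prop] = strength

# After IS2 ("Dr. Smith is teaching CIS 420, isn't she?") the model might hold:
user = BeliefModel()
user.ascribe("teaches(smith, cis420)", 1)    # tag question: uncertain belief
user.ascribe("teaches(smith, cis360)", -1)   # doubts the answer given in IP1
```

On this encoding, once IP2's explanation is accepted, the strength of teaches(smith, cis360) would be raised and Dr. Smith recorded as an exception to the one-course default.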
The resultant DM contains a structure of the dia- logue at every point in the discourse, including three dif- ferent kinds of goals, each modeled on a separate level: the domain level models domain goals such as travel- ing by train; the problem-solving level, plan-construction goals such as instantiating a variable in a plan; and the discourse level, communicative goals such as express. ing surprise. Within each of these levels, actions may contribute to other actions on the same level; for exam- ple, on the discourse level, providing background data, asking a question, and answering a question all can be part of obtaining information. 2 So, actions at each level form a tree structure in which each node represents an action that a participant is performing and the chil- dren of a node represent actions pursued in order to perform the parent action. This tree structure allows my model to capture the relationship among several ut- terances that are all part of the same higher-level dis- course plan, which is not possible in Litman's model [5]. In addition, an action on one level may contribute to, or link to, an action on an immediately higher level. For example, discourse actions may be executed to at- tain the knowledge needed for problem-solving actions at the middle level. This tripartite, plan-based model of discourse fa- 2The DM is really a mental model of intentions [7] which im- plicitly captures a number of intentions that are attributed to the participants, such as the intention that the participants follow through with the subactions that are part of plans for actions in the DM. 349 cilitates recognition of changing beliefs as the dialogue progresses. Allen's representation of an Inform speech act [1] assumed that a listener adopted the communi- cated proposition. Clearly, listeners do not adopt every- thing they are told (e.g., IS2 indicates that IS does not immediately accept that Dr. Smith is teaching CIS360). Perrault [6] assumed that a listener adopted the com- municated proposition unless the listener had conflict- ing beliefs, as in IS2. Unfortunately, Perrault assumes that people's beliefs persist so it would not be possible for Perranlt to model IS adopting IP's explanation in IP2. I am assuming that the participants are involved in a cooperative dialogue, so try to square away their beliefs [4]. Thus, after every Inform action, a speaker expects the listener either to accept any claims that the speaker made or to initiate a negotiation dialogue. 3 Ac- ceptance can be communicated in two ways. Either the listener can explicitly indicate acceptance (e.g., "oh, al- right"), or the listener can implicitly convey acceptance [2] by making an utterance which cannot be interpreted as initiating a negotiation dialogue. Since both parties are engaged in a cooperative dialogue in which beliefs are squared away, this failure to initiate a negotiation di- alogue by default indicates (implicit) acceptance of any claims not disputed. This corresponds with a restricted form of Perrault's default reasoning about the effects of Inform acts [6]. An example of implicit acceptance is considered in the next section. 3 Example Consider the dialogue model given in Section 2. 
The process model infers from the first utterance that IS is executing a high level discourse action of Obtain.Info- Ref to determine who is teaching CIS360 and problem- solving actions of Insfanfiate- Var and Build-Plan in or- der to build a plan to take CIS360 so that IS may even- tually execute a domain action, Take-Course, to take CIS360. IS2 is recognized as an expression of surprise at IP's answer since acceptance or negotiation of the answer is expected and since the following beliefs can be ascribed to IS: 1) as a default rule, that teachers generally teach only one course; 2) that Dr. Smith is already teaching CIS420 (from the tag question form); and 3) that the combination of 1) and 2) implies that Dr. Smith is not teaching CIS360. IP responds by try- ing to make her answer believable and to resolve the conflict. This is done by informing IS that his belief about Dr. Smith teaching CIS420 is correct, but that Dr. Smith is an exception to the default rule. Focusing heuristics suggest explicit acceptance of or objection to IP~ as ways to continue the current dis- course plan. However utterance IS3, instead, pursues a 3A third possibility exists: that the participants agree to dis- agree about a particular point, and continue the dialogue. My model will handle this also, but it is not preferred, and for space reasons will not be considered further here. completely new discourse action, Obtain-Info-Ref, un- related to the original Obtain-Info-Ref, though still re- lated to the problem-solving action of Instantiate-Var in order to build a plan to take CIS360. Since a new discourse plan is being pursued, the process model in- fers by default that IP2 has been accepted because oth- erwise IS would have initiated a negotiation dialogue. Since the inform action is accepted (implicitly), this ac- tion, and the higher level actions that it contributes to, are considered to be successfully completed, so the goals and effects of these plans are considered to hold. Some of the goals of these plans are that 1) IS believes that Dr. Smith teaches both CIS360 and CIS420, and thus is an exception to the default rule that teachers only teach one course and 2) IS knows that Dr. Smith is the faculty member that teaches CIS360, the answer to the original question that IS asked. Once the process model recog- nizes IS3 as pursuing this new Obtain-Info-Ref action, the belief model is updated accordingly. 4 Conclusion Previous models of dialogue have inadequately accounted for changing beliefs of the participants. This paper has outlined a plan-based model of dialogue that makes use of beliefs currently ascribed to the user, ex- pectations derived from the focus of attention in the di- alogue, and implicit or explicit cues from the user both to identify communicative goals and to recognize altered user beliefs. References [1] James F. Allen. A Plan-Based Approach to Speech Act Recognition. PhD thesis, University of Toronto, Toronto, Ontario, Canada, 1979. [2] S. Carberry. A pragmatics-based approach to ellipsis res- olution. Computational Linguistics, 15(2):75-96, 1989. [3] B. Grosz and C. Sidner. Attention, intention, and the structure of discourse. Computational Linguistics, 12(3):175-204, 1986. [4] Aravind K. Joshi. Mutual beliefs in question-answer sys- tems. In N. Smith, editor, Mutual Beliefs, pages 181- 197, New York, 1982. Academic Press. [5] D. Litman and J. Allen. A plan recognition model for subdialogues in conversation. Cognitive Science, 11:163- 200, 1987. [6] R. Perrault. 
An application of default logic to speech act theory. In P. Cohen, J. Morgan, and M. Pollack, editors, Intentions in Communication, pages 161-185. MIT Press, Cambridge, Massachusetts, 1990. [7] Martha Pollack. A model of plan inference that distinguishes between the beliefs of actors and observers. In Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics, pages 207-214, New York, New York, 1986.
RESOLVING A PRAGMATIC PREPOSITIONAL PHRASE ATTACHMENT AMBIGUITY Christine H. Nal~tani Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104 emaih [email protected] 1. Introduction To resolve or not to resolve, that is the structural ambigu- ity dilemma. The traditional wisdom is to disambiguate only when it matters in terms of the meaning of the utterance, and to do so using the computationally least costly information. NLP work on PP-attachment has followed this wisdom, and much effort has been focused on formulating structural and lexical strategies for resolving noun-phrase and verb-phrase (NP-PP vs. VP-PP) attachment ambiguity (e.g. [8, 11]). In one study, statistical analysis of the distribution of lexical items in a very large text yielded 78% correct parses while two humans achieved just 85%[5]. The close performance of machine and human led the authors to pose two issues that will be addressed in this paper: is the predictive power of distributional data due to "a complementation relation, a modification relation, or something else", and what charac- terizes the attachments that escape prediction? 2. Pragmatically ambiguous PPs Although structural and lexical rules alone do not suffice to disambiguate all kinds of PPs, discourse modelling is viewed as computationally costly (cf. [1]). The debate over resolu- tion strategies is not simply about practicality, but rather, at stake is the notion of what exactly it means for a PP to attach. This paper defends discourse-level strategies by arguing that a certain PP-attachment ambiguity, sentential vs. verb-phrase (S-PP vs. VP-PP), reflects a third kind of relation that is pragmatic in nature. As noted in [11], context-dependent preferences cannot be computed a priori, so pragmatic PP-attachment ambiguities are among those that defy structural and lexical rules for disambiguation. Another criticism aimed at discourse-level approaches is that pragmatic ambiguities can be left unresolved because they do not affect the meaning of an utterance. In the case of S-PPs and VP-PPs, however, the linguistic evidence points to significant meaning differences (section 3). This paper offers a unified account of the linguistic behavior of these PPs which is expressed in a new formalism (section 4), and concludes that the resolution of pragmatic PP-attachment ambiguity is necessary for language understanding (section 5). 3. The need to disambiguate 3.1 Linguistic evidence Linguists have identified instrumental, locative and temporal adverbial PPs as the most structurally unrestricted, context- dependent types of PPs [6, 10]. These kinds of PPs often can attach either to S or VP. Thus, Warren sang in the park can be paraphrased as either Where Warren sang was in the park or What Warren did in the park was sing. Kuno argues that the former interpretation involves a place-identifying VP-PP, and the latter a scene-setting S-PP. Also, the following mean- ing differences occur: given-new/theme-rheme S-PPs are given/themes, VP- PPs are new/themes. preposability S-PPs can be preposed, preposed VP-PPs sound awkward and often change meaning. 351 entailments S-PP utterances have no entailments of the utterance without the PP. For VP-PPs, the utterance without the PP is entailed only if the utterance is affir- mative. negation S-PPs always lie outside the scope of negation, VP-PPs may or may not lie inside the scope of negation. These aspects of meaning cannot be dismissed as spurious. 
Consider Kuno's pair of sentences: • Jim didn't visit museums in Paris, but he did in London (1). • Jim didn't visit museums in Paris: he visited museums in London (2). Kuno assigns (1) the interpretation in which'the PPs are sentential and two events are described: although Jim visited museums only in London, he also went to Paris. Sentence (2) is assigned the reading that Jim was not in Paris at all but went only to London where he visited museums. The PPs are verb-phrasal and only one event is being talked about. 3.2 A pragmatic relation The behavior of these adverbial PPs reflects neither a com- plementation nor a modification relation. If attachment is dictated by complementation, an instrumental PP should al- ways appear as an argument of the verb predicate in logical form. But this sacrifices entailments for affirmative VP-PP utterances; 'butter(toast,knife)' does not logically entail 'but- ter(toast)' [2, 3]. If construed as a modification relation, at- tachment is redundant with phrase structure information and curiously depends on whether the subject, or any other con- stituent outside the VP, is or is not modified by the PP. There may well be reasons to preserve these relations in the syrt- tactic structure, but they axe not the relations that desribd the behavior of pragmatically ambiguous PPs. The linguistic evidence suggests that the S-PP vs. VP-PP distinction reflects a pragmatic relation, namely a discourse entity specification relation where specify means to refer in a model [4]. Since this relation cannot be represented by tra- ditional phrase structure trees, the meaning differences that distinguish the two kinds of PPs must be captured by a dif- ferent formal structure. The proposed event formalism treats utterances with adverbial PPs as descriptions of events and is adapted from Davidson's logical form for action sentences [2] using restricted quantification. 4. A unified formal account 4.1 Event representations Davidson's logical form consists of an existentially quanti- fied event entity variable and predication, as in (3c)(Agt(Jones, e) A Act(butter, e) A Obj(toast, e) A Instr(knife, e)) for Jones buttered the toast with the knife. Davidson assigns equal status to all modifiers, thereby allowing events, like ob- jects and people, to be described by any combination of their properties. This flattening of the argument structure clears the way for using restricted quantification to 'elevate' some predicates to event-specifying status. Following [12], the structure 3eP restricts the range of e to those entities that satisfy P, an arbitrarily complex predicate of the form AuP~(zl,tt) ^ ... ^ P,,,(z,n,n). In expressions of the form (3e:)~uPl(zl, tt)A...APm(zm, u))[RI (Yl, e)A...ARn(yn, c)], event-specifying predicates appear in the A-expression while the other predicates remain in the predication Re. Here- after, the term event description refers to the ),-expression, and event predication to the sentence predicate Re. The two parts together comprise an event representation. 4.2 Applying the formalism In the formalism, (3) represents sentence (1) and (4), (2): (Be : )~uAgt(J, u) A Loc(P,u))-,[Act(v,e) A Obj(m,e)] A (3e : )~uAgt( J, u) A Loc(L, u) )[act(v, e) A Obj(m, e)] (3) -(Be : )tuAgt(J, u) A Act(v, u) A Obj(m,u))[Loc(P,e)] A (Be: AuAgt( J, u) A Act(v, u) A Obj(m, u))[Loc(L, e)] (4) In (3), the thematic S-PPs (in bold) are represented in the event descriptions, whereas in (4), the nonthematic VP-PPs are in the event predications. 
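Spelled out, with J, P, L, v, and m abbreviating Jim, Paris, London, visit, and museums as in the text, representations (3) and (4) read:

```latex
\begin{align*}
(3)\quad & (\exists e : \lambda u.\,Agt(J,u) \wedge Loc(P,u))\,\neg[Act(v,e) \wedge Obj(m,e)] \\
         & \wedge\ (\exists e : \lambda u.\,Agt(J,u) \wedge Loc(L,u))\,[Act(v,e) \wedge Obj(m,e)] \\[4pt]
(4)\quad & \neg(\exists e : \lambda u.\,Agt(J,u) \wedge Act(v,u) \wedge Obj(m,u))\,[Loc(P,e)] \\
         & \wedge\ (\exists e : \lambda u.\,Agt(J,u) \wedge Act(v,u) \wedge Obj(m,u))\,[Loc(L,e)]
\end{align*}
```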
Now the well-worn given-new distinction can be replaced by the more precise distinction made by the event formalism. Event-specifying PPs appear in the event description and contribute to the specification of an event entity in the discourse model. Predication PPs appear in the event predication and convey new information about the specified entity.

The formalism shows how preposing a VP-PP can change the meaning of the utterance. If the PPs in (2) are preposed, as in In Paris, Jim didn't visit museums: in London, he visited museums, the original reading is lost. This is shown in the representation:

¬(∃e : λu.Agt(J,u) ∧ Act(v,u) ∧ Obj(m,u) ∧ Loc(P,u))
∧ (∃e : λu.Agt(J,u) ∧ Act(v,u) ∧ Obj(m,u) ∧ Loc(L,u)).

Since the event descriptions conflict (one event cannot take place in two places), this sentence can no longer be understood as describing a single event.

The formalism also shows different effects of negation on event-specifying and predication PPs. Sentence (2) denies the existence of any 'Jim visiting museums in Paris' event, so the quantifier lies within the scope of negation in (4). In (3) negation scopes only the event predication; sentence (1) expresses a negative fact about one event, and an affirmative fact about another. In general, a PP that lies outside the scope of negation appears in the description Pu of a representation of the form (∃e : λu.Pu) ¬[Re]. A PP that lies inside appears in the predication Re of a representation of the form ¬(∃e : λu.Pu) [Re].

Finally, the formalism lends insight into differences in entailments. The following entailment relationship holds for affirmative VP-PP sentences, where Rn(yn,e) represents the PP predicate:

(∃e : λu.Pu) [R1(y1,e) ∧ ... ∧ Rn-1(yn-1,e) ∧ Rn(yn,e)]  ⊨  (∃e : λu.Pu) [R1(y1,e) ∧ ... ∧ Rn-1(yn-1,e)].

A PP predicate Rn(yn,e) in a negated event predication may or may not be negated, so the entailment for negative VP-PP sentences is blocked:

(∃e : λu.Pu) ¬[R1(y1,e) ∧ ... ∧ Rn-1(yn-1,e) ∧ Rn(yn,e)]  ⊭  (∃e : λu.Pu) ¬[R1(y1,e) ∧ ... ∧ Rn-1(yn-1,e)].

Why S-PP sentences have no entailments is a separate matter. Eliminating an event-specifying PP from an event description yields a representation with a different description. Intuitively, it seems desirable that no entailment relations hold between different types of entities. The formalism preserves this condition.

The proposed formalism succeeds in capturing the discourse entity specification relation and lends itself naturally to processing in an NLP system that takes seriously the dynamic nature of context. Such a system would, for each utterance, construct an event representation, search for a discourse entity that satisfies the event description, and use the event predication to update the information about that entity in the discourse model.

5. Conclusion

A preliminary algorithm for processing highly ambiguous PPs has been worked out in [7]. The algorithm uses intonation [9], centering and word order information to construct and process event representations in a discourse model structured after [4]. The wider applicability of the two-part event formalism has not yet been tested. Nevertheless, one conclusion is that the value of resolving any structural ambiguity can only be measured in terms of the semantics of the structural formalism itself. In the case of VP-PP vs. S-PP ambiguity, an NLP system must not idly wait for syntax to choose how a PP should pragmatically function.
The traditional wisdom- find the meaning and do so efficiently- instead suggests that more productive than demanding of syntax unreasonably diverse expressive powers is to search for direct linguistic correlates of pragmatic meaning that can be efficiently encoded in a dynamic pragmatic formalism. Acknowledgements The author thanks Barbara Grosz and Julia Hirschberg, who both advised this research, for valuable comments and guidance; and acknowledges current support from a Na- tional Science Foundation Graduate Fellowship. This paper stems from research carried out at Harvard University and at AT&T Bell Laboratories. References [1] Altmann, G. and M. Steedman 1988. Interaction with context during human sentence processing, Cognition, 30(3). [2] Davidson, D. 1967. The logical form of action sentences, in Davidson and Harman, eds., The Logic o.f Grammar, pp. 235-246, Dickenson Publishing Co., Inc., Encino, CA, 1975. [3] Fodor, J. A. 1972. Troubles about actions, in Harman and Davidson, eds., Semantics o.f Natural Language, pp. 48-69, D. Reidel, Dordrecht-Holland. [4] Grosz, B. J. and C. Sidner 1986. Attention, intentions, and the structure of discourse, CL, 12(3). [5] Hindle, D. and M. Rooth 1990. Structural ambiguity and lexical relations, Proceedings of the DARPA Speech and Natural Language Workshop, Hidden Valley, Penn- sylvania. [6] Kuno, S. 1975. Conditions for verb phrase deletion, Foundations o.f Language, 13. [7] Nakatani, C. 1990. A discourse modelling approach to the resolution of ambiguous prepositional phrases, manuscript. [8] Pereira, F. C. N. 1985. A new characterization of at- tachment preferences, in Dowty, Karttunen and Zwicky, eds., Natural Language Parsing, pp. 307-319, Cambridge University Press, Cambridge. [9] Pierrehumbert, J. and J. Hirschberg 1990. The mean- ing of intonational contours in the interpretation of dis- course, in Cohen, Morgan and Pollack, eds., Intentions in Communication, pp. 271-311, MIT Press. [10] Reinhart, T. 1983. Anaphora and Semantic Interpreta- tion, University of Chicago, Chicago. [11] Shieber, S. 1983. Sentence disambiguation by a shift- reduce parsing technique, Proceedings of glst Meeting o/the ACL, Cambridge, MA. [12] Webber, B. 1983. So what can we talk about now?, in Brady and Berwick, eds., Computational Models o] Dis- course, pp. 331-371, Cambridge, MA, MIT Press.
Current Research in the Development of a Spoken Language Understanding System using PARSEC* Carla B. Zoltowski School of Electrical Engineering Purdue University West Lafayette, IN 47907 February 28, 1991 1 Introduction We are developing a spoken language system which would more effectively merge natural lan- guage and speech recognition technology by us- ing a more flexible parsing strategy and utiliz- ing prosody, the suprasegmental information in speech such as stress, rhythm, and intonation. There is a considerable amount of evidence which indicates that prosodic information impacts hu- man speech perception at many different levels [5]. Therefore, it is generally agreed that spoken language systems would benefit from its addi- tion to the traditional knowledge sources such as acoustic-phonetic, syntactic, and semantic in- formation. A recent and novel approach to incor- porating prosodic information, specifically the relative duration of phonetic segments, was de- veloped by Patti Price and John Bear [1, 4]. They have developed an algorithm for computing break indices using a hidden Markov model, and have modified the context-free grammar rules to incorporate links between non-terminals which corresponded to the break indices. Although in- corporation of this information reduced the num- ber of possible parses, the processing time in- creased because of the addition of the link nodes in the grammar. 2 Constraint Grammar Dependency Instead of using context-free grammars, we are using a natural language framework based on the *Parallel Architecture Sentence ConstraJner Constraint Dependency Grammar (CDG) for- realism developed by Maruyama [3]. This frame- work allows us to handle prosodic information quite easily, Rather than coordinating lexical, syntactic, semantic, and contextual modules to develop the meaning of a sentence, we apply sets of lexical, syntactic, prosodic, semantic, and pragmatic rules to a packed structure containing a developing picture of the structure and mean- ing of a sentence. The CDG grammar has a weak generative capacity which is strictly greater than that of context-free grammars and has the added advantage of benefiting significantly from a par- allel architecture [2]. PARSEC is our system based on the CDG formalism. To develop a syntactic and semantic analysis using this framework, a network of the words for a given sentence is constructed. Each word is given some number indicating its position rela- tive to the other words in the sentence. Once a word is entered in the network, the system assigns all of the possible roles the words can have by applying the lexical constraints (which specify legal word categories) and allowing the word to modify all the remaining words in the sentence or no words at all. Each of the arcs in the network has associated with it a matrix whose row and column indices are the roles that the words can play in the sentence. Initially, all entries in the matrices are set to one, indicat- ing that there is nothing about one word's func- tion which prohibits another word's right to fill a certain role in the sentence. Once the net- work is constructed, additional constraints are introduced to limit the role of each word in the sentence to a single function. In a spoken lan- guage system which may contain several possible candidates for each word, constraints would also 353 provide feedback about impossible word candi- dates. 
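As a rough illustration of the data structure just described, the following sketch builds a word network with all-ones matrices and applies a constraint that zeroes out incompatible role pairings. It is only a schematic rendering of the CDG parsing idea: the role inventory, the sample constraint, and all names are invented for this example and are not taken from PARSEC itself.

```python
# Schematic sketch of a CDG-style constraint network (not PARSEC code).
# The roles, the sample constraint, and the data layout are assumptions.

import itertools

ROLES = ["governor", "subject", "object", "determiner"]  # hypothetical role set

def build_network(words):
    """Every (role, role) pairing between two words starts out as possible
    (matrix entry = 1), and every word may initially play any role."""
    nodes = {i: {"word": w, "roles": set(ROLES)} for i, w in enumerate(words, 1)}
    arcs = {}
    for i, j in itertools.combinations(nodes, 2):
        arcs[(i, j)] = {(ri, rj): 1 for ri in ROLES for rj in ROLES}
    return nodes, arcs

def apply_constraint(nodes, arcs, constraint):
    """Zero out entries that violate a binary constraint; drop a role from a
    node when no remaining matrix entry supports it."""
    for (i, j), matrix in arcs.items():
        for (ri, rj), ok in matrix.items():
            if ok and not constraint(nodes[i], ri, nodes[j], rj):
                matrix[(ri, rj)] = 0
    for i, node in nodes.items():
        supported = set()
        for (a, b), matrix in arcs.items():
            for (ra, rb), ok in matrix.items():
                if ok:
                    if a == i:
                        supported.add(ra)
                    if b == i:
                        supported.add(rb)
        node["roles"] &= supported

# Example constraint (invented): two words cannot both act as the governor.
def one_governor(node_i, role_i, node_j, role_j):
    return not (role_i == "governor" and role_j == "governor")

nodes, arcs = build_network(["the", "dog", "barks"])
apply_constraint(nodes, arcs, one_governor)
```

In this scheme, a prosodic rule such as a break-index condition would simply be another constraint function passed to apply_constraint, rather than extra nonterminals in a grammar.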
• We have been able to incorporate the dura- tional information from Bear and Price quite easily into our framework. An advantage of our approach is that the prosodic information is added as constraints instead of incorporat- ing it into a parsing grammar. Because CDG is more expressive than context-free grammars, we can produce prosodic rules that are more ex- pressive than Bear and Price are able to pro- vide by augmenting context-free grammars, Also by formulating prosodic rules as constraints, we avoid the need to clutter our rules with nonter- minals required by context-free grammars when they are augmented to handle prosody. Assum- ing O(n4/log(n)) processors, the cost of apply- ing each constraint is O(log (n))[2]. Whenever we apply a constraint to the network, our pro- cessing time is incremented by this amount. In contrast, Bear and Price, by doubling the size of the grammar are multiplying the processing time by a factor of 8 when no prosodic information is available (assuming (2n) 3 = 8n 3 time). 3 Current Research Our current research effort consists of the devel- opment of algorithms for extracting the prosodic information from the speech signal and incor- poration of this information into the PARSEC framework. In addition, we will be working to interface PARSEC with the speech recognition system being developed at Purdue by Mitchell and Jamieson. We have selected a corpus of 14 syntactically ambiguous sentences for our initial experimen- tation. We have predicted what prosodic fea- tures humans use to disambiguate the sentences and are attempting to develop algorithms to ex- tract those features from the speech. We are hoping to build upon those algorithms presented in [1, 4, 5]. Initially we are using a professional speaker trained in prosodics in our experiments, but eventually we will test our results with an untrained speaker. Although our current system allows multiple word candidates, it assumes that each of the pos- sible words begin and end at the same time. It currently does not allow for non-aligned word boundaries. In addition, the output of the speech recognition system which we will be utilizing will consist of the most likely sequence of phonemes for a given utterance, so additional work will be required to extract the most likely word candi- dates for use in our system. 4 Conclusion The CDG formalism provides a very promis- ing framework for our spoken language system. We believe its flexibility will allow it to over- come many of the limitations imposed by natural language systems developed primarily for text- based applications, such as repeated words and false starts of phrases. In addition, we believe that prosody will help to resolve the ambigu- ity introduced by the speech recognition system which is not present in text-based systems. 5 Acknowledgement This research was supported in part by NSF IRI- 9011179 under the guidance of Profs. Mary P. Harper and Leah H. Jamieson. References [1] J. Bear and P. Price. Prosody, syntax, and parsing. In Proceedings of the ~8th annual A CL, 1990. [2] R. Helzerman and M.P. Harper. Parsec: An archi- tecture for parallel parsing of constraint dependency grammars. In Submitted to The Proceedings o/the ~9th Annual Meeting o.f ACL, June 1991. [3] H. Maruyama. Constraint dependency grammar. Technical Report #RT0044, IBM, Tokyo, Japan, 1990. [4] P. Price, C. Wightman, M. Ostendorf, and J. Bear. The use of relative duration in syntactic disambigua- tion. In Proceedings o] 1CSLP, 1990. [5] A. Waibel. 
Prosody and Speech Recognition. Morgan Kaufmann Publishers, Los Altos, CA, 1988. 354
Syntactic Graphs and Constraint Satisfaction

Jeff Martin
Department of Linguistics, University of Maryland
College Park, MD 20742
[email protected]

In this paper I will consider parsing as a discrete combinatorial problem which consists in constructing a labeled graph that satisfies a set of linguistic constraints. I will identify some properties of linguistic constraints which allow this problem to be solved efficiently using constraint satisfaction algorithms. I then describe briefly a modular parsing algorithm which constructs a syntactic graph using a set of generative operations and applies a filtering algorithm to eliminate inconsistent nodes and edges.

The model of grammar I will assume is not a monolithic rule system, but instead decomposes grammatical problems into multiple constraints, each describing a certain dimension of linguistic knowledge. The grammar is partitioned into operations and constraints. Some of these are given in (1); note that many constraints, including linear precedence, are not discussed here. I assume also that the grammar specifies a lexicon, which is a list of complex categories or attribute-value structures (Johnson 1988), along with a set of partial functions which define the possible categories of the grammar.

(1) Operations    Constraints
    PROJECT-X     CASEMARK(X,Y)
    ADJOIN-X      THETAMARK(X,Y)
    MOVE-X        AGREE(X,Y)
    INDEX-X       ANTECEDE(X,Y)

This cluster of modules incorporates operations and constraints from both GB theory (Chomsky 1981) and TAG (Joshi 1985). PROJECT-X is a category-neutral X-bar grammar consisting of three context-free metarules which yield a small set of unordered elementary trees. ADJOIN-X, which consists of a single adjunction schema, is a restricted type of tree adjunction which takes two trees and adjoins one to a projection of the head of the other. The combined schemata are given in (2):

(2) X2 = {X1, Y2}                   specifier axiom
    X1 = {X0, Y2}                   complement axiom
    Xn = ω (a lexical category)     labeling axiom
    Xn = {Xn, Yn}                   adjunction axiom

MOVE-X constructs chains which link gaps to antecedents, while INDEX-X assigns indices to nodes from the set of natural numbers. In the parsing model to be discussed below, these make up the four basic operations of a nondeterministic automaton that generates sets of candidate structures. Although these sets are finite, their size is not bounded above by a polynomial function in the size of the input. I showed in Martin (1989) that if X-bar and adjunction rules together allow four attachment levels, then the number of possible (unordered) trees formed by unconstrained application of these rules to a string of n terminals is O(4^n). Also, Fong (1989) has shown that the number of distinct indexings for n noun phrases is bn = Σ(m=1..n) S(n,m), where S(n,m) is a Stirling number of the second kind; its closed-form solution is exponential. Unconstrained use of these operations therefore results in massive overgeneration, caused by the fact that they encode only a fragment of the knowledge in a grammar. Unlike operations, the constraints in (1) crucially depend on the attributes of lexical items and nonterminal nodes.
Three key properties of the constraints can be exploited to achieve an efficient filtering algorithm:

(i) they apply in local government configurations
(ii) they depend on node attributes whose domain of values is small
(iii) they are binary

For example, agreement holds between a phrase YP and a head X0 if and only if YP governs X0, and YP and X0 share a designated agreement vector, such as [αperson, βnumber]; case marking holds between a head X0 and a phrase YP if and only if X0 governs YP, and X0 and YP share a designated case feature; and so forth. Lebeaux (1989) argues that only closed classes of features can enter into government relations. Unlike open lexical classes such as (3a), it is feasible to list the members of closed classes extensionally, for example the case features in (3b):

(3) a. Verb: {eat, sing, cry, ...}
    b. Case: {Nom, Acc, Dat, Gen}

Constraints express the different types of attribute dependency which may hold between a governor and a governed node in a government domain. Each constraint can be represented as a binary predicate P(X,Y) which yields True if and only if a designated subset of attributes do not have distinct values in the categories X and Y. We may picture such predicates as specifying a path which must be unifiable in the directed acyclic graphs representing the categories X and Y.

Before presenting the outline of a parsing algorithm incorporating such constraints, it is necessary to introduce the notion of boolean constraint satisfaction problem (BCSP) as defined in Mackworth (1987). Given a finite set of variables {V1, V2, ..., Vn} with associated domains {D1, D2, ..., Dn}, constraint relations are stated on certain subsets of the variables; the constraints denote subsets of the cartesian product of the domains of those variables. The solution set is the largest subset of the cartesian product D1 × D2 × ... × Dn such that each n-tuple in that set satisfies all the constraints. Binary CSPs can be represented as a graph by associating a pair (Vi, Di) with each node. An edge between nodes i and j denotes a binary constraint Pij between the corresponding variables, while loops at a node i denote unary constraints Pi which restrict the domain of the node. Consistency is defined as follows:

(4) Node i is consistent iff ∀x [x ∈ Di ⇒ Pi(x)].
    Arc (i,j) is consistent iff ∀x [x ∈ Di ⇒ ∃y [y ∈ Dj ∧ Pij(x,y)]].
    A path of length 2 from node i through node m to node j is consistent iff ∀x ∀z [Pij(x,z) ⇒ ∃y [y ∈ Dm ∧ Pim(x,y) ∧ Pmj(y,z)]].
    A network is node, arc, and path consistent iff all its nodes, arcs and paths are consistent.

Path consistency can be generalized to paths of arbitrary length. The parsing algorithm tries to find a consistent labeling for a syntactic graph representing the set of all syntactic analyses of an input string (see Seo & Simmons 1989 for a similar packed representation). The graph is constructed from left to right by the operations Project-X, Adjoin-X, Move-X and Index-X, which generate new nodes and arcs. In this scheme, overgeneration does not result in an abundance of parallel structures, but rather in the presence of superfluous nodes and arcs in a single graph. Each new node and arc generated is associated with a set of constraints; these associations are defined statically by the grammar. For example, complement arcs are associated with thetamarking constraints, specifier arcs are associated with agreement constraints, and indexing arcs are associated with coreference constraints.
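The consistency conditions in (4) translate directly into a filtering routine. The sketch below gives one possible rendering of node and arc consistency over a binary constraint graph; it follows Mackworth's definitions rather than the parser described here, and the graph encoding, example domains, and function names are assumptions made only for illustration.

```python
# Illustrative node/arc consistency filtering over a binary constraint
# graph, following the definitions in (4). Not the parser's actual code.

def node_consistency(domains, unary):
    """Remove values x from D_i for which the unary predicate P_i(x) fails."""
    for i, pred in unary.items():
        domains[i] = {x for x in domains[i] if pred(x)}

def arc_consistency(domains, binary):
    """Remove x from D_i when no y in D_j satisfies P_ij(x, y); repeat
    until no domain changes (the usual arc-consistency fixpoint)."""
    changed = True
    while changed:
        changed = False
        for (i, j), pred in binary.items():
            supported = {x for x in domains[i]
                         if any(pred(x, y) for y in domains[j])}
            if supported != domains[i]:
                domains[i] = supported
                changed = True

# Toy example: two nodes whose category values must agree on a case feature.
domains = {
    "head":   {("V", "Nom"), ("V", "Acc")},
    "phrase": {("NP", "Acc")},
}
unary = {"head": lambda v: v[0] == "V",
         "phrase": lambda v: v[0] == "NP"}
binary = {("head", "phrase"): lambda a, b: a[1] == b[1],
          ("phrase", "head"): lambda a, b: a[1] == b[1]}

node_consistency(domains, unary)
arc_consistency(domains, binary)
assert domains["head"] == {("V", "Acc")}   # the Nom value loses its support
```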
On each cycle the parser attempts to connect two consistently labeled subgraphs G1 and G2, where G1 represents the analyses of a leftmost portion of the input string, and G2 represents the analyses of the rightmost substring under consideration. The parse cycle contains three basic steps: (a) select an operation (b) apply the operation to graphs G1 and G2, yielding G3 (c) apply node, arc and path consistency to the extended graph (;3. Step (c) deletes inconsistent values from the domain at a node; also, if a node or arc is inconsistent, it is deleted. Note that nodes in syntactic graphs are labeled by linguistic categories which may contain many attribute- value pairs. Thus, a node typically represents not one but a set of variables whose values are relevant to the constraint predicates. The properties of locality and finite domains mentioned above turn out to be useful in the filtering step. Locality guarantees that the algorithm need only apply in a government domain. Therefore, it is not necessary to make the entire graph consistent after each extension, but only the largest subgraph which is a government domain and contains the nodes and edges most recently connected. The fact that the domains of attributes have a limited range is useful when the value of an attribute is unknown or ambiguous. In such cases, the number of possible solutions obtained by choosing an exact value for the attribute is small. In this paper I have sketched the design of a parsing algorithm which makes direct use of a modular system of grammatical principles. The problem of overgeneration is solved by performing a limited amount of local computation after each generation step. This approach is quite different from one which preprocesses the grammar by folding together grammatical rules and constraints off-line. While this latter approach can achieve an a priori pruning of the search space by eliminating overgeneration entirely, it may do so at the cost of an explosion in grammar size. References Chomsky, N. (1981) Lectures on Government and Binding. Foris, Dordrecht. Fong, S. (1990) "Free Indexation: Combinatorial Analysis and a Compositional Algorithm. Proceedings of the ACL 1990. Johnson, M. (1988) Attribute-Value Logic and the Theory of Grammar. CSLI Lecture Notes Series, Chicago University Press. Johsi, A. (1985) "Tree Adjoining Grammars," In D. Dowty, L. Karttunen & A. Zwicky (eds.), Natural Language Processing. Cambridge U. Press, Cambridge, England. Lebeaux, D. (1989) Language Acquisition and the Form of Grammar. Doctoral dissertation, U. of Massachusetts, Amherst, Mass. Mackworth, A. (1987) "Constraint Satisfaction," In: S Shapiro (ed.), Encyclopedia of Artificial Intelligence, Wiley, New York. Mackworth, A. (1977) "Consistency in networks of relations," Artif.InteU. 8(1), 99-118. Martin, J. (1989) "Complexity of Decision Problems in GB Theory," ms., U. of Maryland. Seo, J. & R. Simmons (1989). "Syntactic Graphs: A Representation for the Union of All Ambiguous Parse Trees," Computational Linguistics 15:1. 3.56
An Incremental Connectionist Phrase Structure Parser James Henderson* U. of Pennsylvania, Dept of Computer and Information Science 200 South 33rd Philadelphia, PA 19104 1 Introduction This abstract outlines a parser implemented in a con- nectionist model of short term memory and reasoning 1 . This connectionist architecture, proposed by Shastri in [Shastri and Ajjanagadde, 1990], preserves the sym- bolic interpretation of the information it stores and manipulates, but does its computations with nodes which have roughly the same computational proper- ties as neurons. The parser recovers the phrase struc- ture of a sentence incrementally from beginning to end and is intended to be a plausible model of human sen- tence processing. The formalism which defines the grammars for the parser is expressive enough to in- corporate analyses from a wide variety of grammatical investigations 2. This combination gives a theory of hu- man syntactic processing which spans from the level of linguistic theory to the level of neuron computations. 2 The Connectionist Architec- ture In order to store and manipulate information in a con- nectionist net quickly, the information needs to be rep- resented in the activation of nodes, not the connections between nodes 3. A property of an entity can be repre- sented by having a node for the entity and a node for the property both active at the same time. However, this only permits information about one entity to be stored at any one time. The connectionist architecture used here solves this problem with nodes which, when active, fire at regular intervals. A property is predi- cated of an entity only if their nodes are firing syn- *This research was supported by DARPA grant num- ber N0014-90-J-1863 and ARO grant number DAAL03-89- C0031PRI. lAs of this writing the parser has been designed, but not coded. 2A paper about a closely related formalism was submitted to this year's regular ACL session under the title "A CCG--Like System of Types for Trees", and an older version of the later was discussed in my masters thesis ([Henderson, 1990]), where its linguistic expressiveness is demonstrated. 3This section is a very brief characterization of the core sys- tem presented in [Shastri and Ajjanagadde, 1990]. chronously. This permits multiple entities to be stored at one time by having their nodes firing in different phases. However, the number of entities is limited by the number of distinct phases which can fit in the inter- val between periodic firings. Such boundedness of hu- man conscious short term memory is well documented, where it is about seven entities. Computation using the information in the memory is done with pattern-action rules. A rule is represented as a collection of nodes which look for a temporal pattern of activation, and when it finds the pattern it modifies the memory contents. Rules can compute in parallel. 3 The Grammar Formalism I will describe grammar entries through the examples given in figure 1. Each entry must be a rooted tree fragment. Solid lines are immediate dominance links, dashed arrows are linear precedence constraints, and dotted lines, called dominance links, specify the need for a chain of immediate dominance links. Plus sub- scripts on nodes designate that the node is headed and minus subscripts that the node still needs a head. The parent of a dominance link must always be an:unheaded node. Node labels are feature structures, but corefer- ence of features between nodes is not allowed. 
In the figure, the structure for "likes" needs heads for both its NP's, thereby expressing the subcategorization for these arguments. The structure for '`white" expresses its modification of N's by having a headless root N. The structure for "who" subcategorizes for an S and specifies that somewhere within that S there must be a headless NP. These four words can combine to form a complete phrase structure tree through the appropriate equations of nodes. NP."~'VP+ w~o NP÷ p177a Figure 1: Example grammar entries. There are four operations for combining adjacent 357 tree fragments 4. The first equates a node in the left tree with the root of the right tree. As in all equations, at least one of these nodes must be unheaded, and their node labels must unify. If the node on the left is un- headed then it is subcategorisation, and if the root is unheaded then it is modification. The second combi- nation operation satisfies a dominance link in the left tree by equating the node above the dominance link to the root of the right tree and the node below the dom- inance link to another node in the right tree. This is for things such as attaching embedded subjects to their verbs and filling subject gaps. The third combination operation also involves a dominance link in the left sub- tree, but only the parent and root are equated and the dominance relationship is passed to an unheaded node in the right tree. This is for passing traces down the tree. The last operation only involves one tree frag- ment. This operation satisfies a dominance link by equating the child of the link with some node which is below the node which was the original parent of this dominance link, regardless of what nodes the link has been passed to with the third operation. This is for gap filling. Limitations on the purser's ability to de- termine what nodes are eligible for this equation force some known constraints on long distance movement. All these operations are restricted so that linear prece- dence constraints are never violated. The important properties of this formalism are its use of partiality in the specification of tree fragments and the limited domain affected by each combination operation. Partiality is essential to allow the parser to incrementally specify what it knows so far about the structure of the sentence. The fact that each combi- nation operation is only dependent on a few nodes is important both because it simplifies the parser's rules and because nodes which are no longer going to be in- volved in any equations can be forgotten. Nodes must be forgotten in order to parse arbitrarily long sentences with the memory's very limited capacity. 4 The Parser The parser builds the phrase structure for a sentence incrementally from beginning to end. After each word the short term memory contains information about the structure built so far. Pattern-action rules are then used to compute how the next word's tree can be com- bined with the current tree. In the memory, the nodes of the tree are the entities, and predicates are used to specify the necessary information about these nodes and the relationships between them. The tree in the 4Thls formalism has both a structural interpretation and a Categorial Grmmnar style interpretation. In the late~ interpre- tati~m these combination opez'atiorm have a more natural speci- fication. Unfortunately space prevents me discueming it here. memory is used as the left tree for the combination operations. The tree for the next word is the right tree. 
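One way to picture the bounded, phase-coded memory that the parser computes over is sketched below: each stored tree node occupies one of a small fixed number of phases, and a property is predicated of a node by firing in that node's phase. This is only a conceptual rendering of how the Shastri and Ajjanagadde architecture is used here; the class, the seven-phase bound as a hard limit, and all identifiers are simplifying assumptions with no claim to neural-level fidelity.

```python
# Conceptual sketch of the phase-coded short term memory (not a simulator):
# entities occupy phases, properties fire in an entity's phase, and capacity
# is bounded by the number of distinct phases available.

MAX_PHASES = 7   # rough bound on simultaneously stored entities

class ShortTermMemory:
    def __init__(self):
        self.phase_of = {}      # tree node -> phase index
        self.predications = []  # (predicate, phase) pairs, i.e. synchronous firing

    def store_entity(self, node):
        """Give a new tree node a free phase, if one remains."""
        if node in self.phase_of:
            return self.phase_of[node]
        used = set(self.phase_of.values())
        free = [p for p in range(MAX_PHASES) if p not in used]
        if not free:
            raise RuntimeError("no free phase; an old node must be forgotten first")
        self.phase_of[node] = free[0]
        return free[0]

    def predicate(self, pred, node):
        """Assert pred of node by firing pred in node's phase."""
        self.predications.append((pred, self.phase_of[node]))

    def forget(self, node):
        """A node with both a head and a parent can be dropped to free a phase."""
        phase = self.phase_of.pop(node)
        self.predications = [(p, ph) for p, ph in self.predications if ph != phase]

stm = ShortTermMemory()
stm.store_entity("NP-3")
stm.predicate("unheaded", "NP-3")      # NP-3 still needs a head
stm.predicate("needs-parent", "NP-3")
```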
For every grammar entry there are pattern-action rules for each way it could participate in a combination. When the next word is identified its grammar entries are activated and their rules each try to find a place in the current tree where their combination can be done. The best match is chosen and that rule modifies the memory contents to represent the result of its combi- nation. The gap filling combination operation is done with a rule which can be activated at any time. If the parser does not have enough space to store all the nodes for the new word's tree, then any node which has both a head and an immediate parent can be removed from the memory without changing any predications on other nodes. When the parse is done it succeeds if all nodes have heads and only the root doesn't have an immediate parent. Because nodes may be forgotten before the parse of a sentence is finished, the output of the parser is not a complete phrase structure tree. The output is a list of the combinations which were done. This is isomorphic to the complete phrase structure, since the structure can be constructed from the combination information. It also provides incremental information about the pro- gression of the parse. Such information could be used by a separate short term memory module to construct the semantic structure of the sentence in parallel with the construction of the syntactic structure. Several characteristics make this parser interesting. Most importantly, the computational architecture it uses is compatible with what we know about the ar- chitecture of the human brain. Also, its incrementality conforms to our intuitions about the incrementality of our sentence processing, even providing for incremental semantic analysis. The parallelism in the combination process provides for both lexical ambiguity and uncer- tainty about what word was heard. Only further work can determine the linguistic adequacy of the parser's grammars, but work on related formalisms provides ev- idence of its expressiveness. References [Henderson, 1990] James Henderson. Structure Uni- fication Grammar: A Unifying Framework For Investigating Natural Language. Technical Re- port MS-CIS-90-94, University of Pennsylvania, Philadelphia, PA, 1990. [Shastri and Ajjanagadde, 1990] Lokendra Shastri and Venkat Ajjanagadde. From Simple Associations to Systematic Reasoning: A Connectionist Repre- sentation of Rules, Variables and Dynamic Bind- ings. Technical Report MS-CIS-90-05, University of Pennsylvania, Philadelphia, PA, 1990. 358
A THREE-LEVEL MODEL FOR PLAN EXPLORATION Lance A. Ramshaw Department of Computer Science Bowdoin College Brunswick, ME 04011 Internet: [email protected] ABSTRACT In modeling the structure of task-related discourse using plans, it is important to distinguish between plans that the agent has adopted and is pursuing and those that are only being considered and ex- plored, since the kinds of utterances arising from a particular domain plan and the patterns of ref- erence to domain plans and movement within the plan tree are quite different in the two cases. This paper presents a three-level discourse model that uses separate domain and exploration layers, in addition to a layer of discourse metaplans, allow- ing these distinct behavior patterns and the plan adoption and reconsideration moves they imply to be recognized and modeled. DISCOURSE MODEL LEVELS In task-related discourse, much of the discourse structure derives directly from the task structure, so that a model of the agent's plans can serve as a useful discourse model, with discourse segment boundaries mapping to sub-plan boundaries and the like. This simple model works well in appli- cations like expert-apprentice dialogues, where a novice agent is currently pursuing a single domain plan. Since the discourse tracks the domain plan so closely in such cases, it is fairly easy to make the links between the agent's queries and the relevant domain plans. But this single-level model is not rich enough to handle phenomena like clarification subdia- logues or plan revision, as seen in the work of Litman, Carberry, and others. Litman's model [Lit85, LA87, LA90] employs a stack of discourse mctaplans on top of the base domain plan, produc- ing a two-level model that can handle clarification subdialogues and other discourse phenomena that go beyond the domain plans. Carberry [Carg0] adds an independent stack of discourse goals, for similar reasons. In earlier work [Ram89a], I explored the use of a different kind of metaplan to model what I called the problem-solving level, where the agent is exploring possible plans, rather than pursuing an adopted one. Such plan exploration contexts, which can include comparison between alternative plans or consideration of plans in hypothetical cir- cumstances, are quite different from adopted do- main plan execution contexts, both in terms of the reference patterns to domain plans and in terms of the kinds of utterances that are generated. This paper describes an effort to combine these earlier approaches into a three-level model where the discourse metaplans can be rooted in either the exploration or domain plan levels, so that both kinds of plan-related behavior can be modeled. Such a model can capture the differences between the plan exploration level and the domain level in terms of the appropriate plan recognition and query generation processes, thus broadening the range of discourse phenomena that can be mod- eled. It also allows us to model shifts between lev- els, as the agent explores, adopts, and reconsiders particular plans. THE NATURE OF PLAN EXPLORATION Cohen and Levesque [CLg0] have recently pointed out the theoretical problems that arise, while us- ing a planning system to model a rational agent, from failing to distinguish between a system's plans and its intentions, since agents frequently form plans that they never adopt. 
It is this same distinction that motivates the division proposed here of the domain-plan-related portion of the dis- course model into separate levels representing first those domain plans and goals which the agent has adopted and is pursuing, here called the domain 39 layer, and second those which the agent is explor- ing but has not adopted, the exploration layer. While the same domain plans give structure to these two levels, the resulting discourse phenom- ena including the space of relevant queries based on those plans and the patterns of references to plans in the plan tree are quite different. QUERY TYPES One clear difference can be seen in the content of utterances arising on those different layers from the same underlying domain plan. For example, in a banking context, consider these two queries: What is the interest rate on your pass- book account? To whom do I make out the cheek .for the initial deposit on a passbook account? The interest rate query is an example of a query based in an exploration context, since the interest rate on an account affects its desirability compared to other accounts but has no instrumental rele- vance to any of the plan steps involved in opening an account. The query about the check payee, on the other hand, plays a local instrumental role in the open-account plan, but has no relevance out- side of that particular subplan. Some queries, of course, can arise in either kind of context. For example, What's the minimum initial deposit for a passbook account? could be either an exploration level query from an agent weighing the comparative advantages of a passbook account versus a CD, or it could be a domain level query from an agent who had al- ready decided to open a passbook account, and who needed to know how large a check to write to open it. Thus the context model for that query is ambiguous between the two interpretations. There are also whole classes of queries that may be generated on the plan exploration level but that do not arise when agents are pursuing an adopted domain plan, as when an agent asks queries about the possible plans for a goal or about possible fillers for a variable within a plan. For example, What does it take $o open an account? asks about the subactions in a plan being explored, and What kinds of accounts do you offer? asks for possible fillers of the account-class vari- able. Such queries imply that the agent does not have any fully-instantiated adopted plan in mind. DOMAIN PLAN REFERENCES Another difference between the exploration and domain levels comes in the patterns of references to domain plans. An agent pursuing an adopted domain plan has a single subplan in focus and typ- ically shifts that focus in an orderly way related to the sequential steps in that plan. On the other hand, at the exploration level, the possible pat- terns of movement are much less constrained, and conflicting alternative plans or multiple hypothet- ical plans may be explored simultaneously. Explo- ration metaplans can capture these more complex patterns. For example, agents frequently generate queries that compare particular features of alternative plans for the same goal. For instance, after ask- ing about the interest rate on passbook accounts, the agent might naturally ask about the rate for CD's. This kind of comparison can be modeled by a compare-by-feature plan exploration recta- plan, which represents the close discourse connec- tion between the similar features of the two dif- ferent plans. 
Such feature by feature comparison would be hard to capture in a model based directly on the domain plans, since the focus would have to jump back and forth between the two alterna- tives. At each step, such a model would seem to predict further queries about the current plan as more likely than a jump back to a query about the other plan, while at the exploration level, we can have plan comparison metaplans that capture either a plan by plan or a feature by feature ap- proach. A different kind of complex domain plan ref- erence can occur in a hypothetical exploration query, where the agent explores plans in a con- text that includes projected states of affairs that are different from her own current world model. Of course there is a sense in which every explo- ration level query is hypothetical, since it con- cerns the preconditions or effects of executing a plan to which the agent is not yet committed, but the issue here concerns hypothetical queries that assume more than the adoption of the single plan being explored. While modeling arbitrary hypo- theticals requires more than a planning system, there are cases where the hypothetical element in the agent's question can be expressed by assum- ing the adoption of some other plan in addition to the one currently explored. For example, for a hypothetical query like If I put $1000 in a 1-year CD and with- drew it in a month, what would be the 40 penalty ? it seems that an exploration level metaplan could use the purchase-CD plan to create the hypothet- ical context in which the query about the with- drawal plan penalty is to be understood. Thus the domain plan references in exploration utterances often do not correspond closely to the shape of the domain level plan tree. Exploration metaplans can supply alternative structures that better capture the more complex reference pat- terns involved in examples like feature compar- isons or hypothetical queries. MOVES BETWEEN LEVELS Finally, distinguishing between domain plan and exploration behavior is important so that the sys- tem can recognize when the agent moves from one level to the other. If an agent has been asking evaluative queries and then proceeds to ask a pure domain level query about one of those plan op- tions, the system should recognize that the agent has adopted that particular plan and is now ac- tively pursuing it, rather than continuing to eval- uate alternatives. Such an adoption of a particular subplan establishes the expectation that the agent will continue to pursue it, perhaps asking further domain level queries, either until it is completed and focus moves on to a new plan or until a plan blockage or second thoughts on the agent's part trigger a reconsideration move back to the explo- ration level. We can see the importance of this distinction in the care taken by human agents to make clear the level at which they are operating. Agents use mood, cue phrases, and direct informing acts to keep their expert advisor informed as to whether they are speaking of adopted plans or of ones only being explored, so that the expert's plan tracking and responses can be fully cooperative. STRUCTURE OF THE MODEL THE DOMAIN LEVEL The base level in the model is always a domain plan structure representing the plans and goals that the agent is believed to have adopted and to be currently pursuing. 
These plans are orga- nized (adapting a technique from gautz [Kau85]) into a classification hierarchy based on their ef- fects, so that the subplan children of a class of plans represent alternative possible strategies for achieving the given effects. The plans also include traditional action links outlining their action de- composition in terms of more primitive plans. There are two classes of predicates in a plan definition: precondition., which must be true for the plan to be executed but which can be re- cursively planned for, and constraint, (using Lit- man's word--Carberry calls them "applicability conditions") which cannot be recursively planned for. Each predicate is also classified as to its rel- evance, where internally relevant predicates are those whose bindings must be known in order to execute the plan and external predicates are those whose bindings are relevant when evaluating the plan from the outside. Thus, using the ear- lier examples, the payee identity for write-check is only internally relevant, the interest rate for open-savings-account is only externally relevant, and the minimum initial deposit feature is rele- vant both internally and externally. This heuris- tic classification of predicates is used to indicate which ones can be expanded at the domain vs. exploration levels. THE EXPLORATION LEVEL The basic exploration metaplan is explore-plan it- self, which takes an instantiated domain plan as its single parameter. The expected default ex- ploration pattern simply follows the domain plan tree shape, exploring the subplans and actions be- neath that plan, using the explore-subplan and explore-subactlon metaplans. This default behav- ior is compiled in by linking each exploration level node to the explore-plan nodes for the subplans and subactions of the domain plan it references. Thus when the system models a move to the ex- ploration level from a given domain plan node, the entire subtree of possible plans and actions be- neath that node is also instantiated beneath the initial exploration level node. The more complex exploration level strategies are encoded as metalevel subplans and subactions of explore-plan. For example, compare-subplans is a subplan of explore-plan, and compare-by-feature is in turn one subplan of compare-subplans. The system works from this library of exploration metaplans to create trees of possible contexts be- neath each explore-plan node that model these al- ternative strategies of plan exploration. THE DISCOURSE LEVEL The metaplan structure directly underlying an ut- terance is always a discourse metaplan, though it may be as simple as an ask-value metaplan (like Litman's identify-parameter) directly based in the current domain plan context. As in Litman's ap- proach, phenomena like clarification subdialogues 41 Discourse: * (ask-sub-plans (open-savings-account agent1 ?bank)) Exploration: * (explore-plan (conduct-banking-activlty agent1)) * (explore-plan (open-savings-account agent1 ?bank)) Domain: * (manage-money agent1) * (conduct-banking-activity agentl) Figure 1: What kinds of savings accounts do you offer? can be handled by further layers of discourse meta. plans that introduce additional structure above the domain plan. In this three-level model, these discourse layer metaplans can also be based on ex- ploration level plans. In testing for a match between a given query and a discourse context like ask-value, the discourse metaplans have access to the set of relevant pred- icates from the base context. 
In determining that set, the system uses the appropriate relevance cri- teria depending on whether the base context is at the domain or exploration level. There are also particular discourse plans, such as the ask-fillers plan, that require that their base context be at the exploration level. OPERATION OF THE MODEL For each utterance, the system begins from the previous context(s) and searches for a discourse node (based either on a domain or exploration node) that matches the utterance. In the follow- ing example, an initial domain level context is as- sumed, with the default top-level goal being that deduced from the situation of the agent entering a bank and approaching the receptionist, namely, (conduct-banklng-activity ?agent) as a subplan of (manage-money ?agent). The matching context for the initial query, What kind8 of savings accounts do you offer? is seen in Figure i, with asterisks used to mark the current focused path. No match in the as- sumed context is found to this particular query using discourse metaplans based in domain plans, although one can imagine other contexts in which this query could be a step in pursuing an adopted plan, as in a journalist compiling a consumer's re- port on various banks. But using the normal plans for banking customers, this query matches only on the exploration level, where the agent is exploring the plan of opening a savings account. Note that an exploration level match could also be found by assuming that the move to the explo- ration level occurs at open-savings-account, sug- gesting that the agent has adopted not just the plan conduct-banking-activity, but also the more specific plan open-savings-account. The system finds both matches in such cases, but heuristically prefers the one which makes the weakest assump- tions about plans and goals adopted by the agent, thus preferring the model where the open-savings- account plan is only being explored. Suppose the agent continues with the query What's the interest raze on your passbook account? This is matched by a discourse plan based on exploring one of the subplans of open-savings- account, which was the previous exploration level context, as seen in Figure 2 at #1. The system also explores the possibility of matching to a dis- course plan based in a domain level plan, which would imply the agent's adoption of the plan. However, the interest rate feature has only exter- nal relevance, and thus cannot match queries on the domain level. This query does finds a second match as the beginning of a compare-by-feature (at ~2), but the heuristics prefer the match that is closer to the previous context, while discourag- ing the one-legged comparison. The agent's next query, And the rate for the investment account? 42 Discourse: * (ask-value agent ?rate (open-passbook-account ...)) Exploration: * (explore-plan (conduct-banking-activity agent1)) * (explore-plan (open-savings-account agent1 ?bank)) * (explore-plan (open-passbook-account agent1 ?bank)) #1 (compare-by-feature (open-savings-account agent1 ?bank) (compare-feature (open-passbook-account ...) (interest-rate-of ...)) #2 Domes: * (manage-money agent1) * (conduct-banking-activity agent1) Figure 2: What's the interest rate for the passbook account? 
Discourse: * (ask-value agent1 ?rate (interest-rate-of ...)) Exploration: * (explore-plan (conduct-banking-activity agent1)) * (explore-plan (open-savings-account agent1 ?bank)) (explore-plan (open-passbook-account agent1 ?bank)) (explore-plan (open-investment-account agent1 ?bank)) #i * (compare-by-feature (open-savings-account ...)(interest-rate-of ...)) (compare-feature (open-passbook-account ...)(interest-rate-of ...)) * (compare-feature (open-investment-account ...)(interest-rate-of ...)) #2 Domain: * (manage-money agent1) * (conduct-banking-activity agent 1) Figure 3: And the rate for the investment account? can also be matched in two different ways, as seen in Figure 3. One way (at #1) is based in an explore-plan for open-investment-account, sug- gesting that the agent has simply turned from ex- ploring one plan to exploring an alternative one. But this query also matches (at #2) as a second leg of the compare-by-feature subplan of explore- plan, where the query is part of the comparison between the two kinds of savings accounts based on the interest rate offered. Since that serves as a close continuation of the feature comparison in- terpretation of the previous query, the latter in- terpretation is preferred. The following two queries How big is the initial deposit for the,pass- book account? And for the investment account? can be matched by a sibling compare-by-feature subtree, as seen in Figure 4. This approach is thus able to represent the logical feature-by-feature structure of such a comparison, rather than having to bounce back and forth between explorations of the two subplan trees. The next query, OK, tuho do I see to open a passbook ac- count? makes a substantial change in the context, as 43 Discourse: * (ask-value agentl ?deposit (init-deposit-of ...)) Exploration: * (explore-plan (conduct-banking-activity agentl)) * (explore-plan (open-savings-account agentl ?bank)) (explore-plan (open-passbook-account agentl ?bank)) (explore-plan (open-investment-account agentl ?bank)) (compare-by-feature (open-savings-account ...)(interest-rate-of ...)) (compare-feature (open-passbook-account ...)(interest-rate-of ...)) (compare-feature (open-investment-account ...)(interest-rate-of ...)) * (compare-by-feature (open-savings-account ...)(init-deposit-of ...)) * (compare-feature (open-passbook-account...)(init-deposit-of ...)) * (compare-feature (open-investment-account ...)(init-deposit-of ...)) Domain: * (manage-money agent 1) * (conduct-banking-activity agentl) Figure 4: How big is the initial deposit for the passbook accountf And for the investment account? Discourse: * (ask-fillers agentl ?staff) Doma~l.* * (manage-money agent1) * (conduct-banking-activity agent1) * (open-savings-account agentl ?bank) * (open-passbook-account agent1 ?bank) * (fill-out-application agent1 ?staff ) Figure 5: OK, who do I see to open a passbook accountf shown in Figure 5. Since the choice of the bank personnel for opening an account is an internal fea- ture that can only be queried on the domain level, the only matches to this query are ones that imply that the agent has adopted the plan that she was previously exploring. Modeling that adoption, the parallel path in the domain tree to the path that was being explored becomes the current domain context, and the matching discourse plan is based there. The cue phrase "OK", of course, is a fur- ther signal of this change in level, though not one the system can yet make use of. 
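The preference behavior in these examples can be summarized as a scoring function over candidate context matches. The fragment below is one plausible way to encode the two heuristics mentioned so far (prefer the match that assumes the fewest newly adopted plans, then prefer the match closest to the previous context); it is a reconstruction for expository purposes, not Pragma's actual code, and the field names and the relative ordering of the two criteria are assumptions.

```python
# Expository sketch of the match-preference heuristics (not Pragma itself).
# A candidate records which level its base context sits on, how many domain
# plans the interpretation would newly treat as adopted, and how far its
# base context lies from the previously focused context.

from dataclasses import dataclass

@dataclass
class CandidateMatch:
    discourse_plan: str      # e.g. "ask-value", "ask-fillers"
    base_level: str          # "domain" or "exploration"
    newly_adopted: int       # plans the agent would be assumed to have adopted
    distance: int            # distance from the previously focused context

def preference_key(c: CandidateMatch):
    # Weakest assumptions first, then closeness to the previous context.
    return (c.newly_adopted, c.distance)

def best_match(candidates):
    return min(candidates, key=preference_key)

# "What kinds of savings accounts do you offer?": both readings ground the
# query at the exploration level, but one would additionally assume that
# open-savings-account has already been adopted; the weaker reading wins.
candidates = [
    CandidateMatch("ask-sub-plans", "exploration", newly_adopted=0, distance=1),
    CandidateMatch("ask-sub-plans", "exploration", newly_adopted=1, distance=0),
]
assert best_match(candidates).newly_adopted == 0
```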
In spite of that plan adoption, the agent can later reopen an exploration context concerning a subplan by saying, for example, What kinds of checks do you hayer She could also raise a query that implies a recon- sideration of the previous plan adoption by saying I forgot to ask whether there are any maintenance charges on this account. which would reestablish an exploration context of choosing between the passbook and investment ac- counts. COMPARISON WITH EXISTING WORK The general framework of using domain plans to model discourse structure is one that has been 44 widely pursued and shown to be fruitful for var- ious purposes [All79, AP80, Car84, Car85, GS85, Sid85]. Important extensions have been made more recently in plan classification [Kau85] and in modeling plans in terms of beliefs, so as to be able to handle incorrect plans [Po186, Polg0]. The most direct precursor of the model pre- sented here is Litman and Allen's work [Lit85, LA87, LAg0], which combines a domain plan model with discourse metaplans in a way that can model utterances arising from either the normal flow of domain plans, clarification subdialogues, or cases of domain plan modification. Like explo- ration metaplans, their discourse plans can handle examples that do not mirror the execution struc- ture of the domain plan. Their system, however, makes the assumption that the agent is pursuing a single domain plan. While the agent can mod- ify a plan, there is no way to capture an agent's exploration of a number of different domain plan possibilities, the use of varying exploration strate- gies, or the differences between utterances that are based on exploration plans vs. those based on do- main plans. Carberry developed a model [Carg0] that is sim- ilar to Litman's in combining domain plans with a discourse component, although this model's dis- course plans operate on a separate stack rather than as a second layer of the domain plan model. While the mechanisms of her model cover a wide variety of discourse goals, they make no distinc- tion between domain and exploration plans. They are thus also limited to following a single domain plan context at a time. In earlier work [Ram89a], I presented a model that accounts for the plan refining and query as- pects of plan exploration by using a tree of plan- building metaplans, and much of that structure is incorporated in this model. However, that version uses only a single layer of plan-building metaplans, so that it is strictly limited to plan exploration discourse. It thus cannot model queries arising di- rectly from the domain level, nor can it model the moves of plan adoption or reconsideration when the agent switches levels. The plan-building trees in that earlier version are also limited to following the structure of the domain plans, and so are un- able to represent comparison by features or other alternative exploration strategies, and that earlier model also lacks a separate discourse component. Lambert and Carberry [Lamg0, LCgl] are cur- rently working on a new, three-level approach that has much in common with the one presented here. One interesting difference is that the three levels in their model form a hierarchy, with discourse plans always rooted in exploration plans. While this may be appropriate for information-seeking discourse, allowing discourse plans to be rooted directly in domain plans can provide a natural way of representing utterances based directly on adopted plans. 
Overall, their model makes signifi- cant contributions on the discourse level, allowing for the recognition of a wide range of discourse plans like expressing surprise or warning. In con- trast, the main focus in this work has been on the exploration level, modeling alternative exploration strategies, and plan adoption and reconsideration. It would be fruitful to try to combine the two ap- proaches. IMPLEMENTATION AND FUTURE WORK The model presented here has been implemented in a system called Pragma (redone from the ear- lier Pragma system [RamSOb]) which handles the examples covered in the paper. Since the focus is on modeling plan exploration strategies, the initial context is directly input in the form of a domain plan with its parameter values, and the queries are input as meaning representations. The out- put after each query is the updated set of context models. The system has been exercised in the banking and course registration domains, though it is only populated with enough domain plans to serve as a testbed for the plan exploration strategies. The exploration level is the most developed, including metaplans for constraining or instantiating plan variables and for exploring or comparing subplans using various strategies. The discourse level cur- rently includes only the metaplans ask-value, ask- plans, and ask-fillers. Important next steps include expanding the col- lection of exploration level metaplans from the samples worked out so far to better character- ize the full range of plan exploration strategies that people actually use, validating that collec- tion against real data. It would be particularly interesting to add coverage for the hypothetical queries discussed above, where the assumed event is another known domain plan. The coverage of discourse level metaplans should be expanded, to better explore their interaction with exploration plans. The system should also be made sen- sitive to other indicators for recognizing moves between the exploration and domain levels be- sides the class of predicate queried, including verb 45 mood, cue phrases, and direct inform statements by the agent. CONCLUSIONS This work suggests that plan exploration meta- plans can be a useful and domain independent way of expanding the range of discourse phenom- ena that can be captured based on a model of the agent's domain plans. While the more complex exploration strategies complicate the plan recogni- tion task of connecting discourse phenomena with the underlying domain plans, exploration meta- plans can successfully model those strategies and also allow us to recognize the moves of plan explo- ration, adoption, and reconsideration. References [AI179] lAPS0] [CarS4] [CarS,] [Car90] [CL90] [csss] [Kan65] James F. Allen. A Plan-Based Ap- proach to Speech-Act Recognition. PhD thesis, University of Toronto, 1979. James F. Allen and C. Raymond Per- rault. Analyzing intention in utter- ances. Artificial Intelligence, 15:143- 178, 1980. Sandra Carberry. Understanding prag- matically ill-formed input. In Pro- ceedings of the lOth International Con- ference on Computational Linguistics, pages 200-206, 1984. Sandra Carberry. A pragmatics-based approa~ to understanding intersenten- tim ellipsis. In Proceedings of the ~3rd Annual Meeting o/the ACL, pages 188- 197, 1985. Sandra Carberry. Plan Recognition in Natural Language Dialogue. The MIT Press, 1990. Philip R. Cohen and Hector J. Levesque. Intention is choice with commitment. Artificial Intelligence, 42:213-261, 1990. Barbara Grosz and Candace Sidner. 
The structures of discourse structure. Technical Report 6097, Bolt Beranek and Newman, 1985. Henry A. Kautz. Toward a theory of plan recognition. Technical Report 162, University of Rochester, 1985. [LA87] [LA90] [Lam90] [LC91] [Lit85] [I>o186] [Pol90] [Ram89a] [R SOb] [Sid85] Dianne Litman and James Allen. A plan recognition model for subdialogues in conversation. Cognitive Science, 11:163-200, 1987. Dianne Litman and James Allen. Dis- course processing and commonsense plans. In P. Cohen, J. Morgan, and M. Pollack, editors, Intentions in Com- munication, pages 365-388. MIT Press, 1990. Lynn Lambert. A plan-based model of discourse understanding. Technical Report 91-04, University of Delaware, 1990. Lynn Lambert and Sandra Carberry. A tripartite, plan-based model of dia- logue. In Proceedings of the ~gth An- nual Meeting of the ACL, 1991. Dianne J. Litman. Plan Recognition and Discourse Analysis: An Integrated Approach for Understanding Dialogues. PhD thesis, University of Rochester, 1985. Martha E. Pollack. Inferring Do- main Plans in Question-Answering. PhD thesis, University of Pennsylvania, 1986. Martha E. Pollack. Plans as complex mental attitudes. In P. Cohen, J. Mor- gan, and M. Pollack, editors, Intentions in Communication, pages 77-103. MIT Press, 1990. Lance A. Ramshaw. A metaplan model for problem-solving discourse. In Pro- ceedings of the Fourth Conference of the European Chapter of the ACL, pages 35-42, 1989. Lance A. Ramshaw. Pragmatic Knowl- edge for Resolving Ill-Formedness. PhD thesis, University of Delaware, 1989. Candace Sidner. Plan parsing for intended response recognition in dis- course. Computational Intelligence, 1:1-10, 1985. 46
A TRIPARTITE PLAN-BASED MODEL OF DIALOGUE Lynn Lambert Sandra Carberry Department of Computer and Information Sciences University of Delaware Newark, Delaware 19716, USA Abstract 1 This paper presents a tripartite model of dialogue in which three different kinds of actions are modeled: domain actions, problem-solving actions, and dis- course or communicative actions. We contend that our process model provides a more finely differenti- ated representation of user intentions than previous models; enables the incremental recognition of com- municative actions that cannot be recognized from a single utterance alone; and accounts for implicit acceptance of a communicated proposition. 1 Introduction This paper presents a tripartite model of di- alogue in which intentions are modeled on three levels: the domain level (with domain goals such as traveling by train), the problem-solving level (with plan-construction goals such as instantiating a pa- rameter in a plan), and the discourse level (with communicative goals such as ezpressing surprise). Our process model has three major advantages over previous approaches: 1) it provides a better repre- sentation of user intentions than previous models and allows the nuances of different kinds of goals and processing to be captured at each level; 27 it enables the incremental recognition of commumca- tire goals that cannot be recognized from a single utterance alone; and 3) it differentiates between il- locutionary effects and desired perlocutionary ef- fects, and thus can account for the failure of an inform act to change a heater's beliefs[Per90] ~. 2 Limitations of Current Models of Discourse A number of researchers have contended that a coherent discourse consists of segments that are related to one another through some type of structuring relation[Gri75, MT83] or have used rhetorical relations to generate coherent text[Hov88, MP90]. In addition, some researchers 1 This material is based upon work supported by the Na- tional Science Foundation under Grant No. IRI-8909332. The Government has certain rights in this material. 2We would ilke to thank Kathy McCoy for her comments on various drafts of this paper. have modeled discourse based on the semantic rela- tionship of individual clauses[Po186a] or groups of clauses[Rei78]. But all of the above fail to capture the goal-oriented nature of discourse. Grosz and Sidner[GS86] argue that recognizing the structural relationships among the intentions underlying a dis- course is necessary to identify discourse structure, but they do not provide the details of a compu- tational mechanism for recognizing these relation- ships. To account for the goal-oriented nature of discourse, many researchers have adopted the planning/plan-recognition paradigm[APS0, PA80] in which utterances are viewed as part of a plan for accomplishing a goal and understanding con- sists of recognizing this plan. The most well- developed plan-based model of discourse is that of Litman and AIIen[LA87]. However, their discourse plans conflate problem-solving actions and commu- nicative actions. For example, their Correct-Plan has the flavor of a problem-solving plan that one would pursue in attempting to construct another plan, whereas their Identify-Parameter takes on some of the characteristics of a communicative plan that one would pursue when conveying information. 
More significantly, their model cannot capture the relationship among several utterances that are all part of the same higher-level discourse plan if that plan cannot be recognized and added to their plan stack based on analysis of the first utterance alone. Thus, if more than one utterance is necessary to recognize a discourse goal (as is often the case, for example, with warnings), Litman and Allen's model will not be able to identify the discourse goal pur- sued by the two utterances together or what role the first utterance plays with respect to the sec- ond. Consider, for example, the following pair of utterances: (1) The city of zz~ is considering filing for bankruptcy. (2) One of your mutual funds owns zzz bonds. Although neither of the two utterances alone con- stitutes a warning, a natural language system must be able to recognize the warning from the set of two utterances together. Our tripartite model of dialogue overcomes these limitations. It differentiates among domain, problem-solving, and communicative actions yet models the relationships among them, and enables 47 the recognition of communicative actions that take more than one utterance to achieve but which can- not be recognized from the first utterance alone. In the remainder of this paper, we will present our tripartite model, motivating why our model recognizes three different kinds of goals, de- scribing our dialogue model and how it is built in- crementally as a discourse proceeds, and illustrat- ing this plan inference process with a sample dia- logue. Finally, we will outline our current research on modeling negotiation dialogues and recognizing discourse acts such as expressing surprise. 3 A Tripartite Model 3.1 Kinds of Goals and Plans Our plan recognition framework recognizes three different kinds of goals: domain, problem- solving, and discourse. In an information-seeking or expert-consultation dialogue, one participant is seeking information and advice about how to con- struct a plan for achieving some domain goal. A problem-solving goal is a metagoal that is pursued in order to construct a domain plan[Wil81, LA87, Ram89]. For example, if an agent has a goal of earning an undergraduate degree, the agent might have the problem-solving goal of selecting the in- stantiation of the degree parameter as BA or BS and then the problem-solving goal of building a sub- plan for satisfying the requirements for that degree. A number of researchers have demonstrated the im- portance of modeling domain and problem-solving goals[PA80, WilS1, LA87, vBC86, Car87, Ram89]. Intuitively, a discourse goal is the com- municative goal that a speaker has in making an utterance[.Car89], such as obtaining information or expressing surprise. Recognition of discourse goals provides expectations for subsequent utter- ances and suggests how these utterances should be interpreted. For example, the first two utterances in the following exchange establish the expectation that S1 will either accept S2's response, or that S1 will pursue utterances directed toward understand- ing and accepting it[Car89]. Consequently, Sl's sec- ond utterance should be recognized as expressing surprise at S2's statement. SI: When does CS400 meet? $2:GS400 meets on Monday from 7.9p.m. SI: GS400 meets at night? A robust natural language system must recognize discourse goals and the beliefs underlying them in order to respond appropriately. The plan library for our process model con- tains the system's knowledge of goals, actions, and plans. 
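As a rough illustration only (the names below are invented for this sketch and echo plans used later in the paper, not the authors' data structures), such a library might simply index plan operators by the kind of action they achieve:

# Minimal sketch of a plan library keyed by the three kinds of actions.
# Purely illustrative; the plan names mirror Figure 1 and Section 3.4.
PLAN_LIBRARY = {
    "domain":          ["Get-Minor", "Take-Course", "Learn-Material"],
    "problem-solving": ["Build-Plan", "Instantiate-Vars"],
    "discourse":       ["Obtain-Info-Ref", "Ask-Ref", "Inform", "Tell", "Request"],
}

def kinds_of(action_name: str) -> list:
    """Which level(s) an action of this name belongs to (illustrative lookup)."""
    return [kind for kind, names in PLAN_LIBRARY.items() if action_name in names]

# e.g. kinds_of("Build-Plan") -> ["problem-solving"]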
Although domain plans are not mutually known by the participants[Pol86b], how to communicate and how to solve problems are common skills that people use in a wide variety of contexts, so the system can assume that knowledge about discourse and problem-solving plans is shared knowledge. Our representation of a plan includes a header giving the name of the plan and the action it accomplishes, preconditions, applicability conditions, constraints, a body, effects, and goals. Applicability conditions represent conditions that must be satisfied for the plan to be reasonable to pursue in the given situation, whereas constraints limit the allowable instantiation of variables in each of the components of a plan[LA87, Car87]. Especially in the case of discourse plans, the goals and effects are likely to be different. This allows us to differentiate between illocutionary and perlocutionary effects and capture the notion that one can, for example, perform an inform act without the hearer adopting the communicated proposition (consider, for example, someone saying "I informed you of X but you wouldn't believe me"). Figure 1 presents three discourse plans, one problem-solving plan, and one domain plan.

Domain Plan-D1: {_agent earns a minor in _subj}
  Action:  Get-Minor(_agent, _subj)
  Prec:    have-plan(_agent, Plan-D1, Get-Minor(_agent, _subj))
  Body:    1. Complete-Form(_agent, change-of-major-form, add-minor)
           2. Take-Required-Courses(_agent, _subj)
  Effects: have-minor(_agent, _subj)
  Goal:    have-minor(_agent, _subj)

Problem-solving Plan-P1: {_agent1 and _agent2 build a plan for _agent1 to do _action}
  Action:  Build-Plan(_agent1, _agent2, _action)
  AppCond: want(_agent1, _action)
  Constr:  plan-for(_plan, _action), action-in-plan-for(_laction, _action)
  Prec:    selected(_agent1, _action, _plan), know(_agent2, want(_agent1, _action)),
           knowref(_agent1, _prop, prec-of(_prop, _plan)), knowref(_agent2, _prop, prec-of(_prop, _plan)),
           knowref(_agent1, _laction, need-do(_agent1, _laction, _action)),
           knowref(_agent2, _laction, need-do(_agent1, _laction, _action))
  Body:    1. for all actions _laction in _plan, Instantiate-Vars(_agent1, _agent2, _laction)
           2. for all actions _laction in _plan, Build-Plan(_agent1, _agent2, _laction)
  Effects: have-plan(_agent1, _plan, _action)
  Goal:    have-plan(_agent1, _plan, _action)

Discourse Plan-C1: {_agent1 asks _agent2 for the values of _term for which _prop is true}
  Action:  Ask-Ref(_agent1, _agent2, _term, _prop)
  AppCond: want(_agent1, knowref(_agent1, _term, believe(_agent2, _prop))),
           ¬knowref(_agent1, _term, believe(_agent2, _prop))
  Constr:  term-in(_term, _prop)
  Body:    Request(_agent1, _agent2, Informref(_agent2, _agent1, _term, _prop)),
           Make-Question-Acceptable(_agent1, _agent2, _prop)
  Effects: believe(_agent2, want(_agent1, Informref(_agent2, _agent1, _term, _prop)))
  Goal:    want(_agent2, Answer-Ref(_agent2, _agent1, _term, _prop))

Discourse Plan-C2: {_agent1 informs _agent2 of _prop}
  Action:  Inform(_agent1, _agent2, _prop)
  AppCond: believe(_agent1, know(_agent1, _prop)), ¬believe(_agent1, believe(_agent2, _prop))
  Body:    Tell(_agent1, _agent2, _prop), Make-Prop-Believable(_agent1, _agent2, _prop)
  Effects: believe(_agent2, want(_agent1, believe(_agent2, _prop)))
  Goal:    know(_agent2, _prop)

Discourse Plan-C3: {_agent1 tells _prop to _agent2}
  Action:  Tell(_agent1, _agent2, _prop)
  AppCond: believe(_agent1, _prop), ¬believe(_agent1, believe(_agent2, believe(_agent1, _prop)))
  Body:    Surface-Inform(_agent1, _agent2, _prop), Make-Statement-Understood(_agent1, _agent2, _prop)
  Effects: told-about(_agent2, _prop)
  Goal:    believe(_agent2, believe(_agent1, _prop))

Figure 1: Sample Plans from the Plan Library

3.2 Structure of the Model

Agents use utterances to perform communicative acts, such as informing or asking a question. These discourse actions can in turn be part of performing other discourse actions; for example, providing background data can be part of asking a question. Discourse actions can take more than one utterance to complete; asking for information requires that a speaker request the information and believe that the request is acceptable (i.e., that the speaker say enough to ensure that the speaker believes that the request is understandable, justified, and the necessary background information is known by the respondent). Thus, actions at the discourse level form a tree structure in which each node represents a communicative action that a participant is performing and the children of a node represent communicative actions pursued in order to perform the parent action.

Information needed for problem-solving actions is obtained through discourse actions, so discourse actions can be executed in order to perform problem-solving actions as well as being part of other discourse actions. Similarly, domain plans are constructed through problem-solving actions, so problem-solving actions can be executed in order to eventually perform domain actions as well as being part of plans for other problem-solving actions. Therefore, our Dialogue Model (DM) contains three levels of tree structures, one for each kind of action (discourse, problem-solving, and domain), with links among the actions on different levels. (The DM is really a mental model of intentions[Pol86b]. The structures shown in our figures implicitly capture a number of intentions that are attributed to the participants, such as the intention that the hearer recognize that the speaker believes the applicability conditions for the just-initiated discourse actions are satisfied, and the intention that the participants follow through with the subactions that are part of plans for actions in the DM.) At the lowest level the discourse actions are represented; these actions may contribute to the problem-solving actions at the middle level which, in turn, may contribute to the domain actions at the highest level (see Figure 3). The planning agent is the agent of all actions at the domain level, since the plan being constructed is for his subsequent execution. Since we are assuming a cooperative dialogue in which the two participants are working together to construct a domain plan, both participants are joint agents of actions at the problem-solving level. Both participants make utterances, and thus either participant may be the agent of an action at the discourse level.

For example, a DM derived from two utterances is shown in Figure 3; its construction is described in Section 3.3. The DM in Figure 3 indicates that the inform and the request were both part of a plan for asking for information; the inform provided background data enabling the information request to be accepted by the hearer. Furthermore, the actions at the discourse level were pursued in order to perform a Build-Plan action at the problem-solving level, and this problem-solving action is being performed in order to eventually perform the domain action of getting a math minor. The current focus of attention on each level is marked with an asterisk.
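To make the shape of the DM concrete, here is a minimal sketch of the three linked tree levels; the class and field names are invented for illustration and are not the authors' implementation.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ActionNode:
    """One action at some level of the DM; children are actions pursued as part of its plan."""
    action: str
    children: List["ActionNode"] = field(default_factory=list)
    motivates: Optional["ActionNode"] = None  # link to the action it serves at the next level up

@dataclass
class DialogueModel:
    """Three levels of tree structures, with links among actions on adjacent levels."""
    domain: List[ActionNode] = field(default_factory=list)
    problem_solving: List[ActionNode] = field(default_factory=list)
    discourse: List[ActionNode] = field(default_factory=list)
    # Current focus of attention on each level (the asterisks in the figures).
    focus: Dict[str, Optional[ActionNode]] = field(
        default_factory=lambda: {"domain": None, "problem_solving": None, "discourse": None})

# Illustrative state after utterance (a) of the example developed in Section 3.4
# ("I want a math minor."), mirroring Figure 2:
get_minor = ActionNode("Get-Minor(S1, Math)")
build_plan = ActionNode("Build-Plan(S1, S2, Get-Minor(S1, Math))", motivates=get_minor)
tell = ActionNode("Tell(S1, S2, want(S1, Get-Minor(S1, Math)))")
inform = ActionNode("Inform(S1, S2, want(S1, Get-Minor(S1, Math)))",
                    children=[tell], motivates=build_plan)

dm = DialogueModel(domain=[get_minor], problem_solving=[build_plan], discourse=[inform])
dm.focus = {"domain": get_minor, "problem_solving": build_plan, "discourse": tell}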
3.3 Building the Dialogue Model Our process model uses plan inference rules[APS0, Car87], constraint satisfaction[LAB7], focusing heuristics[Car87], and features of the new utterance to identify the relationship between the utterance and the existing dialogue model. The plan inference rules take as input a hypothesized action Ai and suggest other actions (either at the same level in the DM or at the immediately higher level) that might be the agent's motivation for Ai. The focusing heuristics order according to coherence the ways in which the DM might be ex- panded on each of the three levels to incorporate the actions motivating a new utterance. Our focus- ing heuristics at the discourse level are: 1. Expand the plan for an ancestor of the cur- rently focused action in the existing DM so that it includes the new utterance, preferring to expand ancestors closest to the currently fo- cused action. This accounts for new utterances that continue discourse acts already in the DM. 2. Enter a new discourse action whose plan can be expanded to include both the existing dis- course level of the DM and the new utterance. This accounts for situations in which actions at the discourse level of the previous DM are part of a plan for another discourse act that had not yet been conveyed. 3. Begin a new tree structure at the discourse level. This accounts for initiation of new dis- course plans unrelated to those already in the DM. The focusing heuristics, however, are not identical for all three levels. Although it is not pos- sible to expand the plan for the focused action on the discourse level since it will always be a surface speech act, continuing the plan for the currently focused action or expanding it to include a new action are the most coherent expectations on the problem-solving and domain levels. This is because the agents are most expected to continue with the problem-solving and domain plans on which their attention is currently centered. In addition, since actions at the discourse and problem-solving lev- els are currently being executed, they cannot be returned to (although a similar action can be initi- ated anew and entered into the model). However, since actions at the domain level are part of a plan that is being constructed for future execution, a domain subplan already completely developed may be returned to for revision. Although such a shift in attention back to a previously considered sub- plan is not one of the strongest expectations, it is still possible at the domain level. Furthermore, new and unrelated discourse plans will often be pursued during the course of a conversation whereas it is un- likely that several different domain plans (each rep- resenting a topic shift) will be investigated. Thus, on the domain level, a return to a previously con- sidered domain subplan is preferred over a shift to a new domain plan that is unrelated to any already in the DM. In addition to different focusing heuristics and different agents at each level, our tripartite model enables us to capture different rules regard- ing plan retention. A continually growing dialogue structure does not seem to reflect the information retained by humans. We contend that the domain plan that is incrementally fleshed out and built at the highest level should be maintained through- out the dialogue, since it provides knowledge about the agent's intended domain actions that will be useful in providing cooperative advice. However, problem-solving and discourse actions need not be retained indefinitely. 
If a problem-solving or dis- course action has not yet completed execution, then its immediate children should be retained in the DM, since they indicate what has been done as part of performing that as yet uncompleted action; its other descendants can be discarded since the apar- ent actions that motivated them are finished. (For illustration purposes, all actions have been retained in Figure 3.) We have expanded on Litman and Allen's notion of constraint satisfaction[LA87] and Allen and Perrault's use of beliefs[AP80]. Our applica- bility conditions contain beliefs by the agent of the plan, and our recognition algorithm requires that the system be able to plausibly ascribe these beliefs in recognizing the plan. The algorithm is given the semantic representation of an utterance. Then plan inference rules are used to infer actions that might motivate the utterance; the belief ascription process during constraint satisfaction determines whether it is reasonable to ascribe the requisite beliefs to the agent of the action and, if not, the inference is rejected. The focusing heuristics allow expecta- 50 tions derived from the existing dialogue context to guide the recognition process by preferring those inferences that can eventually lead to the most ex- pected expansions of the existing dialogue model. In [Car89] we claimed that a cooperative participant must explicitly or implicitly accept a response or pursue discourse goals directed toward being able to accept the response. Thus our model treats failure to initiate a negotiation dialogue as implicit acceptance of the proposition conveyed by the response. Consider, for example, the following dialogue: SI: Who is teaching CS360 next semester? $2: Dr. Baker. SI: What time does it meet? Since Sl's second utterance cannot be interpreted as initiating a negotiation dialogue, S1 has implic- itly accepted the proposition that Dr. Baker is teaching CS360 next semester as true. This no- tion of implicit acceptance is similar to a restricted form of Perrault's default reasoning about the ef- fects of an inform act[Per90] and is explained fur- ther in [Lam91]. 3.4 An Example As an example of how our process model assimilates utterances and can incrementally rec- ognize a discourse action that cannot be recognized from a single utterance, consider the following: SI: (a) I want a math minor. (b) What should I do? A few of the plans needed to handle this example are shown in Figure 1; these plans assume a co- operative dialogue. From the surface inform, plan inference rules suggest that S1 is executing a Tell action and that this Tell is part of an Inform ac- tion (the applicability conditions for both actions can be plausibly ascribed to S1) and these are en- tered into the discourse level of the DM. No fur- ther inferences on this level are possible since the Inform can be part of several discourse plans and there is no existing dialogue context that suggests which of these S1 might be pursuing. The system infers that S1 wants the goal of the Inform action, namely know(S2, want(S1, Get-Minor(S1, Math))). Since this proposition is a precondition for building a plan for getting a math minor, the system infers that S1 wants Build-Plan(S1, $2, Get-Minor(S1, math)) and this Build.Plan action is entered into the problem-solving level of the DM. 
From this, the system infers that S1 wants the goal of that action; since this result is the precondition for getting a math minor, the system infers that S1 wants to get a math minor, and this domain action is entered into the domain level of the DM. The resulting discourse model, with links between the actions at different levels and the current focus of attention on each level marked with an asterisk, is shown in Figure 2.

[Figure 2: Dialogue Model from the first utterance -- a three-level tree with Get-Minor(S1, Math) at the domain level, Build-Plan(S1, S2, Get-Minor(S1, Math)) at the problem-solving level, and the chain Inform, Tell, and Surface-Inform of want(S1, Get-Minor(S1, Math)) at the discourse level; enable-arcs connect the levels and subaction-arcs connect actions within a level.]

The semantic representation of (b) is:
J I I need-do{St, I Discourse Level sub.ctlon-,rc l l l l l I l t l l l J l l i I' l luformrefq S~, Sl, .~ctionl, .sctlont t }ei-Minor(Sl. MAth'})) subtction-src I ! Surf&ce-J~.equest(Sl, S2s lnfotmref(S2, SI, .sctionl, | J .~¢tionl? Oet.lCfiuor{Sl! MAth}}} ] J ......J Figure 3: Dialogue Model derived from two utterances ond focusing heuristic is tried. It suggests that the new utterance and the actions at the discourse level in the existing DM might both be part of an ex- panded plan for some other discourse action. The inferences described above lead from (b) to the dis- course action Ask-Ref whose plan can be expanded as shown in Figure 3 to include, as background for the Ask-Ref, the Inform and the Tell actions that were entered into the DM from (a). s The focusing heuristics suggest that the most coherent continu- ation at the problem-solving level is that the new utterance is continuing the Build-Plan that was pre- viously marked as the current focus of attention at that level. This is possible by instantiating .action2 with Get-minor(S1, math). Thus the DM is ex- panded as shown in Figure 3 with the new focus of attention on each level marked with an asterisk. Note that Sl's overall goal of obtaining information was not conveyed by (a) alone; consequently, only after both utterances were coherently related could it be determined that (a) was paxt of an overall discourse plan to obtain information and that (a) was intended to provide background data for the request being made in (b). 7 6Note that the actions in the body of Ask.Re] ~re not ordered; an agent can provide d~'ification and background information before or after asking a question. 7An inform action could also be used for other pur- poses, including justifying a question and merely conveying information. Further queries would lead to more elaborate tree structures on the problem-solving and domain levels. For example, suppose that S1 is told that Math 210 is a required course for a math minor. Then a subsequent query such as Who is teach- ing Math 210 next semester e. would be performing a discourse act of obtaining information in order to perform a problem-solving action of instantiat- ing a parameter in a Learn-Material domain action. Since learning the materiM from one of the teach- ers of a course is part of a domain plan for taking a course and since instantiating the parameters in ac- tions in the body of domain plans is part of building the domain plan, further inferences would indicate that this Instanfiafe- Wars problem-solving action is being executed in order to perform the problem- solving action of building a plan for the domain action of taking Math 210 in order to build a plan to get a math minor. Consequently, the domain and problem-solving levels would be expanded so that each contained several plans, with appropriate links between the levels. 4 Current and Future Work We are currently examining the applications that this model has in modeling negotiation dia- logues and discourse acts such as convince, warn, and express surprise. To extend our notion of im- plicit acceptance of a proposition to negotiation di- 52 alogues, we are exploring treating a discourse plan as having successfully achieved its goal if it is plau- sible that all of its subacts have achieved their goals and all of its applicability conditions (except those negated by the goal) are still true after the subacts have been executed. 
Especially in negotiation dialogues, a system must account for the fact that a user may change his mind during a conversation. But often people only slightly modify their beliefs. For example, the system might inform the user of some proposition about which the user previously held no beliefs. In that case, if the user has no reason to disbelieve the proposition, the user may adopt that proposition as one of his own beliefs. However, if the user disbelieved the proposition before the system performed the inform, then the user might change from disbelief to neither belief nor disbelief; a robust model of understanding must be able to handle a response that expresses doubt or even disbelief at a previous utterance, especially in modeling arguments and negotiation dialogues. Thus, a system should be able to (1) represent levels of belief, (2) recognize how a speaker's utterance conveys these different levels of belief, (3) use these levels of belief in recognizing discourse plans, and (4) use previous context and a user's responses to model changing beliefs.

We are investigating the use of a multi-level belief model to represent the strength of an agent's beliefs and are studying how the form of an utterance and certain clue words contribute to conveying these beliefs. Consider, for example, the following two utterances:

(1) Is Dr. Smith teaching CS310?
(2) Isn't Dr. Smith teaching CS310?

A simple yes-no question as in utterance (1) suggests only that the speaker doesn't know whether Dr. Smith teaches CS310, whereas the form of the question in utterance (2) suggests that the speaker has a relatively strong belief that Dr. Smith teaches CS310 but is uncertain of this. These beliefs conveyed by the surface speech act must be taken into account during the plan recognition process. Thus our plan recognition algorithm will first use the effects of the surface speech act to suggest augmentations to the belief model. These augmentations will then be taken into account in deciding whether requisite beliefs for potential discourse acts can be plausibly ascribed to the speaker and will enable us to identify such discourse actions as expressing surprise. [Lam91] further discusses the use of a multi-level belief model and its contribution in modeling dialogue.

5 Related Work

Ramshaw[Ram91] has developed a model of discourse that contains a domain execution level, an exploration level, and a discourse level. In his model, discourse plans can refer either to the exploration level (corresponding to queries about possible ways of achieving a goal) or to the domain execution level (corresponding to queries after commitment has been made to achieve a goal in a particular way). In our tripartite model, discourse, problem-solving, and domain plans form a hierarchy with links between adjacent levels. Whereas Ramshaw's exploration level captures the consideration of alternative plans, our intermediate level captures the notion of problem-solving and plan-construction, whether or not there has been a commitment to a particular way of achieving a domain goal. Thus a query such as To whom do I make out the check? would be recognized as a query against the domain execution level in Ramshaw's model (since it is a query made after commitment to a plan such as opening a passbook savings account[Ram91]), but our model would treat it as a discourse plan that is executed to further the problem-solving plan of instantiating a parameter in an action in a domain plan -- i.e., our model would view the agent as asking a question in order to further the construction of his partially constructed domain plan.

Our tripartite model offers several advantages. Ramshaw's model assumes that the top-level domain plan is given at the outset of the dialogue and then his model expands that plan to accommodate user queries. Our model, on the other hand, builds the DM incrementally at each level as the dialogue progresses; it therefore can handle bottom-up dialogues[Car87] in which the user's overall top-level goal is not explicitly known at the outset, and can recognize discourse actions that cannot be identified from a single utterance. In addition, our domain, problem-solving, and discourse plans are all recognized incrementally using basically the same plan recognition algorithm on each level[Wil81]. Consequently, we foresee being able to extend our model to include additional pairs of problem-solving and discourse levels whose domain level contains an existing problem-solving or discourse plan; this will enable us to handle utterances such as What should we work on next? (a query trying to further construction of a problem-solving plan) and Do you have information about ... ? (a query trying to further construction of a discourse plan to obtain information).

Ramshaw's plan exploration strategies, his differentiation between exploration and commitment, and his heuristics for recognizing adoption of a plan are very important. While our work has not yet addressed these issues, we believe that they are consistent with our model and are best addressed at our problem-solving level by adding new problem-solving metaplans. Such an incorporation will have several advantages, including the ability to handle utterances such as: If I decide to get a BA degree, then I'll take French to meet the foreign language requirement. In the above case, the speaker is still exploring a plan for getting a BA degree, but has committed to taking French to satisfy the foreign language requirement should the plan for the BA degree be adopted. It does not appear that Ramshaw's model can handle such contingent commitment. This enrichment of our problem-solving level may necessitate changes to our focusing heuristics.
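The paper leaves the treatment of such contingent commitment to future problem-solving metaplans; purely as an illustration of the kind of record such a metaplan might leave behind (not the authors' design, and with all names invented), one could imagine:

from dataclasses import dataclass

@dataclass
class ContingentCommitment:
    """Illustrative record of a commitment that holds only if an explored plan is adopted,
    as in 'If I decide to get a BA degree, then I'll take French to meet the foreign
    language requirement.' All field values here are hypothetical."""
    condition_plan: str    # plan still under exploration, e.g. "Get-Degree(S1, BA)"
    committed_action: str  # contingently committed action, e.g. "Take-Course(S1, French)"
    purpose: str           # e.g. "satisfy the foreign language requirement"

def becomes_active(commitment: ContingentCommitment, adopted_plans: set) -> bool:
    """Once the explored plan is adopted, the contingent commitment would become an
    ordinary commitment within the domain plan under construction."""
    return commitment.condition_plan in adopted_plans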
Raymond Perrault. Analyzing intention in utterances. Artifi- cial Intelligence, 15:143-178, 1980. Sandra Carberry. Pragmatic modeling: Toward a robust natural language inter- face. Computational Intelligence, 3:117- 136, 1987. Sandra Carberry. A pragmatic.s-based ap- proach to ellipsis resolution. Computa- tional Linguistics, 15(2):75-96, 1989. Joseph E. Grimes. The Thread of Dis- course. Mouton, 1975. Barbara Grosz and Candace Sidner. At- tention, intention, and the structure of discourse. Computational Linguistics, 12(3):175-204, 1986. Eduard H. Hovy. Planning coherent multisentential text. Proceedings of the ~6th Annual Meeting of the Association for Computational Linguistics, pages 163- 169, 1988. Aravind K. Joshi. Mutual beliefs in question-answer systems. In N. Smith, ed- itor, Mutual Beliefs, pages 181-197, New York, 1982. Academic Press. Diane Litman and James Allen. A plan recognition model for subdialogues in con- versation. Cognitive Science, 11:163-200, 1987. [Lam91] [MP90] [MT83] [PASO] [Per90] [Po186a] [Po186b] [Ram89] [Ram91] [Rei78] [vBC86] [Wil81] Lynn Lambert. Modifying beliefs in a plan-based discourse model. In Proceed- ings of the 29th Annual Meeting of the ACL, Berkeley, CA, June 1991. Johanna Moore and Cecile Paris. Plan- ning text for advisory dialogues. In Pro- ceedings of the 27th Annual Meeting of the Association for Computational Linguis- tics, pages 203-211, Vancouver, Canada, 1990. William C. Mann and Sandra A. Thomp- son. Relational propositions in dis- course. Technical Report ISI/RR-83-115, ISI/USC, November 1983. Raymond Perrault and James Allen. A plan-based analysis of indirect speech acts. American Journal of Computational Linguistics, 6(3-4):167-182, 1980. Raymond Perrault. An application of de- fault logic to speech act theory. In Philip Cohen, Jerry Morgan, and Martha Pol- lack, editors, Intentions in Communica- tion, pages 161-185. MIT Press, Cam- bridge, Massachusetts, 1990. Livia Polanyi. The linguistics discourse model: Towards a formal theory of dis- course structure. Technical Report 6409, Bolt Beranek and Newman Laboratories Inc., Cambridge, Massachusetts, 1986. Martha Pollack. Inferring Domain Plans in Question-Answering. PhD thesis, University of Pennsylvania, Philadelphia, Pennsylvania, 1986. Lance A. Ramshaw. Pragmatic Knowl- edge for Resolving lll-Formedness. PhD thesis, University of Delaware, Newark, Delaware, June 1989. Lance A. Rarnshaw. A three-level model for plan exploration. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, Berkeley, California, 1991. Rachel Reichman. Conversational co- herency. Cognitive Science, 2:283-327, 1978. Peter van Beck and Robin Cohen. To- wards user specific explanations from ex- pert systems. In Proceedings of the Sixth Canadian Conference on Artificial Intelli- gence, pages 194-198, Montreal, Canada, 1986. Robert Wilensky. Meta-planning: Repre- senting and using knowledge about plan- ning in problem solving and natural lan- g uage understanding. Cognitive Science, :197-233, 1981. 54
DISCOURSE RELATIONS AND DEFEASIBLE KNOWLEDGE* Alex Lascarides t Human Communication Research Centre University of Edinburgh 2 Buccleuch Place, Edinburgh, EH8 9LW Scotland alex@uk, ac. ed. cogsc£ Nicholas Asher Center for Cognitive Science University of Texas Austin, Texas 78712 USA asher@sygmund, cgs. utexas, edu Abstract This paper presents a formal account of the temporal interpretation of text. The distinct nat- ural interpretations of texts with similar syntax are explained in terms of defeasible rules charac- terising causal laws and Gricean-style pragmatic maxims. Intuitively compelling patterns of defea,- sible entailment that are supported by the logic in which the theory is expressed are shown to un- derly temporal interpretation. The Problem The temporal interpretation of text involves an account of how the events described are related to each other. These relations follow from the discourse relations that are central to temporal import. 1 Some of these are listed below, where the clause a appears in the text before fl: Narration(a,fl): The event described in fl is a consequence of (but not necessarily caused by) tile event described in a: (1) Max stood up. John greeted him. Elaboration(a,~): The event described in /? contributes to the occurrence of the culmination *This paper is greatly influenced by work reported in (Lascarides & Oberlander, 1991). We would llke to thank Hans Kamp, Michael Morreau and .Ion Oberlander for their significant contributions to the content of this pa- per. All mistakes are solely our responsibility. t The support of the Science and Engineering Research Council through project number GR/G22077 is gratefully acknowledged. HCRC is supported by the Economic and Social Research Council. 1 Extensive classifications of discourse relations are of- fered in (Polanyi, 1985), (Scha & Polanyi, 1988) and (Thompson & Mann, 1987). of the event described in a, i.e. fl's event is part of the preparatory phase of a's: 2 (2) The council built the bridge. The architect drew up the plans. Explanation(a, fl): For example the event de- scribed in clause fl caused the event described in clause a: (3) Max fell. John pushed him. Background(a, fl): For example the state de- scribed in fl is the 'backdrop' or circumstances under which the event in a occurred (so the event and state temporally overlap): (4) Max opened the door. The room was pitch dark. Result(a, fl): The event described in a caused the event or state described in fl: (5) Max switched off the light. The room was pitch dark. We assume that more than one discourse re- lation can hold between two sentences; the sick- ness in (6) describes the circumstances when Max took the aspirin (hence the sentences are related by Background) and also explains why he took the aspirin (hence the sentences are related by Explanation as well). (6) Max took an aspirin. He was sick. The sentences in texts (1) and (3) and in (4) and (5) have similar syntax, and therefore similar 2We assume Moens and Steedman's (1988) tripartite structure of events, where an event consists of a prepara- tory phase, a culmination and a consequent phase. 55 logical forms. They indicate, therefore, that the constraints on the use of the above discourse re- lations cannot rely solely on the logical forms of the sentences concerned. No theory at present is able to explain the dis- tinct temporal structures of all the above texts. 
Webber (1988) observes that Kamp & Rohrer (1983), Partee (1984), Hinrichs (1986) and Dowty (1986) don't account for the backwards movement of time in (2) and (3). Webber (1988) can account for the backwards movement of time in (2), but her theory is unable to predict that mismatching the descriptive order of events and their temporal order is allowed in some cases (e.g. (2) and (3)) but not in others (e.g. (1), which would be mis- leading if the situation being described were one where the greeting happened before Max stood up). Our aim is to characterise the circumstances under which each of the above discourse relations hold, and to explain why texts can invoke dif- ferent temporal relations in spite of their similar syntax. Dahlgren (1988) represents the difference be- tween (1) and (3) in terms of probabilistic laws describing world knowledge (WK) and linguistic knowledge (LK). Our approach to the problem is generally sympathetic to hers. But Dahlgren's account lacks an underlying theory of inference. Furthermore, it's not clear how a logical conse- quence relation could be defined upon Dahlgren's representation scheme because the probabilistic laws that need to interact in certain specific ways are not logically related. Unlike Dahlgren (1988), we will supply an inference regime that drives the interpretation of text. The properties required of an inference mech- anism for inferring the causal structure underly- ing text is discussed in (Lascarides & Oberlander, 1991). The work presented here builds on this in two ways; first by supplying the required notion of inference, and second by accounting for discourse structure as well as temporal structure. Temporal Relations and Defeasible Reasoning Let us consider texts (1) and (3) on an intu- itive level. There is a difference in the relation that typically holds between the events being de- scribed. Intuitively, world knowledge (WK) in- eludes a causal 'law' gained from perception and experience that relates falling and pushing: 3 • Causal Law 3 Connected events el where x falls and e2 where y pushes z are normally such that e2 causes el. There is no similar law for standing up and greet- ing. The above law is a de feasible law. Our claim is that it forms the basis for the distinction be- tween (1) and (3), and that defeasible reasoning underlies the temporal interpretation of text. First consider text (1). Intuitively, if there is no temporM information at all gained from WK or syntactic markers (apart from the simple past tense which is the only temporal 'expres- sion' we consider here), then the descriptive order of events provides the only vital clue as to their temporal order, and one assumes that descriptive order matches temporal order. This principle is a re-statement of Grice's (1975) maxim of Man- ner, where it is suggested that text should be or- derly, and it is also motivated by the fact that the author typically describes events in the or- der in which the protagonist perceives them (cf. Dowty (1986)). This maxim of interpretation can be captured by the following two laws: • Narration Unless there's information to the contrary, clauses a and j3 that are discourse-related are such that Narration(a, ~) holds. • Axiom for Narration If Narration(a, fl) holds, and a and fi de- scribe the events el and e2 respectively, then el occurs before e2. Narration is defensible and the Axiom for Narra- tion is indefeasible. 
The idea that Gricean-style pragmatic maxims should be represented as defeasible rules is suggested in (Joshi, Webber & Weischedel (1984)). The above rules can be defined in MASH--a logic for defeasible reasoning described in (Asher & Morreau, 1991). We will demonstrate shortly that an intuitively compelling pattern of defeasible inference can then underlie the interpretation of (1). MASH supplies a modal semantics for a language with a default or generic quantifier, and a dynamic partial semantics of belief states is built on top of this modal semantics to capture intuitively compelling patterns of non-monotonic reasoning. We use a propositional version of MASH here. Defaults are represented as φ > ψ (read as "if φ then ψ, unless there is information to the contrary"). The monotonic component of the theory defines a notion of validity ⊨ that supports axioms such as ⊨ □(φ → ψ) → ((χ > φ) → (χ > ψ)). The dynamic belief theory supplies the nonmonotonic component, and the corresponding nonmonotonic validity, |≈, describes what reasonable entailments follow from the agent's beliefs. |≈ supports (at least) the following patterns of common sense reasoning:

Defeasible Modus Ponens
φ > ψ, φ |≈ ψ, but not φ > ψ, φ, ¬ψ |≈ ψ
e.g. Birds fly, Tweety is a bird |≈ Tweety flies, but not: Birds fly, Tweety is a bird that doesn't fly |≈ Tweety flies.

Penguin Principle
φ > ψ, ψ > χ, φ > ¬χ, φ |≈ ¬χ, but not: φ > ψ, ψ > χ, φ > ¬χ, φ |≈ χ
e.g. Penguins are birds, Birds fly, Penguins don't fly, Tweety is a penguin |≈ Tweety doesn't fly, and does not |≈ Tweety flies.

Nixon Diamond
not (φ > ψ, χ > ¬ψ, φ, χ |≈ ψ (or ¬ψ))
e.g. There is irresolvable conflict in the following: Quakers are pacifists, Republicans are non-pacifists, Nixon is a Quaker and Republican.

We assume a dynamic theory of discourse structure construction in which a discourse structure is built up through the processing of successive clauses in a text. To simplify our exposition, we will assume that the basic constructs of these structures are clauses. (The theory should extend naturally to an account where the basic constructs are segments of text; the approach adopted here is explored extensively in Asher (forthcoming).) Let ⟨α, β⟩ mean that the clause β is to be attached to the clause α with a discourse relation, where α is part of the already built up discourse structure. Let me(α) be a term that refers to the main eventuality described by α (e.g. me(Max stood up) is the event of Max standing up); me(α) is formally defined in Lascarides & Asher (1991) in a way that agrees with intuitions. Then Narration and the axiom on Narration are represented in MASH as follows (e1 ≺ e2 means "e1 wholly occurs before e2"):

• Narration
⟨α, β⟩ > Narration(α, β)

• Axiom on Narration
□(Narration(α, β) → me(α) ≺ me(β))

We assume that in interpreting text the reader believes all LK and WK (and therefore believes Narration and its axiom), the laws of logic, and the sentences in the text. The sentences in (1) are represented in a DRT-type framework as follows:

(7) [e1, t1][t1 < now, hold(e1, t1), standup(m, e1)]
(8) [e2, t2][t2 < now, hold(e2, t2), greet(j, m, e2)]

In words, (7) invokes two discourse referents e1 and t1 (which behave like deictic expressions), where e1 is an event of Max standing up, t1 is a point of time earlier than now, and e1 occurs at it. (8) is similar save that the event e2 describes John greeting Max.
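MASH itself is a modal logic with a dynamic belief semantics, so the following is only a naive propositional sketch that imitates the three entailment patterns just listed; all encoding conventions here are invented for illustration and it is not an implementation of MASH. Before returning to (7) and (8), it may help to see the patterns run:

def _closure(literals, defaults):
    """Forward-chain the defaults from the given literals, ignoring conflicts.
    Used only to compare how specific two antecedents are."""
    known = set(literals)
    changed = True
    while changed:
        changed = False
        for ants, cons in defaults:
            if set(ants) <= known and cons not in known:
                known.add(cons)
                changed = True
    return known

def _negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def defeasible_conclusions(facts, defaults):
    """Rough propositional stand-in for the entailment patterns above (not MASH itself).
    facts: a set of literals ("-p" is the negation of "p"); defaults: (antecedents, consequent)
    pairs read as 'antecedents > consequent'. A default is blocked by contrary facts
    (Defeasible Modus Ponens), loses to a rival with a strictly more specific antecedent
    (Penguin Principle), and is withheld when a conflict cannot be resolved (Nixon Diamond)."""
    known = _closure(facts, defaults)
    applicable = [(set(a), c) for a, c in defaults if set(a) <= known]

    def more_specific(a, b):
        # a is more specific than b if b follows from a via the defaults, but not vice versa
        return b <= _closure(a, defaults) and not a <= _closure(b, defaults)

    conclusions = set(facts)
    for ants, cons in applicable:
        if _negate(cons) in facts:
            continue  # blocked: "a bird that doesn't fly"
        rivals = [ra for ra, rc in applicable if rc == _negate(cons)]
        if any(more_specific(ra, ants) for ra in rivals):
            continue  # Penguin Principle: the more specific rival wins
        if any(not more_specific(ants, ra) for ra in rivals):
            continue  # Nixon Diamond: irresolvable conflict, conclude nothing
        conclusions.add(cons)
    return conclusions

# The three patterns, using the glosses above:
birds = [({"bird"}, "fly")]
tweety = [({"penguin"}, "bird"), ({"bird"}, "fly"), ({"penguin"}, "-fly")]
nixon = [({"quaker"}, "pacifist"), ({"republican"}, "-pacifist")]
print(defeasible_conclusions({"bird"}, birds))                  # Defeasible Modus Ponens: adds "fly"
print(defeasible_conclusions({"bird", "-fly"}, birds))          # blocked: "fly" is not added
print(defeasible_conclusions({"penguin"}, tweety))              # Penguin Principle: adds "-fly", not "fly"
print(defeasible_conclusions({"quaker", "republican"}, nixon))  # Nixon Diamond: neither conclusion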
(7) and (8) place no condi- tions on the relative temporal order between et and e2. These are derived at a higher level of anal- ysis than sentential semantics by using defensible reasoning. Suppose that the reader also believes that the clauses in text (1) are related by some discourse relation, as they must be for the text to be coher- ent. Then the reader's beliefs also include (7, 8). The natural interpretation of (1) is derived by calculating the common sense entailments from the reader's belief state. Given the assumptions on this state that we have just described, the an- tecedent to Narration is verified, and so by Defen- sible Modus Ponens, Narration(7, 8) is inferred. Since the belief states in MASH support modal clo- sure, this result and the Axiom on Narration en- tail that the reader believes the main eventuality of (7), namely el, precedes the main eventuality of (8), namely e2. So the intuitive discourse struc- ture and temporal interpretation of (1) is derived by exploiting defeasible knowledge that expresses a Gricean-style pragmatic maxim. But the analysis of (1) is satisfactory only if the same technique of exploiting defeasible rules can be used to obtain the appropriate natural in- terpretation of (3), which is different from (1) in spite of their similar syntax. eFor the sake of simplicity we ignore the problem of resolving the NP anaphora in (8). The truth definitions of (7) and (8) are llke those given in DRT save that they are evaluated with respect to a possible world index since MASH is modal. 67 (3) a. Max fell. b. John pushed him. As we mentioned before, Causal Law 3 will pro- vide the basis for the distinct interpretations of (1) and (3). The clauses in (3) must be related by a discourse relation for the text to be coherent, and therefore given the meanings of the discourse relations, the events described must be connected somehow. Therefore when considering the do- main of interpreting text, one can re-state the above causal law as follows: 7 Causal Law 3 Clauses a and/3 that are discourse-related where a describes an event el of x falling and/3 describes an event e~ of y pushing x are normally such that e2 causes el. The representation of this in MASH is: Causal Law 3 (a,/3)^f.n(x, me(a))^push(y, x, me(/3)) > ca~se(m~(~), me(a)) This represents a mixture of WK and linguistic knowledge (LK), for it asserts that given the sen- tences are discourse-related somehow, and given the kinds of events that are described by these sentences, the second event described caused the first, if things are normal. The logical forms for (3a) and (3b) are the same as (7) and (8), save that standup and greet are replaced respectively with fall and push. Upon interpreting (3), the reader believes all de- feasible wK and LK together with (3a), (3b) and (3a, 3b). Hence the antecedents to two defeasible laws are satisfied: Narration and Causal Law 3. Moreover, the antecedent of Law 3 entails that of Narration, and the laws conflict because of the axiom on Narration and the axiom that causes precede effects: • Causes Precede Effects [] (Vele2)(cause(el, e2) ~ ~e2 -~ el) The result is a 'Complex' Penguin Principle: it is complex because the consequents of the two defeasible laws are not ~ and -~ff, but instead the laws conflict in virtue of the above axioms. MASH supports the more complex Penguin Principle: ;'This law may seem very 'specific'. It could potentially be generalised, perhaps by re-stating el as x moving and e2 as y applying a force to x. 
For the sake of brevity we ignore this generalisation. • Complex Penguin Principle o(¢ ¢),¢ > x,¢ > ¢, o(x 0), [] (¢ ¢ but not: [] (¢ --* ¢), ¢ > X, ¢ > (, o (x 0), n (¢ -. ¢ x Therefore there is a defeasible inference that the pushing caused the falling from the premises, as required. The use of the discourse relation Explanation is characterised by the following rule: • Explanation (a, A > Explanation(a, jr) In words, if a and f~ are discourse-related and the event described in/3 caused the event described in a, then Explanation(a, ~) normally holds. Fur- thermore, Explanation imposes a certain tempo- ral structure on the events described so that if is a causal explanation of a then fPs event doesn't precede a's: • Axiom on Explanation [] (Explanation(a,/3) -~ -~me(a ) -~ rne(/3 ) ) The antecedent to Narration is verified by the reader's beliefs, and given the results of the Com- plex Penguin Principle above, the antecedent to Explanation is also verified. Moreover, the an- tecedent to Explanation entails that of Narration, and these laws conflict because of the above ax- ioms. So there is another complex Penguin Prin- ciple, from which Explanation(3a, 3b) is inferred. The second application of the Penguin Prin- ciple in the above used the results of the first, but in nonmonotonic reasoning one must be wary of dividing theories into 'subtheories' in this way because adding premises to nonmonotonic deduc- tions does not always preserve conclusions, mak- ing it necessary to look at the theory as a whole. (Lascarides & Asher, 1991) shows that the pred- icates involved in the above deduction are suffi- ciently independent that in MASH one can indeed divide the above into two applications of the Pen- guin Principle to yield inferences from the theory as a whole. Thus our intuitions about the kind of reasoning used in analysing (3) are supported in the logic. We call this double application of the Penguin Principle where the second application uses the results of the first the Cascaded Penguin Principle. s 8On a general level, MASH is designed so that the con- 58 Distinct Discourse Structures Certain constraints are imposed on discourse structure: Let R be Explanation or Elaboration; then the current sentence can be discourse re- lated only to the previous sentence a, to a sen- tence fl such that R(fl, a), or to a sentence 7 such that R(7, fl) and R(~, a). This is a simpler ver- sion of the definition for possible attachment sites in Asher (forthcoming). Pictorially, the possi- ble sites for discourse attachment in the example structure below are those marked open: Open Explana~ lanati°n Closed Open Narration Explanation/// ~xplanation Closed ~ Open Narration There are structural similarities between our notion of openness and Polanyi's (1985). The above constraints on attachment explain the awk- wardness of text (9a-f) because (9c) is not avail- able to (gf) for discourse attachment. (9) a. Guy experienced a lovely evening last night. b. He had a fantastic meal. c. He ate salmon. d. He devoured lots of cheese. e. He won a dancing competition. f. ?He boned the salmon with great ex- pertise. According to the constraint on attachment, the only available sentence for attachment if one were to add a sentence to (1) is John greeted him, whereas in (3), both sentences are available. Thus although the sentences in (1) and (3) were as- signed similar structural semantics, they have very different discourse structures. 
The events they flict between defeasible laws whose antecedents axe such that one of them entails the other is resolvable. Thus un- wanted irresolvable conflicts can be avoided. describe also have different causal structures. These distinctions have been characterised in terms of defeasible rules representing causal laws and prag- matic maxims. We now use this strategy to anal- yse the other texts we mentioned above. Elaboration Consider text (2). (2) a. The council built the bridge. b. The architect drew up the plans. We conclude Elaboration(2a, 2b) in a very sim- ilar way to example (3), save that we replace cause(me(~), me(a)) in the appropriate defensi- ble rules with prep(me(~), me(a)), which means that rne(~) is part of the preparatory phase of me(a). In Law 2 below, Info(a,~) is a gloss for "the event described in a is the council build- ing the bridge, and the event described in fl is the architect drawing up the plans", and the law represents the knowledge that drawing plans and building the bridge, if connected, are normally such that the former is in the preparatory phase of the latter: • Elaboration (a, ^ prep( e( ), me(a)) > Elaboration(a, fl ) • Axiom on Elaboratio~ n (Elaboration(a, -* ne(a) • Law 2 (a,/3) ^ Info(a, > prep(me(Z), ) The inference pattern is a Cascaded Penguin Prin- ciple again. The two resolvable conflicts are Law 2 and Narration and Elaboration and Narration. Background Intuitively, the clauses in (4) are related by Back- ground. (4) Max opened the door. The room was pitch dark. The appropriate reader's belief state verifies the antecedent of Narration. In addition, we claim that the following laws hold: 59 • States Overlap (a, A state(me( )) > overlap(me(a), me( ) ) • Background (a, Z> ^ overlap(me(a), me(Z)) > Background(a, fl ) • Axiom on Background [] (Background(a, overlap(me(a), me(~) ) ) States Overlap ensures that when attached clauses describe an event and state and we have no knowl- edge about how the event and state are connected, gained from WK or syntactic markers like because and therefore, we assume that they temporally overlap. This law can be seen as a manifesta- tion of Grice's Maxim of Relevance as suggested in (Lascarides, 1990): if the start of the state is not indicated by stating what caused it or by in- troducing an appropriate syntactic marker, then by Grice's Maxim of Relevance the starting point, and is irrelevant to the situation being described. So the start of the state must have occurred be- fore the situation that the text is concerned with occurs. As before, we assume that unless there is information to the contrary, the descriptive order of eventualities marks the order of their discovery. This together with the above assumption about where the state starts entail that unless there's information to the contrary, the state temporally overlaps events or states that were described pre- viously, as asserted in States Overlap. We assume that the logical form of the sec- ond clause in (4) entails state(me(~)) by the classification of the predicate dark as stative. So Background is derived from the Cascaded Penguin Principle: the two resolvable conflicts are States Overlap and Narration and Back- ground and Narration. States Overlap and Nar- ration conflict because of the inconsistency of overlap(el,e~) and el -~ e~; Background and Narration conflict because of the axioms for Back- ground and Narration. Result (5) has similar syntax to (4), and yet unlike (4) the event causes the state and the discourse rela- tion is Result. (5) a. 
Max switched off the light. b. The room was pitch dark. Let Info(a,fl) be a gloss for "me(a) is Max switching off the light and me(fl) is the room be- ing dark". So by the stative classification of dark, Info(a, fl) entails state(me(~)). Then Law 5 re- flects the knowledge that the room being dark and switching off the light, if connected, are normally such that the event causes the state: 9 • Causal Law 5 (a,/7) A Info(a,~) > cause(me(a), me(/7)) The use of the discourse relation of Result is char- acterised by the following: • Result (a, )^eause(me( ), > • Axiom on Result D(Result(a,~) --. me(a) ~ me(fl)) The reader's beliefs in analysing (5) verify the an- tecedents of Narration, States Overlap and Law 5. Narration conflicts with States Overlap, which in turn conflicts with Law 5. Moreover, the an- tecedent of Law 5 entails that of States Overlap, which entails that of Narration. So there is a 'Penguin-type' conflict where Law 5 has the most specific antecedent. In MASH Law 5's consequent, i.e. cause(me(ha), me(hb)), is inferred from these premises. The antecedent of Result is thus sat- isfied, but the antecedent to Background is not. Result does not conflict with Narration, and so by Defeasible Modus Ponens, both Result(ha, 5b) and Narration(ha, hb) are inferred. Note that thanks to the axioms on Background and Result and the inconsistency of overlap(el, e~) and el -~ e2, these discourse relations are in- consistent. This captures the intuition that if a causes b, then b could not have been the case when a happened. In particular, if Max switching off the light caused the darkness, then the room could not have been dark when Max switched off the light. Discourse Popping Consider text (9a-e): (9) a. Guy experienced a lovely evening last night. b. He had a fantastic meal. 9For the sake of simplicity, we ignore the problem of inferring that the light is in the room. 60 c. He ate salmon. d. lie devoured lots of cheese. e. He won a dancing competition. The discourse structure for (9a-d) involves Cas- caded Penguin Principles and Defeasible Modus Ponens as before. Use is made of the defeasible knowledge that having a meal is normally part of experiencing a lovely evening, and eating salmon and devouring cheese are normally part of having a meal if these events are connected: Guy experienced a lovely evening last night Elaboration He had a fantastic meal Elabora~-~f~~boration lie ate salmon He devoured Narration lots Of cheese We study the attachment of (9e) to the preced- ing text in detail. Given the concept of openness introduced above, the open clauses are (9d), (95) and (9a). So by the assumptions on text pro- cessing, the reader believes (9d, 9e), (9b, 9e) and (9a, 9e). (9d, 9e) verifies the antecedent to Narra- tion, but intuitively, (9d) is not related to (9e) at all. The reason for this can be explained in words as follows: • (9d) and (9e) don't form a narrative be- cause: - Winning a dance competition is nor- mally not part Of a meal; - So (9e) doesn't normally elaborate (9b); - But since (9d) elaborates (95), (9e) can normally form a narrative with (9d) only if (9e) also elaborates (9b). Thcse intuitions can be formalised, where Info(a, fl) is a. gloss for "me(a) is having a meal and me(fl) is winning a dance competition": * Law 9 (a, ^ I fo( , Z) > prep(me( ), me(.)) • Defeaslbly Necessary Test for Elaboration (a, ^ > -~ Elaboration( a, fl) • Constraint on Narration Elaboration((~, fll )A-~Eiaboration( a, f12 ) > -~ N arration(~t , ~2 ) The result is a 'Nixon Polygon'. 
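The conflict pattern just described can be mimicked in a small, self-contained sketch. To be clear, this is only a propositional approximation of my own devising: the paper's calculations are carried out in the conditional logic CE as implemented in MASH, and the data-structure names and the premise-set inclusion test used below for antecedent specificity are illustrative simplifications, not the authors' system. The sketch encodes Narration and the Constraint on Narration for the attachment of (9e) to (9d) and reports the conflict as irresolvable.

```python
# Toy propositional approximation of the defeasible machinery (not MASH/CE).
# Antecedent entailment is approximated by premise-set inclusion, which is
# enough to distinguish Penguin-style resolvable conflicts from Nixon diamonds.

from dataclasses import dataclass

@dataclass(frozen=True)
class Default:
    name: str
    antecedent: frozenset   # atomic premises, read conjunctively
    consequent: str         # a literal; "-p" is the negation of "p"

def negates(p, q):
    return p == "-" + q or q == "-" + p

def applicable(defaults, facts):
    return [d for d in defaults if d.antecedent <= facts]

def diagnose(defaults, facts):
    """For each pair of applicable defaults with conflicting consequents,
    report whether the conflict is resolvable by specificity (Penguin
    Principle) or is a Nixon diamond."""
    app = applicable(defaults, facts)
    for i, d1 in enumerate(app):
        for d2 in app[i + 1:]:
            if negates(d1.consequent, d2.consequent):
                if d1.antecedent >= d2.antecedent:
                    yield f"Penguin: {d1.name} (more specific) defeats {d2.name}"
                elif d2.antecedent >= d1.antecedent:
                    yield f"Penguin: {d2.name} (more specific) defeats {d1.name}"
                else:
                    yield f"Nixon diamond: {d1.name} vs {d2.name} -- no conclusion"

# Attaching (9e) to (9d): Narration fires from the attachment alone, while the
# Constraint on Narration fires from Elaboration(9b,9d) and -Elaboration(9b,9e).
# (-Elaboration(9b,9e) is itself derived via Law 9 and the Defeasibly Necessary
# Test; here it is simply supplied as a fact.)
defaults = [
    Default("Narration(9d,9e)",
            frozenset({"attached(9d,9e)"}), "Narration(9d,9e)"),
    Default("Constraint on Narration",
            frozenset({"Elaboration(9b,9d)", "-Elaboration(9b,9e)"}),
            "-Narration(9d,9e)"),
]
facts = frozenset({"attached(9d,9e)", "Elaboration(9b,9d)", "-Elaboration(9b,9e)"})
for verdict in diagnose(defaults, facts):
    print(verdict)   # -> Nixon diamond: neither Narration nor its negation
```

Because the two antecedent sets stand in no inclusion relation, the toy check returns the Nixon-diamond verdict, which is exactly the situation the next paragraph exploits for discourse popping.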
There is irre- solvable conflict between Narration and the Con- straint on Narration because their antecedents are not logically related: Narration(9d, 9e) -~Elaboration(9b, 9e) Elaboration(9b, 9e) l (9d,De) (9d, 9e) -~prep(me(9b, 9e)) Elaboration(~ (9d, 9e) Info(9b, 9e) Elaboration(9b, 9d) The above in MASH yields ]i~Narration(9d, 9e) and ~-~Narration(9d, 9e). We assume that be- lieving (9d, 9e) and failing to support any dis- course relation between (9d) and (9e) is inco- herent. So (9d,9e) cannot be believed. Thus the Nixon Diamond provides the key to discourse 'popping', for (9e) must be related to one of the remaining open clauses; i.e. (95) or (9a). In fact by making use of the knowledge that winning a dance competition is normally part of experienc- ing a lovely evening if these things are connected, Elaboration(9a, 9e) and Narration(9b, 9e) follow as before, in agreement with intuitions. Conclusion We have proposed that distinct natural inter- pretations of texts with similar syntax can be ex- plained in terms of defeasible rules that represent 61 causal laws and Gricean-style pragmatic maxims. The distinct discourse relations and event rela- tions arose from intuitively compelling patterns of defeasible entailment. The Penguin Principle captures the intuition that a reader never ignores information salient in text that is relevant to cal- culating temporal and discourse structure. The Nixon Diamond provided the key to 'popping' from subordinate discourse structure. We have investigated the analysis of texts in- volving only the simple past tense, with no other temporal markers present. Lascarides & Asher (1991) show that the strategy pursued here can be applied to the pluperfect as well. Future work will involve extending the theory to handle texts that feature temporal connectives and adverbials. References Asher, Nicholas [forthcoming] Abstract Objects, Semantics and Anaphora. Asher, Nicholas & Morreau, Michael [1991] Common Sense Entailment: A Modal Theory of Nonmonotonic Reasoning, in Carlson, Greg & Pelletier, Jeff (eds.) The Generic Book, Proceed- ings to JELIA90, University of Chicago Press. Dahlgren, Kathleen [1988] Naive Semantics for Natural Language Understanding, Kluwer Aca- demic Publishers; Boston, USA. Dowty, David [1986] The Effects of Aspeetual Class on the Temporal Structure of Discourse: Se- mantics or Pragmatics? Linguistics and Philoso- phy, 9, 37-61. Grice, H. Paul [1975] Logic and Conversation. In Cole, P. and Morgan, J. L. (eds.) Syntaz and Semantics, Volume 3: Speech Acts, pp41-58. New York: Academic Press. Itinrichs, Erhard [1986] Temporal Anaphora in Discourses of English. Linguistics and Philoso- phy, 9, 63-82. Joshi, Aravind, Webber, Bonnie L. & Weischedel, Ralph [1984] Default Reasoning in Interaction. In Proceedings of the Non-Monotonic Reasoning Workshop, AAAI, New York, October, 1984, 144-150. Kamp, Hans [1981] A Theory of Truth and Se- mantic Representation. In Groenendijk, J. A. G., Janssen, T. M. V. and Stokhof, M. B. J. (eds.) Formal Methods in the Study of Language, Vol- ume 136, pp277-322. Amsterdam: Mathematical Centre Tracts. Kamp, Hans & Rohrer, Christian [1983] Tense in Texts. In Bauerle, R., Schwarze, C. and yon Stechow, A. (eds.) Meaning, Use and Interpreta- tion of Language, pp250-269. Berlin: de Gruyter. Lascarides, Alex [1990] Knowledge, Causality and Temporal Representation. Research Report No. HCRC/RP-8, Human Communication Re- search Centre, University of Edinburgh, Edin- burgh, June, 1990. 
Lascarides, Alex & Asher, Nicholas [1991] Discourse Relations and Common Sense Entailment, DYANA deliverable 2.5b, available from Centre for Cognitive Science, University of Edinburgh.
Lascarides, Alex & Oberlander, Jon [1991] Temporal Coherence and Defeasible Knowledge. In Proceedings to the Workshop on Discourse Coherence, Edinburgh, April 1991.
Moens, Marc & Steedman, Mark [1988] Temporal Ontology and Temporal Reference. Computational Linguistics, 14, 15-28.
Partee, Barbara [1984] Nominal and Temporal Anaphora. Linguistics and Philosophy, 7, 243-286.
Polanyi, Livia [1985] A Theory of Discourse Structure and Discourse Coherence. In Eilfort, W. H., Kroeber, P. D. and Peterson, K. L. (eds.) Papers from the General Session at the Twenty-First Regional Meeting of the Chicago Linguistics Society, Chicago, April 25-27, 1985.
Scha, Remko & Polanyi, Livia [1988] An Augmented Context Free Grammar. In Proceedings of the 12th International Conference on Computational Linguistics and the 24th Annual Meeting of the Association for Computational Linguistics, Budapest, Hungary, 22-27 August, 1988, 573-577.
Thompson, Sandra A. & Mann, William C. [1987] Rhetorical Structure Theory: A Framework for the Analysis of Texts. International Pragmatics Association Papers in Pragmatics, 1, 79-105.
Webber, Bonnie [1988] Tense as Discourse Anaphor. Computational Linguistics, 14, 61-73.
SOME FACTS ABOUT CENTERS, INDEXICALS, AND DEMONSTRATIVES Rebecca J. Passonneau Columbia University 450 Computer Science Bldg. New York, New York 10027, U.S.A [email protected] ABSTRACT 1 Certain pronoun contexts are argued to establish a local center (LC), i.e., a conventionalized indexical similar to lst/2nd pers. pronouns. Demonstrative pronouns, also indexicals, are shown to access en- tities that are not LCs because they lack discourse relevance or because they are not yet in the uni- verse of discourse. 1 Introduction Referring expressions in discourse are multifunc- tional and dual-faced. Besides functioning to spec- ify referents, they also indicate the status of their referents in the evolving discourse model, such as the informational status of being given or new [Pri81], or maintain the attentional status of be- ing in focus [Sid83] [Gro77]. They are dual-faced in that the surface form of a referring expression is constrained by the prior discourse context, and then increments the context, serving to constrain subsequent utterances [Isa75]. As a consequence of this latter property, the communicative effect of many referring forms, especially pronouns, is rel- ative to specific types of discourse contexts. The discourse reference functions of a few types of pro- nouns are examined, taking into account their mul- tifunctionality and their dual nature, in order to clarify their processing requirements in dialogic natural language systems. In particular, a compar- ison of the conversational usage of it with two types of indexical pronouns indicates that certain uses of it, referred to as local centering, resemble what Ka- plan [Kap89] refers to as pure indexicals. Several functions of lhat are also identified and shown to contrast with local centering with respect to their preconditions and effects. Third person, definite (3d) pronouns contrast with indexical pronouns because the referents of the former are arbitrary, and must be actively es- tablished as part of the current universe of dis- course in order for the intended referent to be 1 This paper was written under the support of DARPA grant N000039-84-C-0165 and NSF grant IRT-84-51438. I am grateful to Kathy McKeown for her generous support. identified. In contrast, the referents of index- icals such as I and you (i.e., the speaker and addressee) are necessary components of the dis- course circumstances. 2 Indexical pronouns can be further classified into pure indexicals versus demonstratives, 8 depending on how the current dis- course circumstances provide their referents. The referent of a pure indexical is fully determined by the semantic rules and a context, which together pick out a unique referent for each use. Thus I refers to the person who utters it (assuming that I is used to refer). A pure indexical cannot refer to alternative entities, nor can any other expression pick out the relevant entity via the same type of referring function. Pure indexicals do not add en- tities to a context, or change the attentional status of their referents. In contrast, the referent of a demonstrative pro- noun is not completely determined by the context plus the semantic rules. An accompanying demon- stration is required, such as a physical or vocal gesture to something in the immediate discourse circumstances. Further, demonstratives can refer to anything in the context that can be demon- strated. In the cases of discourse deixis discussed by Webber [Web90], e.g., demonstrative pronouns are used to refer to discourse segments. 
Webber notes that in these cases, the demonstration consists in the intention to refer signalled by the use of the demonstrative, plus the semantics of the contain- ing clause, plus attentional constraints on which discourse segments can be demonstrated. 4 Thus, 3d pronouns, pure indexical pronouns, and demon- stratives all differ with respect to the set of con- textual elements that are available referents, and the manner in which the referent is related to the referring expression. Investigating their distinct discourse functions leads to extensions to the tri- 2The term indexical includes devices whose meaning per- talns to the time, the place and the perceived environment of a discourse context, e.g., tense, deictic adverbs (here, there) and deictic pronouns (this, that) [Pei35]. 3The view of indexicals presented here is largely drawn from Kaplan [Kap89]. 4Webber [Weh90] argues that only segments on the right frontier are available referents. 63 partite discourse model of attentional state, inten- tional structure and segmental structure proposed by Grosz and Sidner [GS86]. 5 The data presented here come from a set of dia- logic interviews, originally described in [Sch85] (cf. also [PasS9]). The methodology, fully described in [Pas90], primarily involves the examination of lin- guistic choices that are in principle independent, but which are found to co-vary significantly of- ten. Such co-variation is presumed to serve commu- nicative functions that discourse processing models need to replicate and explain. It should be remem- bered that the patterns of co-variation •described here represent pragmatically significant usage pat- terns, rather than obligatory ones. 2 Local Center In previous work, I presented the results of an anal- ysis of the distribution of occurrences of it and that having explicit antecedents in conversational data from 4 interviews, involving 5 different speak- ers (g = 678) [Pas89]. The two pronouns have similar syntactic contexts of occurrence thus dif- ferences in their distribution are pragmatic in na- ture, and stem primarily from the semantic con- trast of demonstrativity with non-demonstrativity. Previously, I had noted that the data supported the centering rule (CR) [GJW83] and the property sharing principle (PSP) [Kam86]. A review of the assumptions of the centering model, and of the con- versational data, argues for an alternative view. In this section I reinterpret the results as establishing a distinct attentional state, local center. I explain the two property sharing patterns of Kameyama's PSP (subject and non-subject, [Kam86]) with re- spect to local center, and discuss the similarity be- tween local centers and pure indexicals. Finally, I discuss the relation • of local centering to intentional and segmental structure. According to the centering model, every utter- ance has a backward-looking center---the currently most salient entity, but it need not be overtly men- tioned in the current utterance [GJW83]. The cen- tering rule (CK) [GJW83], in combination with the property-sharing principle (PSP) [Kam86], predict certain preferred surface choices for maintaining the backward-looking center (Cb). The CR says that when the same Cb is maintained in a new ut- terance, it is likely to be expressed by ;a (3d) pro- noun [GJW83]. The PSP says that when 3d pro- nouns realize the Cb in adjacent utterances, they . 5 The term segmental structure is used in place of their linguistic structure. FA and GR Lex. 
Choice and Gr of N2 of N1 SUB I Non-SUB I that it I that it l Cell No. 1 2 3 147 31 39 19 ProsvB 96.0 48.7 48.7 42.4 27.1 6.4 1.9 12.9 Cell No. 5 6 7 8 37 21 34 14 Pro,~on-SvB 43.1 21.9 21.9 19.1 .9 .0 6.7 1.3 Cell No. 9 10 11 1P 18 6 11 10 NPsuB 18.3 9.3 9.3 8.1 .0 1.1 .3 .1 Cell No. 13 14 15 16 43 33 36 45 NP,~o.-SUB 63.9 32.4 32.4 28.2 6.8 .0 .4 I0.0 Cell No, 17 18 19 20 8 5 1 1 OTHsuB 6.1 3.1 3.1 2.7 .6 1.2 1.4 1.1 Cell No. PI PP 23 2,~ 23 44 19 33 OTH,on-SvB 48.4 24.6 24.6 21.4 13.3 15.3 1.3 6.3 Table x-Square 116.3 Probability 0.00001 Table 1: Effects of form and grammatical role of antecedents on pronoun choice, with observed fre- quency, expected frequency, and x-squares for each cell (individual cells are numbered for convenient • reference) should both be subjects (canonical center reten- tion) or both not subjects (non-canonical center retention) [Kam86]. Given that the Cb can poten- tially be realized in non-preferred ways, that the Cb may change, or that it may be unexpressed, Cb has many possible surface realizations within a lo- cal discourse context of two s-adjacent utterances. 6 The distinct effects of alternate realizations of Cb on segmental structure and intentional structure have not been explored. Also, since the centering model focusses on 3d pronouns, no claims are made regarding the relation of indexical pronouns to the discourse model. The empirical results presented in [Pas89] showed that two features of the utterances contain- ing a pronoun and its antecedent were extremely 6I usethe somewhat awkward term s-adjacen$ to connote adjacency with respect to a containing segment, an impor- tant aspect of the Grosz-Sidner model; thus two s-adjacent utterances need not be literally adjacent. 64 predictive of lexical choice between it and that: the form of the antecedent (FA), and the grammati- cal role (GR) of both expressions. The best clas- sifications were where FA had the three values-- pronominal antecedent (PRO), full NP antecedent (NP), and other (OTH)--and where GR had the two values---subject (SUB) and non-subject (non- SUB). No other classifications of FA or GR were as predictive/ It is crucial to note that these classi- fications were the minimal set that still preserved the distinctiveness of the distributions. Seven other surface features had previously been found to be non-predictive [Sch85]. s Table 1 shows a very strong correlation (p -- .01%) 9 between the form and GR of the antecendent (N1) and the lexical choice and GR of the co-specifying pronoun (N2). Exactly 2 contexts selected for it, as shown by the combination of the high cell X2s, and the low val- ues for expected frequency, which together indi- cate that the observed frequency was significantly high. These 2 contexts were where the antecedent was PRO and where both expressions maintained the same GR value (cells 1, 7; PROGR, by itaR~). Of these 2, the more significant context, and in- deed the most significant context in the whole ta- ble, was where the antecedent was PROGRsvv (cell X 2 = 27.1). This is also the context type where the demonstrative was predicted not to occur (i.e., where the antecedent was PROscrBj; cells 2,4), indicating a functional opposition between it and that. l° Most of the cases of the PRO antecedents consisted of occurrences of it (65%), indicating that N1 and N2 often have the same form: it. Previ- ously unreported data bear on the likelihood that adjacent tokens of it will co-specify. 
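Because Table 1 is hard to read in this extraction, it is worth noting that the per-cell contributions can be checked directly from the reported observed and expected frequencies; the sketch below does this for cell 1 (observed 147, expected 96.0, reported chi-square contribution 27.1). The helper functions and the marginal-based reconstruction of expected counts are generic illustrations, not the original analysis scripts, and the full 6-by-4 layout is not reproduced here because it is garbled in the extracted table.

```python
# Quick check of the per-cell chi-square contributions reported in Table 1.
# Only the statistic itself is from the paper; helper names are assumptions.

def cell_chi_square(observed: float, expected: float) -> float:
    """Contribution of a single cell: (O - E)^2 / E."""
    return (observed - expected) ** 2 / expected

def expected_counts(table):
    """Expected frequencies under independence, from row/column marginals.
    This is how values like 96.0 would be derived from the full table;
    it is not invoked here because the table layout is garbled above."""
    n = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    return [[r * c / n for c in col_totals] for r in row_totals]

# Cell 1 (PRO_SUBJ antecedent, it_SUBJ pronoun): the paper reports 27.1.
print(round(cell_chi_square(147, 96.0), 1))   # -> 27.1

# Summing such contributions over all 24 cells gives the table statistic
# reported as 116.3 (p = 0.00001).
```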
An analy- sis of all adjacent utterance pairs where each con- tained at least one token of referential it revealed that 30% were cases where both were subjects, of which 90% co-specified. In contrast, it occurred with opposing GR values 20% of the time, with comparatively fewer instances where both tokens co-specified (65%). In sum, the data show that given ar~ occurrence of it with an antecedent, the antecedent is likely rCf. [Sch85] [Sch84] for how it was determined that these were the maximally predictive classifications. sViz., speaker alternation, clause type (main or subor- dinate), parallelism, and various measures of distance be- tween pronoun and antecedent (e.g., intervening utterances, intervening referents, syntactic depth). Note also that no significant variation with respect to FA and (lit was found across individual speakers or conversations. 9Note that a probability of 5~ or less is generally taken to be higtfly significant. 10 The remaining 4 of the 8 PRO antecedent contexts were non-predictive. to be it, the GR of both expressions is likely to be SUB, and in either case (SUB or non-SUB), they will have the same GR value. The opposing GR pattern is not predictive (where GR of N1 is not the same as GR of N2). Nor is it predicted to oc- cur with an antecedent NP, and is predicted not to occur with an antecedent OTH. The 2 contexts singled out here indicate that it is a likely form for re-referring to a known, given entity--because the antecedent is PRO. Conversely, successive occur- rences of it in Ui and Ui+I generally co-specify if they have the same GR. The entity referred to by it in these two patterns is called a local center. The following local center establishment (LCE) rule en- capsulates how a local center is anticipated and maintained, both for discourse understanding (.4) and generation (B). A: Recognizing a Local Center: Two s- adjacent utterances U1 and U2 establish en- tity £ as a local center only if U1 contains a 3ds pronoun N1 referring to g, U2 contains a co-specifying 3ds pronoun N2, and N1 and N2 are both subjects or both non-subjects. In the canonical case, both are subjects. B: Generating a Local Center: To estab- lish g as a local center in a pair of s-adjacent utterances U1 and U2, use an expression of type N to refer to g in both utterances where each token, N1 and N2, is a 3ds pronoun, and each is the subject of its clause or each is not the subject of its clause. In the canonical case, both should be subjects. (Precondition: To establish an entity ,~ as a local center, C must be in the current focus space, and it must be possible to refer to it with a 3ds pro- noun.) Recall from §1 that the process of interpreting a pure index requires no search or inference, but depends only on how the discourse circumstances are currently construed. The semantic value of a pure index is a contextual attribute--e.g., current speaker--that must have a particular referential value whenever an utterance occurs. In many ways, a pronoun fulfilling the LC function is like a pure in- dex. Discourse initially, there is no LC, because the LCE rule depends minimally on an utterance pair. But for any utterance pair where the LCE rule has applied, there will be a discourse entity--a com- ponent of the speech situation--that is by default indexed to the use of subsequent referring expres- sions with the right lexico-grammatical properties. 
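Part A of the LCE rule is simple enough to state operationally, and a minimal sketch is given below. The dataclasses, field names, and the constructed example pair are illustrative assumptions rather than material from the interview corpus; only the rule itself, namely a pair of co-specifying 3ds pronouns that agree in subjecthood across s-adjacent utterances, comes from the statement above.

```python
# A minimal sketch of the recognition half (A) of the LCE rule.
# The utterance representation is an assumption of this sketch.

from dataclasses import dataclass

@dataclass
class PronounToken:
    referent: str        # discourse entity the pronoun co-specifies
    third_sing: bool     # third person, singular, definite ("3ds")
    is_subject: bool     # surface grammatical subject of its clause

@dataclass
class Utterance:
    pronouns: list

def local_center(u1: Utterance, u2: Utterance):
    """Return the entity established as local center by the s-adjacent
    pair (u1, u2), or None if the LCE rule does not apply."""
    for n1 in u1.pronouns:
        for n2 in u2.pronouns:
            if (n1.third_sing and n2.third_sing
                    and n1.referent == n2.referent
                    and n1.is_subject == n2.is_subject):
                return n1.referent
    return None

# A constructed, hypothetical pair in the canonical pattern: both utterances
# contain a subject token of "it" co-specifying the same entity.
u1 = Utterance([PronounToken("course-x", True, True)])
u2 = Utterance([PronounToken("course-x", True, True)])
print(local_center(u1, u2))   # -> course-x
```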
An LC conforms to the characteristics of a pure in- dexical in that it becomes established as a transient attribute of the speech situation analogous to the essential attribute current speaker. The relation of the referent to the referring expression is one- 65 to-one; no other referents are candidate LCs, and no other form can access the LC. The processing mechanism for interpreting subsequent expressions conforming to the LCE rule is highly constrained. It is analogous to, although not identical with, that for pure indexicals. The difference is that the lo- cal center is not lexicalized, but rather, must be established and maintained by certain conventions of usage. CPs can choose not to establish a LC, or can choose not to maintain it. 11 Kameyama [Kam86] proposed canonical and non-canonical property sharing patterns, but did not discuss what governs the choice between them. Here it is suggested that the non-canonical LC pat- tern, illustrated in 1), results from the interac- tion of two distinct pragmatic effects. In the non- canonical LC contexts, where the LC was realized by non-SUB, the grammatical subjects were most often 1st or 2nd person pronouns. 12 This data con- forms to an empirically supported proposal made by Givon and others [Giv76] [Li76] that preferred subjects are animate rather than inanimate, defi- nite rather than indefinite, pronouns rather than full NPs, and 1st or 2nd person rather than 3rd person, due to the facts that in English, gram- matical subjects often express discourse topics (cf. also IF J84]), that people prefer to talk about them- selves and other people, and that discourse topics are given. The interviews examined here were in- tentionaUy biased towards the discussion of non- animate entities, is But Givon's subject hierarchy predicts that, given a non-animate and an animate entity in a single utterance, the latter will more of- ten occur as the subject. Since every matrix-clause utterance can have only one subject, there is po- tential competition for the subject role. The data show that when SUB, reserved by the LCE rule for establishing a local center, is pre-empted by a Ist or 2nd person pronoun, it is still possible for LC to be realized by alternate means, namely by sharing of non-SUB. Thus the sharing of the GR value across utterances is a defining characteristic, as noted by Kameyama [Kam86]. The non-canonical LC con- 11 That CPs often do maintain an LC is borne out by data pertaining to cohesive chains, a succession of utterance pairs in which every utterance contains a co-speclfying pronoun token. There were 101 cohesive chains in the interview data, ranging in length from 2 to 13 successive utterances, con- talning 506 pronouns, the majority of which involved LC contexts; cf. [Pas90]. 12The two next most likely possibilities were an sallmate full NP, or a non-referential pleonastic element, e.g., existen- tial there. After that, there was a very small heterogeneous category. Note: subject always refers to a surface grammat- ical function. 13E.g., college courses, degree requirements, career op- tions, resttm~s, and so on. figuration results from an interaction between two separate organizing forces: the LC status of the 3d pronoun referent, and the attentional prominence of the speaker and hearer. 
(1) Sla: Slb: S1¢: Sxd: Sle: S2a: S2b: $3 : I don't have the mental capacity to handle uh what I would like to teach which'd be philosophy or history at U of C uh with that level students um maybe with time and experience I'll gain it but I don't have it now In example 1), the utterance pair $2 and $3 share a 1st person subject and a non-canonical lo- cal center. 14 In this case, the centering model can- not provide a principled answer to the question of whether the speaker--the grammatical subject-- or the speaker's 'mental capacity'--referred to by successive 3d pronoun direct objects--is the cur- rent center. In the model proposed here, $2 and S~ establish 'mental capacity' as a local center, an at- tentional status for regulating the generation and production of 3ds pronouns, and the question of which entity is more salient does not arise. But lo- cal centering does seem to have a secondary func- tion pertaining to the linkage between utterances at the level of intentional and segmental structure. In addition to sharing a default referent, clauses containing LC pronouns are often semantically alike in other ways. In an initial attempt to test for this similarity, utterance pairs with PRO an- tecedents were classified into those that did and did not conform to the LCE rule. No other con- texts were examined because contexts with OTH and NP antecedents were presumed to be even less like LC contexts. These utterance pairs were sorted into cases where the lexical root of the matrix verb in both clauses was identical (i.e., the verb of which the pronoun was an argument), but where the ut- terances were not verbatim repetitions. 15 The re- sults were that 30% of the LC contexts had the same verb, but only 11% of contexts differing from LC in that N2 was that instead of it. None of the contexts with opposing GI~ values for the two pronouns had the same verb, which is not surpris- ing given that for most verbs, each argument po- sition has a very distinct semantic role. In sum, by maintaining an LC and the same lexical verb, 14In interview excerpts, S is the student and C the coun- selor. Feedback cues from the addressee indicating contin- ued attention (e.g., uhhuh) have been omitted. 15In copular clauses, the be-complements were compared instead of the verb; ellipsis was counted as identity. the speaker continues to predicate the same type of information about the same entity. This pre- sumably serves as a cue that the speaker main- tains a common Discourse Segment Purpose (DSP, [GS86]) throughout both utterances--to convey in- formation about the local center with respect to the state of affairs conveyed by the verb. Insofar as local centering pertains to segment continuation, or to relating a new utterance to the DSP of a preceding utterance, a discourse plan to continue the current DSP need not refer directly to the sur- face grammatical choices which reflect that plan, but only to the current status of LC. If there is a current LC, then maintaining it would reflect the speaker's intention for the next utterance to con- tinue the same DSP as the prior utterance. The data assembled here indicate that local cen- tering not only constrains the interpretation of cer- tain pronouns, but also conveys the inter-utterance relevance of locally centered entities in a larger dis- course segment, or in the discourse as a whole. However, most of the entities referred to in the con- texts represented in Table I are not LCs. 
Logically, that means they can fall into several classes: e.g., entities that are former or potential LCs because they are in the universe of discourse and are rele- vant to a former or future DSP; entities that are in the universe of discourse but are not LCs be- cause they are peripheral to the current DSP; and finally, entities that are not yet in the universe of discourse. The next section will illustrate how the demonstrative picks out entities in the latter two classes. 3 New Entities, Anti-centers, and Non Entities The results presented in the preceding section in: dicate that referential it has different discourse ef- fects, depending on its grammatical role, and on various properties of its antecedent, which in turn depend on the status of the referent in the discourse context. Just as local centering is only one dis- course referring function that it participates in, it will be seen that there are several referring func- tions the demonstrative participates in, each with distinct preconditions and effects. Although pro- nouns are often thought of as identifying topical entities, that is not necessarily the case. English has a relatively impoverished inventory of pronouns in comparison to the Bantu language Chich~wa, which has two sets of definite pronouns, one of which is morphologically incorporated into the verb stem, and the other of which consists of indepen- NP Antecedent IT THAT Given 78 17 Not Given 31 71 Probability " ] , .0001 Table 2: Givenness and Lexical Choice of Pronoun dent morphemes IBM87]. is In their analysis of Chich~wa, Bresnan and Mchombo argue that of the two non-argument grammatical roles in LFG, WOP(ic) and fOC(us), the independent pronouns can only fill the FOC role, not TOP [BM87]. In their framework, no expression can simultaneously be TOP and FOC. x7 This is reminiscent of the pragmatic contrast in English between it and that in focus-marking constructions, as illustrated in 2a- b) below. That is acceptable, while it is not, as a syntactically focussed element: (2) a. That/*It I bought for my mother, but I could get another one for you. b. Pepper is okay, but don't add more curry. It's ?that/*it that makes me sneeze. These examples are compatible with the conver- sational data in the following way. If TOP and FOC are truly contrastive grammatical functions, the above examples show that that is more accept- able as FOC. We have seen that it is less likely when the antecedent is NP or OTH than when it is PRO, that it occurs often as SUB, and often with SUB antecedents. Thus it, whether fullfill- ing LC or not, correlates with other properties of discourse topics. An entity that has been referred to by an antecedent pronoun has already been lo- cated in the universe of discourse, and already has the informational status of given prior to the oc- currence of the pronoun itself, and thus is a likely topic. We have also seen that that is unlikely with PROsvB antecedents, which would correlate with a presumed likelihood for that to not express TOP. But further evidence regarding the informational and attentional status of the likely referents of that reinforces the presumed TOP/FOC contrast. The first case we'll look at involves NP an- tecedents. Table 2 shows the distribution of an- tecedent NPs, classifed according to whether they were given or not, by lexical choice of it or that. A referent was classified as given if it had been 16In addition, there is a separate set of demonstrative pronouns. 
17More specifically, not at the same level of LFG func- tional clause structure. 67 mentioned previously, if it was closely associated with a previously mentioned entity (e.g., social worker and the social work profession), or if it was a commonly known individual entity whose iden- tity would would be known to either speaker (e.g., places such as New York City). The very low prob- ability for the X 2 of Table 2 (p = .01%) indicates that the tendency for that to occur with new an- tecedents and for it to occur with given antecedents is extremely significant. Further classifying the lo- cal utterance contexts by GR in various ways did not reveal any further significant distinctions. This result, while not counter-intuitive, is not one that would be obvious without looking at frequency dis- tributions in actual on-line discourse, since it can easily and naturally be used to co-specify with a new antecedent, or that with a given antecedent. Some examples from the interviews are shown in 3-4) with the relevant pronoun token and its NP antecedent in boldface. They have been particu- larly selected to show that the occurring pronoun can be felicitously replaced with the opposite choice (shown in parentheses). (3) Cla: Clb: Clc: C~ : Ca : (4) it is the service that you give to other people be it as a doctor or a social worker a psychiatrist or a lawyer you have a certain expertise and people use that (it) C1 " C2a: Cab: Ca : C4 : I know we've had information about it and uh if not you can a- just write directly to Bryn Mawr and ask them about the p~ogrRm and see if they still have it (that) One way to interpret these results is that a single reference to a new entity is insufficient to establish the entity as part of the universe of discourse, given the processing demands of actual on-line discourse. In the cases where an entity is already given, but is referred to by a full NP rather than a pronoun (for whatever reason), the entity can be successfully reinvoked in the immediately following utterance by a 3ds pronoun. If the entity is new, a single prior mention is not in general sufficient, with respect to these data, to predispose the use of a 3ds pronoun to reinvoke it. Instead, the demonstrative functions to incorporate these new entities into the context. The demonstrative has another singular func- tion with NP antecedents. Table 1 singles out 2 significant contexts where there was a full NP NP Ant. Relevant Not Rel. NPsvB/ITsvB 7 11 NPnon-SVB/IT .... sub 17 21 NPx/ITx 31 22 NPsuB/THATxuB 2 3 N P .... suB/TH AT .... sub 3:t 9 NPx/ITx 23 17 Table X ~ 14 Probability .016 Table 3: Subsequent Discourse Relevance antecedent (cells 13, 16). If the antecedent was an NPnonstrBs, there was an increased likelihood for thatnonstrns and a decreased likelihood for itst~B.r. Because itsunJ is the canonical indi- cator of LC, and because LCs are presumed to have discourse relevance (i.e., play a central role in the current DSP), I hypothesized that the link- age between an antecedent NPnonSUBJ and a co- specifying thatnonSUBJ served to mark the referent as being unlike a local center by being peripheral to the current DSP. This was tested by examin- ing how often an entity mentioned in the NP con- texts was mentioned later in the discourse. Table 3 depicts the contexts in which an antecedent NP was followed by it or that, where GR for each was SUB or non-SUB, or where the GR values differed (X). 
These 6 contexts were coded for whether the referent was referred to again within the 10 utter- ances following the utterance containing the pro- noun. If so, the entity was coded as relevant; else it was non-relevant. The low probability of 1.6% indicates a significant correlation. The 2 cells con- tributing the most to the overall significance were for the NPno~-StrB/THATno,,-strB context, with non-relevant entities occurring significantly often, and relevant entities occurring significantly rarely. This evidence supports the view that the features of this context function to re-invoke entities while simultaneously signalling their peripheral status. The final referring function discussed here is where the demonstrative has an OTH antecedent. When N1 is OTHnonSUB (contexts 21-24), itSUB is unlikely (context 21), and both cases of thatsuB (context 22) and thatno,,-SVB (context 24) are sig- nificantly frequent. I will argue that these OTH contexts exemplify intra-textual deixis, which is analogous to the cases of discourse deixis stud- ied by Webber [Web90]. I refer to these cases as intra-textual deixis because the deictic refer- ence involves referents related to grammatical con- stituents rather than to discourse segments. In previous work, I pointed out that the criti- cal feature of the antecedent type which favors the lexical choice of that is syntactic, namely the dis- tinction between NPs with lexical noun heads and other types of constituents [Sch84]. Contexts where N1 is an NP whose head is a derived nominaliza- tion (such as the careful choice of one's words)pat- tern like those where the head is a lexical noun. Is Gerundives fall into the OTH class. Unlike NPs, the OTH antecedents cannot be marked for def- initeness: *a/*the carefully choosing one's words versus a/the careful choice of words. Definiteness is one of the means for indicating whether a refer- ent is presupposed to be part of the current context. Thus a possible difference between the interpreta- tion of the two types of phrases carefully choosing one's words and a careful choice of words would have to do with whether there is a discourse entity in the context as a consequence of the occurrence of the phrase itself. (5) Via: C:~a : C2b: Csa: Csb: C4 : there are some books that we have that talk about interviewing um one's called Sweaty Palms which I think is a great title (laugh) um but it talks very interestingly about how to go about interviewing and that's that's going to be important Another feature of OTH antecedents pertains to their ability to evoke specific entities into the uni- verse of discourse. Compare the two pronouns in example 5). The token of it in C~a unambiguously refers to the one book called Sweaty Palms. The referent of that in C4 is much harder to pin down. Does it correspond to interviewing, or to how to go about interviewing? This example illustrates an inherent vagueness in the processing of finding a textual referent for a demonstrative which I will now describe in more detail. Webber [Web90] notes that deictic reference is inherently ambiguous, although I prefer the term vague, in that vagueness connotes an underspeci- fled interpretation that can be given a number of more specific readings. Webber argues persuasively that deictic reference to a discourse segment is re- stricted to references to open segments on the right frontier, but that there is still an ambiguity as to which segment might be referred to, due to the recursive nature of discourse segmentation. 
Since an open segment on the right frontier may contain lSMixed nominals, such as the careful choosing of one's words, occurred too rarely to have a discriminating effect on contexts favoring it or that. within it an embedded open segment that is also on the right frontier, a token of the demonstrative that refers to a discourse segment can be ambiguous be- tween a more inclusive segment and a less inclusive one [Web90]. The vagueness may be eliminated if the context in which the deictic expression occurs clearly selects one of the possible readings. This phenomenon pertaining to deictic reference to seg- ments is replicated in the cases where that has an OTH antecedent, thus in C4 of 5), the antecedent of the demonstrative pronoun could be interviewing, or the more inclusive expression go about interview- ing, or the more inclusive one yet how to go about interviewing. I will now argue that such expres- sions do not in and of themselves introduce entities into the universe of discourse. (6) UI: V2: Us: I noticed that Carol insisted on sewing her dressesk from non-synthetic fabric. That's an example of how observant I am. And theyk always turn out beautifully. (7) UI: V2: Us: I noticed that Caroli insisted on sewing her dresses from non-synthetic fabric. That's an example of how observant I am. *That's because shei's allergic to synthetics. (8) UI: V2: U3: I noticed that Carol/ insisted on sewing her dresses from non-synthetic fabric. She/should try the new rayon challis. *That's because she's allergic to synthetics. The examples in 6)-8) show that entities intro- duced by referential NPs in U1 are still available for pronominal reference in Us, after an intervening U2. Ux introduces the referring expressions Carol and her dresses. Example 6) shows that the refer- ent of her dresses is still available in U 3 even though it is not mentioned in U2. Instead, Us contains a pronoun that refers to the fact that is asserted by the whole utterance U1. In contrast, the refer- ent of the non-nominal sentence constituent--Carol insisted on sewing her dresses from non-synthetic fabric--is not available after an intervening sen- tence that contains a deictic reference to a differ- ent non-nominal constituent, as in 7), or after an intervening sentence that contains a reference to a discourse entity mentioned in U1, as in 8). The preceding examples show that OTH con- stituents do not introduce entities into the dis- course context. With such antecedents, the demon- 69 strative does not access a pre-existing discourse en- tity, but rather, plays a role in a referring function by virtue of which a new discourse entity is added to the context. The occurrence of the demonstra- tive triggers a referring function that is constrained by the semantics of the demonstrative pronoun and its local semantic context, the antecedent, and other contextual considerations. The result of ap- plying an appropriate referring function is to in- crement the context with the new discourse entity that is found to be the referent of the demonstra- tive pronoun. This investigation has shown that a pronoun does not achieve discourse reference in and of it- self. In combination with various linguistic prop- erties of the prior utterance, and depending on the status of the referent in the context, a pronoun may have distinct referring functions. 
Although this investigation has focussed primarily on non-animate pronouns, future research is expected to show that elements of the contrast between it and that occur with animate 3d pronouns (e.g., he, she) since these pronouns have both demonstrative and non-demonstrative uses.
References
[BM87] Joan Bresnan and Sam A. Mchombo. Topic, pronoun and agreement in Chichewa. In M. Iida, S. Wechsler, and D. Zec, editors, Working Papers in Grammatical Theory and Discourse Structure, pages 1-60. CSLI, 1987.
[FJ84] William A. Foley and Robert D. Van Valin Jr. Functional Syntax and Universal Grammar. Cambridge University Press, Cambridge, 1984.
[Giv76] Talmy Givon. Topic, pronoun, and grammatical agreement. In Charles N. Li, editor, Subject and Topic, pages 149-188. Academic Press, New York, 1976.
[GJW83] Barbara J. Grosz, A. K. Joshi, and S. Weinstein. Providing a unified account of definite noun phrases in discourse. In Proceedings of the 21st ACL, pages 44-50, 1983.
[Gro77] Barbara Grosz. The Representation and Use of Focus in Dialogue Understanding. PhD thesis, University of California, Berkeley, 1977.
[GS86] Barbara J. Grosz and Candace L. Sidner. Attention, intentions and the structure of discourse. Computational Linguistics, 12:175-204, 1986.
[Isa75] S. Isard. Changing the context. In E. L. Keenan, editor, Formal Semantics of Natural Language, pages 287-296. Cambridge U. Press, Cambridge, 1975.
[Kam86] Megumi Kameyama. A property-sharing constraint in centering. In Proceedings of the 24th Annual Meeting of the ACL, pages 200-206, 1986.
[Kap89] David Kaplan. Demonstratives. In J. Almog, J. Perry, and H. Wettstein, editors, Themes from Kaplan, pages 481-566. Oxford University Press, New York, 1989.
[Li76] Charles N. Li. Subject and Topic. Academic Press, New York, 1976.
[Pas89] Rebecca J. Passonneau. Getting at discourse referents. In Proceedings of the 27th Annual Meeting of the ACL, pages 51-59, 1989.
[Pas90] Rebecca J. Passonneau. Getting and keeping the center of attention. In R. Weischedel and M. Bates, editors, Challenges in Natural Language Processing. Cambridge University Press, 1990. To appear; also available as Tech. Report CUCS-060-90, Dept. of Computer Science, Columbia University.
[Pei35] Charles S. Peirce. In C. Hartshorne and P. Weiss, editors, Collected Papers of Charles Sanders Peirce. Harvard University Press, Cambridge, MA, 1931-35.
[Pri81] Ellen Prince. Towards a taxonomy of given-new information. In P. Cole, editor, Radical Pragmatics, pages 223-55. Academic Press, New York, 1981.
[Sch84] Rebecca J. (Passonneau) Schiffman. The two nominal anaphors it and that. In Proceedings of the 20th Regional Meeting of the Chicago Linguistic Society, pages 322-357, 1984.
[Sch85] Rebecca J. (Passonneau) Schiffman. Discourse Constraints on it and that: A Study of Language Use in Career-Counseling Interviews. PhD thesis, University of Chicago, 1985.
[Sid83] Candace L. Sidner. Focusing in the comprehension of definite anaphora. In M. Brady and R. C. Berwick, editors, Computational Models of Discourse, pages 267-330. The MIT Press, Cambridge, Massachusetts, 1983.
[Web90] Bonnie L. Webber. Structure and ostension in the interpretation of discourse deixis. Technical Report MS-CIS-90-58, LINC LAB 183, University of Pennsylvania Computer and Information Science Department, 1990. To appear in Language and Cognitive Processes.
INFERRING DISCOURSE RELATIONS IN CONTEXT* Alex Lascarides Human Communication Research Centre, University of Edinburgh, 2 Buccleuch Place, Edinburgh alex@cogsc£, ed. ac. uk Nicholas Asher Center for Cognitive Science, University of Texas, Austin, Texas 78712 asher@cgs, utexas, edu :Ion Oberlander Human Communication Research Centre, University of Edinburgh, 2 Buccleuch Place, Edinburgh jonecogec£, ed. ac .uk Abstract We investigate various contextual effects on text interpretation, and account for them by providing contextual constraints in a logical theory of text interpretation. On the basis of the way these con- straints interact with the other knowledge sources, we draw some general conclusions about the role of domain-specific information, top-down and bot- tom-up discourse information flow, and the use- fulness of formalisation in discourse theory. Introduction: Time Switching and Amelioration Two essential parts of discourse interpretation in- volve (i) determining the rhetorical role each sen- tence plays in the text; and (ii) determining the temporal relations between the events described. Preceding discourse context has significant effects on both of these aspects of interpretation. For example, text (1) in vacuo may be a non-iconic explanation; the pushing caused the falling and so explains why Max fell. But the same pair of sentences may receive an iconic, narrative in- terpretation in the discourse context provided by (2): John takes advantage of Max's vulnerability while he is lying the ground, to push him over the edge of the cliff. (1) Max fell. John pushed him. (2) John and Max came to the cliff's edge. John applied a sharp blow to the back of Max's neck. Max fell. John pushed him. Max rolled over the edge of the cliff. a The support of the Science and Engineering Research Council through project number GR/G22077 is gratefully acknowledged. HCRC is supported by the Economic and SociM Research Council. We thank two anonymous re- viewers for their helpful comments. Moreover, the text in (3) in vacuo is incoherent, but becomes coherent in (4)'s context. (3) (4) ?Max won the race in record time. He was home with the cup. Max got up early yesterday. He had a lit- tle bite to eat. He had a light workout. He started the tournament in good form. He won the race in record time. He was home with the cup. He celebrated until late into the evening. So we can see that discourse context can time switch our interpretation of sentence pairs, (cf. (1) and (2)); and it can ameliorate it, (cf. (4)'s improvement of (3)). The purpose of this paper is two-fold: we attempt to capture formally these aspects of discourse context's impact on clausal attachment; and in the process, we assess whether the structure of the domain being described might be sufficient alone to account for the phenomena. Of course, the idea that discourse context con- strains the discourse role assigned to the current clause is by no means new. Reference resolution is influenced by discourse structure (cf. Grosz and Sidner 1986:188 for a very clear case); and it in turn influences discourse structure. Now, on the one hand, Polanyi and Scha (1984), Hobbs (1985), and Thompson and Mann (1987) have argued that 'genre' or 'rhetorical schemata' can influence the relations used in discourse attach- ment. On the other hand, Sibun (1992) has re- cently argued that domain-specific information, as opposed to domain-independent rhetorical in- formation, plays the central role. 
Both ideas are intriguing, but so far only the latter has been specified in sufficient detail to assess how it works in general, and neither has been applied to time switching or amelioration in particular. We limit our discussion to temporal aspects of discourse interpretation; our strategy here is to explore two possible contextual constraints; these state how the discourse context filters the set of discourse relations and temporal relations which may be used to attach the current clause to the representation of the text so far. We then frame contextual constraints in a logical theory of text interpretation, where their effects and interactions can be precisely calculated. We therefore first in- troduce a domain-specific contextual constraint, following Sibun, and then place it in a formal the- ory of discourse attachment called DICE, devel- oped in Lascarides and Asher (1991a). We then show how the proposed domain-constraint is in- sufficient, and demonstrate how it can be aug- mented by adding a rhetorical, or presentational constraint to the theory. Constraints from the Domain Context In the field of NL generation, Sibun (1992) has recently argued that coherent text must have a structure closely related to the domain structure of its subject matter; naturally, her remarks are also relevant to NL interpretation. She pursues a view that task structure, or more generally, do- main structure, is sufficient to account for many discourse phenomena (but cf. Grosz and Sidner 1986:182). She examines in detail the generation of paragraph-length texts describing the layout of a house. Houses have structure, following from a basic relation of spatial proximity, and there are also hierarchical levels to the structure (rooms can be listed without describing what's in them, or the objects within each room can be detailed). Either way, one constraint on text structure is defined in terms of the description's trajectory: the spatial direction the description moved in the domain, to get from the objects already described to the current one. The constraint is: don't change trajectory. Sibun argues that in the temporal do- main, the basic relation is temporal proximity. But Lascarides and Oberlander (1992a) urge that the temporal coherence of text is characterised in terms of, among other things, the stronger ba- sic relation of causal proximity. So in the latter domain, Sibun's domain constraint precludes tex- tual descriptions which procede from a cause to an effect to a further cause of that effect, or from effect to cause to effect. This Maintain Causal Trajectory (MCT) con- straint has two important attributes: first, it is domain-specific; secondly, it introduces into dis- course interpretation an element of top-down pro- cessing. To investigate these properties, and see how far they go towards explaining discourse time switch, and discourse amelioration, we now incor- porate MCT into DICE's formal model of discourse structure, where its interaction with other causal information and strategies for interpretation can be precisely calculated. Discourse Interpretation and Commonsense Entailment DICE (Discourse and C_ommonsense Entailment) starts with traditional discourse representation structures (cf. Kamp 1981), but goes on to as- sume with Grosz and Sidner (1986) that candi- date discourses possess hierarchical structure, with units linked by discourse relations modelled af- ter those proposed by IIobbs (1979, 1985) (cf. also Thompson and Mann 1987, Scha and Polanyi 1988). 
1 Lascarides and Asher (1991a) use Narra- tion, Explanation, Background, Result and Elab- oration. These are the discourse relations central to temporal import and they are the only ones we consider here. Full coverage of text would require a larger set of relations, akin to that in Thompson and Mann (1987). DICE is a dynamic, logical theory for deter- mining the discourse relations between sentences in a text, and the temporal relations between the eventualities they describe. The logic used is the nonmonotonic logic Commonsense Entail- ment (CE) proposed by Asher and Morreau (1991). Implicatures are calculated via default rules. The rules introduced below are shown in Lascarides and Asher (1991a) to be manifestations of Gricean- style pragmatic maxims and world knowledge. Discourse Structure and Implicature A formal notation makes clear both the logical structure of these rules, and the problems involved in calculating implicature. Let (% ~,fl) be the update function, which means "the representa- XLascaxides and Asher (1991a) introduces the general framework and applies it to interpretation; Oberlander and Lascaxides (1992) and Lascarides and Oberlander (1992b) use the framework for generation. tion r of the text so far (of which a is already a part) is to be updated with the representation fl of the current clause via a discourse relation with a". Let a g /~ mean that a is a topic for fl; let e~ be a term referring to the main eventuality described by the clause a; and let fall(m, e~) mean that this event is a Max falling. Let el -~ e2 mean the eventuality et precedes e~, and cause(el,ei) mean el causes ei. Finally, we represent the defeasible connective as in Asher and Morreau (1991) as a conditional > (so ¢ > ¢ means 'if ¢, then normally ¢')and --* is the ma- terial conditional. The maxims for modelling im- plicature are then represented as schemas: 2 • Narration: (r,a, fl) > Narration(a, fl) • Axiom on Narration: Narration(a, fl) ---* ea -q e# • Explanation: (r, ^ caus ( , > Ezplanation( a, fl) • Axiom on Explanation: Explanation(a, fl) ~ ~ea -~ e~ • Push Causal Law: (r, a, 1~) ^ fall(m, ca) ^ push(j, m, ca) > cause(ea, ec,) • Causes Precede Effects: cause(ei, el) ---, "-,st -~ e2 • States Overlap: (r, a, fl) ^ state(e#) > overlap(ca, e#) • Background: (% a,fl) ^ overlap(e~, ca) > Background(a, fl) • Axiom on Background: Background(a, fl) ---. overlap(ca, c# ) The rules for Narration, Explanation and Back- ground constitute defeasible linguistic knowledge, and the axioms on them indefeasible linguistic knowledge. In particular, Narration and its ax- iom convey information about the pragmatic ef- fects of the descriptive order of events; unless there is information to the contrary, it is assumed that the descriptive order of events matches their 2Discourse structure and c~ ~t/3 are given model theo- retical interpretations in Asher (in press); e(~ abbreviates me(c~), which is formally defined in Lascarides and Asher (1991b) in an intuitively correct way. For simplicity, we have here ignored the modal nature of the indefeasible knowledge; in fact, an indefeasible rule is embedded within the necessity operator 1:3. 3 temporal order in interpretation. The Push Causal Law is a mixture of linguistic knowledge and world knowledge; given that the clauses are discourse- related somehow, the events they describe must normally be connected in a causal, part/whole or overlap relation; here, given the events in ques- tion, they must normally stand in a causal rela- tion. 
That Causes Precede their Effects is indefeasible world knowledge.

We also have laws relating the discourse structure to the topic structure (Asher, in press): for example, A Common Topic for Narrative states that any clauses related by Narration must have a distinct, common (and perhaps implicit) topic:

• A Common Topic for Narrative: Narration(α, β) → ∃γ(γ ⇓ α ∧ γ ⇓ β)

The hierarchical discourse structure is similar to that in Scha and Polanyi (1988): Elaboration and Explanation are subordinating relations and the others are coordinating ones. Equally, this structure defines similar constraints on attachment: the current clause must attach to the previous clause or else to the clauses it elaborates or explains. In other words, the open clauses are those on the right frontier. We do not directly encode the nucleus/satellite distinction used in RST (Thompson and Mann, 1987).

Interpretation by Deduction

CE and the defeasible rules are used to infer the discourse and temporal structures of candidate texts. CE represents nonmonotonic validity as |≈. Three patterns of nonmonotonic inference are particularly relevant:

• Defeasible Modus Ponens: φ > ψ, φ |≈ ψ
  e.g. Birds normally fly, Tweety is a bird; so Tweety flies.
• The Penguin Principle: φ → ψ, ψ > χ, φ > ¬χ, φ |≈ ¬χ
  e.g. Penguins are birds, birds normally fly, penguins normally don't fly, Tweety is a penguin; so Tweety doesn't fly.
• Nixon Diamond: Not: φ > χ, ψ > ¬χ, φ, ψ |≈ χ (or ¬χ)
  e.g. Not: Quakers are pacifists, Republicans are not, Nixon is both a quaker and a republican; so Nixon is a pacifist / Nixon is a non-pacifist.

Iconic and Non-iconic text: In interpreting text (5) we attempt to attach the second clause to the first (so ⟨τ, α, β⟩ holds, where α and β are respectively the logical forms of the first and second clauses).

(5) Max stood up. John greeted him.
(1) Max fell. John pushed him.

In the absence of further information, the only rule whose antecedent is satisfied is Narration. So we infer via Defeasible Modus Ponens that the Narration relation holds between its clauses. This then yields, assuming logical omniscience, an iconic interpretation; the standing up precedes the greeting. In contrast, text (1) verifies the antecedents to two of our defeasible laws: Narration and the Push Causal Law. The consequents of these default laws cannot both hold in a consistent KB. By the Penguin Principle, the law with the more specific antecedent wins: the Causal Law, because its antecedent logically entails Narration's. Hence (1) is interpreted as: the pushing caused the falling. In turn, this entails that the antecedent to Explanation is verified; and whilst conflicting with Narration, it's more specific, and hence its consequent--Explanation--follows by the Penguin Principle. Notice that deductions about event structure and discourse structure are interleaved.

[Footnote: The formal details of how the logic CE models these interpretations are given in Lascarides and Asher (1991b). Although the double application of the Penguin Principle, as in (1), is not valid in general, they show that for the particular case considered here, CE validates it.]

Incoherence and popping: Consider the incoherent text (3).

(3) ?Max won the race in record time. He was home with the cup.

The Win Law captures the intuition that if Max wins the race and he is at home, then these events normally don't temporally overlap--regardless of whether they're connected or not.

• Win Law: win(max, race, e_1) ∧ athome(max, e_2) > ¬overlap(e_1, e_2)

The appropriate knowledge base in the analysis of (3) satisfies States Overlap, the Win Law and Narration. The first two of these conflict, but their antecedents aren't logically related. They
therefore form a pattern out of which a Nixon Diamond crystallises: no temporal or discourse relation can be inferred. We stipulate that it is incoherent to assume that ⟨τ, α, β⟩ if one can't infer which discourse relation holds between α and β. So the assumption that the clauses are connected must be dropped, and hence no representation of (3) is constructed.

DICE exploits this account of incoherence in its approach to discourse popping. When a Nixon Diamond occurs in attempting to attach the current clause to the previous one, they don't form a coherent text segment. So the current clause must attach to one of the other open clauses, resulting in discourse popping (Lascarides and Asher, 1991a).

Trajectory in DICE

It should be clear that DICE's devices, while formal, are also quite powerful. However, the maxims introduced so far cannot actually explain either discourse time switching (cf. (1) vs (2)) or amelioration (cf. (3) vs (4)). Incorporating some form of contextual constraint may be one way to deal with such cases. Because DICE makes essential use of nonmonotonic inference, adding contextual constraints will alter the inferences without requiring modification of the existing knowledge representation. We now investigate the consequences of adding MCT.

Maintain Causal Trajectory

Suppose R(α, β) holds for some discourse relation R; then α appears in the text before β, and we use this fact to define MCT. The default law below states that if the existing discourse context is one where a cause/effect relation was described in that order, then the current clause should not describe a further cause of the effect:

• Maintain Causal Trajectory: ⟨τ, β, γ⟩ ∧ R(α, β) ∧ cause(e_α, e_β) > ¬cause(e_γ, e_β)

In using this rule, an interpreter brings to bear 'top-down' information, in the following sense. Up to now, discourse and temporal relations have been determined by using the input discourse as data, and predicting the relations using general linguistic and world knowledge. Now, the interpreter is permitted to 'remember' which prediction they made last time, and use this to constrain the kind of relation that can be inferred for attaching the current clause; this new prediction needs no data to drive it. Of course, incoming data can prevent the prediction from being made; MCT is just a default, and (6) is an exception.

(6) Max switched off the light. The room went pitch dark, since he had drawn the blinds too.

Time Switching

MCT says how the event structures predicted for preceding context can affect the temporal relations predicted for the current clause. But how does it interact with other causal knowledge in DICE? Does it account for time switching? Since MCT is a contextual constraint, it will only interact with causal knowledge in a discourse context. So consider how it affects the attachment of (2c) and (2d).

(2) a. John and Max came to the cliff's edge. (α)
    b. John applied a sharp blow to the back of Max's neck. (β)
    c. Max fell. (γ)
    d. John pushed him. (δ)
    e. Max rolled over the edge of the cliff. (ε)

Suppose that the logical forms of the clauses (2a-e) are respectively α to ε, and suppose that the discourse structure up to and including γ has been constructed in agreement with intuitions:

(2')  α --Narration--> β --Narration--> γ

Furthermore, assume, in line with intuitions, that the interpreter has inferred that e_β caused e_γ.
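Before attaching δ, here is a schematic rendering, in Python, of the adjudication that the walkthrough below relies on: Defeasible Modus Ponens, the Penguin Principle's preference for the more specific antecedent, and the Nixon Diamond as irresolvable conflict. The subset test for specificity and the negation check for conflict are our own simplifications of what CE actually does (entailment between antecedents, and clashes detected via the indefeasible axioms); the rule format matches the earlier sketch.

    def entails(a1, a2):
        """Crude specificity test: antecedent a1 counts as more specific
        than a2 when a2's atoms are a proper subset of a1's."""
        return set(a2) < set(a1)

    def conflicts(c1, c2):
        """Illustrative conflict test: two conclusions clash when one is
        the explicit negation of the other."""
        return c1 == ("not",) + c2 or c2 == ("not",) + c1

    def adjudicate(rules):
        """Keep a consequent only if every conflicting rival has a less
        specific antecedent (Penguin Principle); if two survivors still
        conflict, report a Nixon Diamond by returning None."""
        survivors = [r for r in rules
                     if not any(conflicts(r["consequent"], s["consequent"])
                                and entails(s["antecedent"], r["antecedent"])
                                for s in rules if s is not r)]
        for r in survivors:
            for s in survivors:
                if r is not s and conflicts(r["consequent"], s["consequent"]):
                    return None          # irresolvable conflict
        return [r["consequent"] for r in survivors]

With the rules applicable to text (1), the Push Causal Law's antecedent strictly contains Narration's, so only the causal conclusion survives; with MCT added, as in the attachment of (2d), neither antecedent contains the other and the function returns None.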
Consider how δ is to be attached to the above discourse structure. γ is the only open clause; so ⟨τ, γ, δ⟩ must hold. The antecedents to three defeasible laws are verified: the Push Causal Law and Narration just as before, and also MCT. The consequents of the Push Causal Law and MCT conflict; moreover, their antecedents aren't logically related. So by the Nixon Diamond, we can't infer which event--or discourse--relation holds. Accordingly, the discourse is actually incoherent. Yet intuitively, a relation can be inferred: the push happened after the fall, and the clauses γ and δ must be related by Narration.

On its own, MCT cannot account for time switching (or, indeed, amelioration). In one sense this isn't surprising. Causal knowledge and MCT were in conflict in (2), and since both laws relate to the domain, but in incommensurable ways, neither logic nor intuition can say which default is preferred. This suggests that using domain structure alone to constrain interpretation will be insufficient. It seems likely that presentational issues will be significant in cases such as these; where domain-specific knowledge sources are in irresolvable conflict, aspects of the existing discourse structure may help determine current clause attachment. Since MCT has some motivation, it would be preferable to let presentational information interact with it, rather than replace it.

Constraints from the Presentational Context

To what degree does existing rhetorical structure determine clause attachment? It's plausible to suggest that a speaker-writer should not switch genre without syntactically marking the switch. Thus, if the preceding context is narrative, then a hearer-reader will continue to interpret the discourse as narrative unless linguistic markers indicate otherwise; similarly for non-narrative contexts (cf. Caenepeel 1991, Polanyi and Scha 1984). This constraint relies on the continuation of a characteristic pattern of discourse relations, rather than on maintaining trajectory on some domain relation. Let's call this a presentational constraint; it may be able to get the right analyses of (2) and (4). In (2), for example, the context to which John pushed him is attached is narrative, so according to the constraint this clause would be attached with Narration in agreement with intuitions. But clearly, this constraint must be a soft one, since discourse pops can occur without syntactic markers, as can interruptions (Polanyi 1985:306). Both of these cause a change in the discourse 'pattern' established in the preceding context.

Patterns in DICE

Can we use presentational constraints without accidentally blocking discourse popping and interruptions? The problem is to represent in formal terms exactly when an interpreter should try to preserve the pattern of rhetorical structure established in the context. Because DICE provides a formal account of how discourse popping occurs--the Nixon Diamond is the key--we are in a good position to attempt this.

Discourse Pattern and Inertia

First, we define the discourse pattern established by the context in terms of a function DP. This takes as input the discourse structure for the preceding context, filters out those discourse relations which would break the pattern, and outputs the remaining set of relations. This is similar to Hobbs' (1985:25-26) notion of genre, where, for example (in his terms) a story genre requires that the type of occasion relation can be only problem-solution or event-outcome.
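As a small illustration of the kind of function DP is meant to be, the sketch below implements the simplest of the options discussed in the next paragraph, namely returning the relation used to attach the previous clause; the function name and the representation of the context as a list of relation labels are our own assumptions.

    def dp(context_relations):
        """Illustrative DP: output the discourse relation(s) that would
        continue the pattern of the preceding context -- here, simply the
        relation used for the most recent attachment."""
        return {context_relations[-1]} if context_relations else set()

    # e.g. dp(["Narration", "Narration"]) == {"Narration"}: a narrative
    # context predicts Narration for the incoming clause.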
How much of the preceding discourse context does DP take as input? At one extreme, it could be just the discourse relations used to attach the previous clause; the output would be those same discourse relations. At the other extreme, the whole discourse structure may be input; DP would have to establish the regularity in the configuration of discourse relations, and evaluate which discourse relation would preserve it when the new clause is added. We leave this question open; for the examples of time switching and amelioration we consider here, DP would produce the same result whatever it takes as input--Narration.

Using DP, we can represent the discourse pattern constraint. The intuition it captures is the following. If the sentence currently being processed can't attach to any of the open nodes because there's a Nixon Diamond of irresolvable conflict, then assume that the discourse relation to be used is defined by DP. In other words, discourse pattern preservation applies only when all other information prevents attachment at all available open nodes. To express this formally, we need a representation of a state in which a Nixon Diamond has formed. In CE, we use the formula ⊥ (meaning contradiction) and the connective &, whose semantics is defined only in the context of default laws (cf. Asher and Morreau 1991b). Intuitively, (A & B) > ⊥ means 'A and B are antecedents of default rules that lead to a conflict that can't be resolved'.

We use this to represent cases where the information provided by the clauses α and β (which are candidates for attachment) form a Nixon Diamond. Let Info(α) be glossed 'the information Info is true of the clause α'. It is an abbreviation for statements such as fall(max, e_α), cause(e_β, e_α), and so on. If a Nixon Diamond occurs when attempting to attach α to β on the basis of information other than DP, the following holds:

• Info(α) ∧ Info(β) ∧ (((Info(α) ∧ Info(β)) & ⟨τ, α, β⟩) > ⊥)

We will use ND(α, β) as a gloss for the above schema, and open(τ, α) means α is an open clause in the discourse structure τ; assume that DP(τ) returns some discourse relation R. So the presentational constraint for preserving discourse pattern is defined as follows:

• Inertia: (∀α)(open(τ, α) ∧ ND(α, β)) > (∃α')(open(τ, α') ∧ DP(τ)(α', β))

[Footnote: Inertia features an embedded default connective. Only two nonmonotonic logics can express this: Circumscription and CE.]

The antecedent to Inertia is verified only when all the information available--except for the preceding discourse pattern--yields a Nixon Diamond in attempting the attachment of β at all open nodes. Inertia thus won't prevent discourse popping, because there a Nixon Diamond is averted at a higher-level open node. The model of text processing proposed here restricts the kind of information that's relevant during text processing: the discourse pattern is relevant only when all other information is insufficient. Like MCT, Inertia is top-down, in the sense that it relies on earlier predictions about other discourse relations, rather than on incoming data; but unlike MCT, the 'theory-laden' predictions are only resorted to if the data seems recalcitrant.

Time Switching

We now look at text (2) in detail. Suppose as before that the discourse structure τ for the first three clauses in (2) is (2'), and the task now is to attach δ (i.e. John pushed him). The only open clause is γ, because the previous discourse relations are all Narration. Moreover, DP(τ) is Narration.
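Schematically, the role Inertia is about to play in attaching δ can be rendered as follows; the function and argument names are illustrative, and the Nixon Diamond test is passed in as a black box rather than derived inside the logic.

    def attach_with_inertia(beta, open_clauses, nixon_diamond, pattern):
        """Sketch of how Inertia gates on the Nixon Diamond: only when
        attachment at every open clause yields an irresolvable conflict do
        we fall back on the discourse pattern (e.g. Narration)."""
        if all(nixon_diamond(alpha, beta) for alpha in open_clauses):
            return (open_clauses[-1], pattern, beta)
        return None   # ordinary, data-driven attachment applies instead

    # For text (2): the only open clause is gamma, the MCT / Push Causal
    # Law clash makes nixon_diamond("gamma", "delta") true, and the
    # pattern is "Narration", so delta attaches to gamma by Narration.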
As before, a Nixon Diamond forms between MCT and the Push Causal Law in attempting to attach δ to γ. Where A_mct is the antecedent to MCT, and A_pcl the antecedent to the Push Causal Law substituted with γ and δ:

• A_mct ∧ A_pcl ∧ ((A_pcl & A_mct) > ⊥)

So ND(γ, δ) is verified, and with it, the antecedent to Inertia; substituting in the Inertia schema the value of DP(τ), the Nixon Diamond, and the open clauses yields the following:

• Inertia for (2): (A_mct ∧ A_pcl ∧ ((A_pcl & A_mct) > ⊥)) > Narration(γ, δ)

The antecedent to Inertia entails that of Maintain Trajectory (A_mct) and that of the Push Causal Law (A_pcl). In CE the most specific law wins. So the discourse context in this case determines the relation between the fall and the push: it is Narration. Hence even though WK yields a causal preference for the pushing causing the falling, given the discourse context in which the pushing and falling are described in (2), Narration is inferred after all, and so the falling precedes the push. In this way, we can represent the presentational, and domain-specific, information that must be brought to bear to create a time switch.

[Footnote: If a speaker-writer wanted to avoid this contextual inference pattern, and sustain the non-iconic reading, then they could switch to the pluperfect, for example.]

Amelioration

Now consider texts (3) and (4). A Nixon Diamond formed between Narration, States Overlap and the Win Law in the analysis of (3) above, leading to incoherence. Now consider attaching the same clauses (4e) and (4f).

(4) a. Max got up early yesterday.
    b. He had a little bite to eat.
    c. He had a light workout.
    d. He started the tournament in good form.
    e. He won the race in record time.
    f. He was home with the cup.
    g. He celebrated until late into the evening.

Given the discourse (4a-e), (4e) is the only open clause to which (4f) can attach. Moreover, as in (3), attempting to attach (4f) to (4e) results in a Nixon Diamond. So the antecedent to Inertia is verified. DP delivers Narration, since the discourse context is narrative. So (4e-f) is interpreted as a narrative. Compare this with (3), where no discourse relation was inferred, leading to incoherence.

Inertia enables discourse context to establish coherence between sentence pairs that, in isolation, are incoherent. It would be worrying if Inertia were so powerful that it could ameliorate any text. But incoherence is still possible: consider replacing (4f) with (4f'):

f'. ?Mary's hair was black.

If world knowledge is coded as intuitions would suggest, then no common topic can be constructed for (4e) and (4f'); and this is necessary if they are to be attached with Narration or Background--the only discourse relations available given the defeasible laws that are verified. Moreover, Inertia won't improve the coherence in this case because it predicts Narration, which because of Common Topic for Narration cannot be used to attach (4f') to (4e). So the text is incoherent.

Hobbs et al (1990) also explore the effects of linguistic and causal knowledge on interpretation, using abduction rather than deduction. Now, Konolige (1991) has shown that abduction and nonmonotonic deduction are closely related; but since Hobbs et al don't attempt to treat time-switching and amelioration, direct comparison here is difficult. However, the following points are relevant.
First, weighted abduction, as a system of inference, isn't embeddable in CE, and vice versa. Secondly, the weights which guide abduction are assigned to predicates in a context-free fashion. Hobbs et al observe that this may make the effects of context hard to handle, since 'the abduction scheme attempts to make global judgements on the basis of strictly local information' [p48].

Conclusion

We examined instances of two types of contextual constraint on current clause attachment. These were Maintain Causal Trajectory, a domain constraint; and Inertia, a presentational constraint. We argued that domain constraints seemed insufficient, but that presentational constraints could constructively interact with them. This interaction then explains the two discourse interpretation phenomena we started out with. Context can switch round the order of events; and it can ameliorate an otherwise incoherent interpretation.

Both of the constraints allow predictions about new discourse relations to be driven from previous predictions. But MCT simply adds its prediction to the data-driven set from which the logic chooses, whereas discourse pattern and Inertia are only relevant to interpretation when the logic can otherwise find no discourse relation.

This formalisation has also raised a number of questions for future investigation. For example, the discourse pattern (or Hobbsian 'genre') function is important; but how much of the preceding discourse structure should the DP function take as input? How do we establish--and improve--the linguistic coverage? What is the relation between communicative intentions and contextual constraints? How do we actually implement contextual constraints in a working system?

The idea of contextual constraints is a familiar and comfortable one. In this respect, we have merely provided one way of formally pinning it down. Naturally, this requires a background logical theory of discourse structure, and we have used DICE, which has its own particular set of discourse relations and implicature patterns. However, the process of logically specifying the constraints has two important and general benefits, independent of the particular formalisation we have offered. First, it demands precision and uniformity in the statement both of the new constraints, and of the other knowledge sources used in interpretation. Secondly, it permits a program-independent assessment of the consequences of the general idea of contextual constraints.

References

Asher, Nicholas [in press] Reference to Abstract Objects in English: A Philosophical Semantics for Natural Language Metaphysics. Dordrecht: Kluwer Academic Publishers.

Asher, Nicholas and Morreau, Michael [1991] Common Sense Entailment: A Modal Theory of Nonmonotonic Reasoning. In Proceedings of the 12th International Joint Conference on Artificial Intelligence.

Caenepeel, Mimo [1991] Event Structure versus Discourse Coherence. In Proceedings of the Workshop on Discourse Coherence, Edinburgh, 4-6 April, 1991.

Grosz, Barbara and Sidner, Candy [1986] Attention, Intentions, and the Structure of Discourse. Computational Linguistics, 12, 175-204.

Hobbs, Jerry [1979] Coherence and Coreference. Cognitive Science, 3, 67-90.

Hobbs, Jerry [1985] On the Coherence and Structure of Discourse. Report No. CSLI-85-37, Center for the Study of Language and Information.

Hobbs, Jerry, Stickel, Martin, Appelt, Doug and Martin, Paul [1990] Interpretation as Abduction. Technical Note No.
499, Artificial Intelligence Center, SRI International, Menlo Park.

Kamp, Hans [1981] A theory of truth and semantic representation. In Groenendijk, J. A. G., Janssen, T. M. V. and Stokhof, M. B. J. (eds.) Formal Methods in the Study of Language, Volume 136, pp277-322. Amsterdam: Mathematical Centre Tracts.

Konolige, Kurt [1991] Abduction vs. Closure in Causal Theories. Technical Note No. 505, Artificial Intelligence Center, SRI International, Menlo Park.

Lascarides, Alex and Asher, Nicholas [1991a] Discourse Relations and Defeasible Knowledge. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, pp55-63.

Lascarides, Alex and Asher, Nicholas [1991b] Discourse Relations and Common Sense Entailment. DYANA deliverable 2.5B, available from Centre for Cognitive Science, University of Edinburgh.

Lascarides, Alex and Oberlander, Jon [1992a] Temporal Coherence and Defeasible Knowledge. Theoretical Linguistics, 18.

Lascarides, Alex and Oberlander, Jon [1992b] Abducing Temporal Discourse. In Dale, R. et al (eds.) Aspects of Automated Natural Language Generation, pp167-182. Berlin: Springer-Verlag.

Polanyi, Livia and Scha, Remko [1984] A Syntactic Approach to Discourse Semantics. In Proceedings of the 22nd Annual Meeting of the Association for Computational Linguistics, pp413-419.

Polanyi, Livia [1985] A Theory of Discourse Structure and Discourse Coherence. In Papers from the General Session at the Twenty-First Regional Meeting of the Chicago Linguistics Society, pp 25-27.

Oberlander, Jon and Lascarides, Alex [1992] Preventing False Temporal Implicatures: Interactive Defaults for Text Generation. In Proceedings of COLING-92.

Scha, Remko and Polanyi, Livia [1988] An augmented context free grammar. In Proceedings of the 12th International Conference on Computational Linguistics (COLING-88), pp573-577.

Sibun, Penelope [1992] Generating Text without Trees. To appear in Computational Intelligence: Special Issue on Natural Language Generation, 8.

Thompson, Sandra and Mann, William [1987] Rhetorical Structure Theory: A Framework for the Analysis of Texts. In IPRA Papers in Pragmatics, 1, pp79-105.
Reasoning with Descriptions of Trees

James Rogers, Dept. of Comp. & Info. Science, University of Delaware, Newark, DE 19716, USA
K. Vijay-Shanker, Dept. of Comp. & Info. Science, University of Delaware, Newark, DE 19716, USA

[Footnote: This work is supported by NSF grant IRI-9016591.]

ABSTRACT

In this paper we introduce a logic for describing trees which allows us to reason about both the parent and domination relationships. The use of domination has found a number of applications, such as in deterministic parsers based on Description theory (Marcus, Hindle & Fleck, 1983), in a compact organization of the basic structures of Tree-Adjoining Grammars (Vijay-Shanker & Schabes, 1992), and in a new characterization of the adjoining operation that allows a clean integration of TAGs into the unification-based framework (Vijay-Shanker, 1992). Our logic serves to formalize the reasoning on which these applications are based.

1 Motivation

Marcus, Hindle, and Fleck (1983) have introduced Description Theory (D-theory) which considers the structure of trees in terms of the domination relation rather than the parent relation. This forms the basis of a class of deterministic parsers which build partial descriptions of trees rather than the trees themselves. As noted in (Marcus, Hindle & Fleck, 1983; Marcus, 1987), this approach is capable of maintaining Marcus' deterministic hypothesis (Marcus, 1980) in a number of cases where the original deterministic parsers fail. A motivating example is the sentence: I drove my aunt from Peoria's car. The difficulty is that a deterministic parser must attach the NP "my aunt" to the tree it is constructing before evaluating the PP. If this can only be done in terms of the parent relation, the NP will be attached to the VP as its object. It is not until the genitive marker on "Peoria's" is detected that the correct attachment is clear. The D-theory parser avoids the trap by making only the judgment that the VP dominates the NP by a path of length at least one. Subsequent refinement can either add intervening components or not. Thus in this case, when "my aunt" ends up as part of the determiner of the object rather than the object itself, it is not inconsistent with its original placement. It is still dominated by the VP, just not immediately. When the analysis is complete, a tree, the standard referent, can be extracted from the description by taking immediate domination as the parent relation.

In other examples given in (Marcus, Hindle & Fleck, 1983) the left-of (linear precedence) relation is partially specified during parsing, with individuals related by "left-of or equals" or "left-of or dominates". The important point is that once a relationship is asserted, it is never subsequently rescinded. The D-theory parser builds structures which are always a partial description of its final product. These structures are made more specific, as parsing proceeds, by adding additional relationships.

Our understanding of the difficulty ordinary deterministic parsers have with these constructions is that they are required to build a structure covering an initial segment of the input at a time when there are multiple distinct trees that are consistent with that segment. The D-theory parsers succeed by building structures that contain only those relationships that are common to all the consistent trees.
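The monotone, assert-only character of these D-theory-style descriptions can be pictured with a small Python sketch; the class, relation names and the tiny example are our own illustrative choices and not taken from the parser described above.

    # A partial description as a growing set of relationship triples that
    # is only ever added to, never retracted.
    class Description:
        def __init__(self):
            self.facts = set()

        def assert_rel(self, rel, a, b):
            """Assert e.g. ("dominates", "vp", "np"); once asserted, a
            relationship is never rescinded."""
            self.facts.add((rel, a, b))

        def consistent_with(self, tree_facts):
            """The description remains true of any completed tree whose
            relationships include everything asserted so far."""
            return self.facts <= tree_facts

    d = Description()
    d.assert_rel("dominates", "vp", "np_aunt")   # path of length >= 1
    # Later refinement may interpose a determiner node between the two;
    # the earlier assertion stays true either way.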
Thus the choice between alternatives for the relationships on which the trees differ is deferred until they are distinguished by the input, possibly after semantic analysis.

A similar situation occurs when Tree-Adjoining Grammars are integrated into the unification-based framework. In TAGs, syntactic structures are built up from sets of elementary trees by the adjunction operation, where one tree is inserted into another tree in place of one of its nodes. Here the difficulty is that adjunction is non-monotonic in the sense that there are relationships that hold in the trees being combined that do not hold in the resulting tree. In (Vijay-Shanker, 1992), building on some of the ideas from D-theory, a version of TAG is introduced which resolves this by manipulating partial descriptions of trees, termed quasi-trees. Thus an elementary structure for a transitive verb might be the quasi-tree α′ rather than the tree α (Figure 1). In α′ the separation represented by the dotted line between nodes referred to by vp1 and vp2 denotes a path of length greater than or equal to zero. Thus α′ captures just those relationships which are true in α and in all trees derived from α by adjunction at VP. In this setting trees are extracted from quasi-trees by taking what is termed a circumscriptive reading, where each pair of nodes in which one dominates the other by a path that is possibly zero is identified. This mechanism can be interpreted in a manner similar to our interpretation of the use of partial descriptions in D-theory parsers.

[Figure 1. Quasi-trees: the elementary tree α for a transitive verb and the corresponding quasi-tree α′, in which a dotted line separates the nodes vp1 and vp2.]

We view a tree in which adjunction is permitted as the set of all trees which can be derived from it by adjunction. That set is represented by the quasi-tree as the set of all relationships that are common to all of its members.

The connection between partial descriptions of trees and the sets of trees they describe is made explicit in (Vijay-Shanker & Schabes, 1992). Here quasi-trees are used in developing a compact representation of a Lexicalized TAG grammar. The lexicon is organized hierarchically. Each class of the hierarchy is associated with that set of relationships between individuals which are common to all trees associated with the lexical items in the class but not (necessarily) common to all trees associated with items in any super-class. Thus the set of trees associated with items in a class is characterized by the conjunction of the relationships associated with the class and those inherited from its super-classes. In the case of transitive verbs, figure 2, the relationships in α1 can be inherited from the class of all verbs, while the relationships in α2 are associated only with the class of transitive verbs and its sub-classes.

The structure α′ of figure 1 can be derived by combining α2 with α1 along with the assertion that v2 and v1 name the same object. In any tree described by these relationships either the node named vp1 must dominate vp2 or vice versa. Now in α1, the relationship "vp1 dominates v1" does not itself preclude vp1 and v1 from naming the same object. We can infer, however, from the fact that they are labeled incompatibly that this is not the case. Thus the path between them is at least one. From α2 we have that the path between vp2 and v2 is precisely one. Thus in all cases vp1 must dominate vp2 by a path of length greater than or equal to zero. Hence the dashed line in α′.
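A rough rendering of this combination step in Python: each structure is a set of (relation, node, node) facts, and combining them simply unions the descriptions after identifying the named nodes. The fact encoding and node names are ours, and the final inference about vp1 and vp2 is only stated in a comment, since carrying it out is exactly what the formal system developed below is for.

    alpha1 = {("dominates", "vp1", "v1"),          # path of unspecified length
              ("label", "vp1", "VP"), ("label", "v1", "V")}
    alpha2 = {("parent", "vp2", "v2"),             # path of exactly one
              ("label", "vp2", "VP"), ("label", "v2", "V"),
              ("parent", "vp2", "np"), ("label", "np", "NP")}

    def combine(a, b, identify):
        """Union two descriptions after identifying named nodes
        (here v2 = v1, as in deriving alpha' from alpha1 and alpha2)."""
        renamed = {(r, identify.get(x, x), identify.get(y, y))
                   for (r, x, y) in b}
        return a | renamed

    merged = combine(alpha1, alpha2, {"v2": "v1"})
    # From "vp1 dominates v1", the incompatible labels of vp1 and v1 (so
    # the path is at least one), and "vp2 is the parent of v1", one can
    # conclude that vp1 dominates vp2 by a path of length >= 0 -- the
    # dashed line in alpha'.  This sketch only builds the fact set; it
    # does not perform that inference.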
The common element in these three applications is the need to manipulate structures that partially describe trees. In each case, we can understand this as a need to manipulate sets of trees. The structures, which we can take to be quasi-trees in each case, represent these sets of trees by capturing 73 the set of relationships that are common to all trees in the set. Thus we are interested in quasi-trees not just as partial descriptions of individual trees, but as a mechanism for manipulating sets of trees. Reasoning, as in the LTAG example, about the structures described by combinations of quasi-trees requires some mechanism for manipulating the quasi-trees formally. Such a mechanism requires, in turn, a definition of quasi-trees as formal struc- tures. While quasi-trees were introduced in (Vijay- Shanker, 1992), they have not been given a precise definition. The focus of the work described here is a formal definition of quasi-trees and the develop- ment of a mechanism for manipulating them. In the next section we develop an intuitive un- derstanding of the structure of quasi-trees based on the applications we have discussed. Following that, we define the syntax of a language capable of expressing descriptions of trees as formulae and introduce quasi-trees as formal structures that de- fine the semantics of that language. In section 4 we establish the correspondence between these for- mal models and our intuitive idea of quasi-trees. We then turn to a proof system, based on semantic tableau, which serves not only as a mechanism for reasoning about tree structures and checking the consistency of their descriptions, but also serves to produce models of a given consistent description. Finally, in section 7 we consider mechanisms for de- riving a representative tree from a quasi-tree. We develop one such mechanism, for which we show that the tree produced is the circumscriptive read- ing in the context of TAG, and the standard refer- ent in the context of D-theory. Due to space limi- tations we can only sketch many of our proofs and have omitted some details. The omitted material can be found in (Rogers & Vijay-Shanker, 1992). 2 Quasi-Trees In this section, we use the term relationship to in- formally refer to any positive relationship between individuals which can occur in a tree, "a is the par- ent of b" for example. We will say that a tree satis- fies a relationship if that relationship is true of the individuals it names in that tree. Ot x : NP VP ~ % v 1 'x~, v O~ 2 : vP vP2 '~v NP Figure 2. Structure Sharing in a Representation of Elementary Structures It's clear, from our discussion of their applica- tions, that quasi-trees have a dual nature -- as a set of trees and as a set of relationships. In for- malizing them, our fundamental idea is to identify those natures. We will say that a tree is (partially) described by a set of relationships if every relation- ship in the set is true in the tree. A set of trees is then described by a set of relationships if each tree in the set is described by the set of relationships. On the other hand, a set of trees is characterized by a set of relationships if it is described by that set and if every relationship that is common to all of the trees is included in the set of relationships. This is the identity we seek; the quasi-tree viewed as a set of relationships characterizes the same quasi- tree when viewed as a set of trees. Clearly we cannot easily characterize arbitrary sets of trees. 
As an example, our sets of trees will be upward-closed in the sense that, it will contain every tree that extends some tree in the set, ie: that contains one of the trees as an initial sub-tree. Sim- ilarly quasi-trees viewed as sets of relationships are not arbitrary either. Since the sets they character- ize consist of trees, some of the structural properties of trees will be reflected in the quasi-trees. For in- stance, if the quasi-tree contains both the relation- ships '% dominates b" and "b dominates c" then every tree it describes will satisfy "a dominates c" and therefore it must contain that relationship as well. Thus many inferences that can be made on the basis of the structure of trees will carry over to quasi-trees. On the other hand, we cannot make all of these inferences and maintain any distinction between quasi-trees and trees. Further, for some inferences we will have the choice of making the inference or not. The choices we make in defining the structure of the quasi-trees as a set of relation- ships will determine the structure of the sets of trees we can characterize with a single quasi-tree. Thus these choices will be driven by how much expressive power the application needs in describing these sets. Our guiding principle is to make the quasi-trees as tree-like as possible consistent with the needs of our applications. We discuss these considerations more fully in (Rogers &5 Vijay-Shanker, 1992). One inference we will not make is as follows: from "a dominates b" infer either "a equals b" or, for 74 some a' and b', "a dominates a', a' is the parent of b', and b' dominates b". In structures that enforce this condition path lengths cannot be left partially specified. As a result, the set of quasi-trees required to characterize s' viewed as a set of trees, for in- stance, would be infinite. Similarly, we will not make the inference: for all a, b, either "a is left-of b", "b is left-of a", "a dom- inates b", or "b dominates a". In these structures the left-of relation is no longer partial, ie: for all pairs a, b either every tree described by the quasi- tree satisfies "a is left-of b" or none of them do. This is not acceptable for D-theory, where both the anal- yses of "pseudo-passives" and coordinate structures require single structures describing sets including both trees in which some a is left-of b and others in which the same a is either equal to or properly dominates that same b (Marcus, Hindle & Fleck, 1983). Finally, we consider the issue of negation. If a tree does not satisfy some relationship then it sat- isfies the negation of that relationship, and vice versa. For quasi-trees the situation is more subtle. Viewing the quasi-tree as a set of trees, if every tree in that set fails to satisfy some relationship, then they all satisfy the negation of that relationship. Hence the quasi-tree must satisfy the negated rela- tionship as well. On the other hand, viewing the quasi-tree as a set of relationships, if a particular relationship is not included in the quasi-tree it does not imply that none of the trees it describes satis- fies that relationship, only that some of those trees do not. Thus it may be the case that a quasi-tree neither satisfies a relationship nor satisfies its nega- tion. Since trees are completed objects, when a tree satisfies the negation of a relationship it will always be the case that the tree satisfies some (positive) re- lationship that is incompatible with the first. 
For example, in a tree "a does not dominate b" iff "a is left-of b", "b is left-of a", or "b properly dom- inates a". Thus there are inferences that can be drawn from negated relationships in trees that may be incorporated into the structure of quasi-trees. In making these inferences, we dispense with the need to include negative relationships explicitly in the quasi-trees. They can be defined in terms of the positive relationships. The price we pay is that to characterize the set of all trees in which "a does not dominate b", for instance, we will need three quasi-trees, one characterizing each of the sets in which "a is left-of b", "b is left-of a", and % prop- erly dominates a". 3 Language Our language is built up from the symbols: K -- non-empty countable set of names, 1 r -- a distinguished element of K, the root <1, ~+, ,~*, --< -- two place predicates, parent, proper domination, domination, and left-of respectively, -- equality predicate, A, V, -~ -- usual logical connectives (,), [, ] -- usual grouping symbols Our atomic formulae are t ,~ u, t ¢+ u, t <* u, t -< u, and t ~ u, where t, u • K are terms. Literals are atomic formulae or their negations. Well-formed- formulae are generated from atoms and the logical connectives in the usual fashion. We use t, u, v to denote terms and ¢, ¢ to denote wffs. R denotes any of the five predicates. 3.1 Models Quasi-trees as formal structures are in a sense a reduced form of the quasi-trees viewed as sets of relationships. They incorporate a canonical sub- set of those relationships from which the remaining relationships can be deduced. Definition 1 A model is a tuple (H,I, 7),79,.A,£), where: H is a non-empty universe, iT. is a partial function from K to Lt (specifying the node referred to by each name), 7 9, .4, 79, and £ are binary relations over It (assigned to % ,a +, ,a*, and -4 respectively). Let T( denote 27(r). Definition 2 A quasi-tree is a model satisfying the conditions Cq : For all w, x, y, z • 11, c~ (~,~) •79, c= (z, =) • 79, ca (=, y), (y, ~) • 79 ~ (=, ~) • 79, c4 (~, ~), (y, ~) • 79 (=, y) • 79 or (y, =) • 79, c5 (=, y) • ,4 ~ (=, y) • 79, ca (x,y) •.4 and (w,x), (y, z) • 79 ::~ (w, ~) • A, c~ (=, y) • 19 ~ (z, y) • A c8 (z, z) • 79 1 We use names rather than constants to clarify the link to description theory. 75 (z, y) • z: or (y, z) • z: or (y, =) • v or (z, y) • 79, v0 (=, y) • z and (=, w), (y, z) • 79 (w, z) • £, Clo (x,y) • z and (w,x) •79 (w, y) • z or (~, ~), (~, y) • A, C~1 (~, y) • Z and (~o, y) • 79 (~, w) • C or (w, =), (w, y) • .4, c~2 (~, y) • z and (y, z) • C ~ (~, z) • C, And meeting the additional condition: for every x,z • U the set B=z = {Y I (x,Y),(Y,Z) • 79} is finite, ie: the length of path from any node to any other is finite. 2 A quasi-tree is consistent iff CC~ (x,y) • A ~ (y,x) ¢ 79, CC2 (z, y) • £ =:, (=, y) ¢ 79, (y, =) ¢ 79, and (y, =) ¢ z:. It is normal iff RCx for all x # y • H, either (~, y) ¢ 79) or (y, ~) ¢ 7). At least one normal, consistent quasi-tree (that consisting of only a root node) satisfies all of these conditions simultaneously. Thus they are consis- tent. It is not hard to exhibit a model for each condition in which that condition fails while all of the others hold. Thus the conditions are indepen- dent of each other. Trees are distinguished from (ordinary) quasi- trees by the fact that 79 is the reflexive, transi- tive closure of P, and the fact that the relations 79, 79, ,4, £ are maximal in the sense that they can- not be consistently extended. 
Definition 3 A consistent, normal quasi-tree M is a tree iff Tel 79M = (7~M)*, TC2 for all pairs (x, y) • U M X l~ M, exactly one of the following is true: (=, y), (y,z) • 79M; (z,y) • .AM; (y, =) • A M; (=, y) • z:M; or (y, =) • 1: M. Note that TC1 implies that .A M -- (79M)+ as well. It is easy to verify that a quasi-tree meets these con- ditions iff (H M, 79M) is the graph of a tree as com- monly defined (Aho, Hopcroft & Ullman, 1974). 3.2 Satisfaction The semantics of the language in terms of the models is defined by the satisfaction relation be- tween models and formulae. Definition 4 A model M satisfies a formula ¢ (M ~ ¢) as follows: 2 The additional condition excludes "non-standard" mod- els which include components not connected to the root by a finite sequence of immediate domination links. M ~ t,~* u i ff M~t<* u iff M ~ t ,~ u i ff M ~ t C~ u i ff M ~ t ,~+ u iff M ~t,~+u iff M~t<u iff M ~ t -.< u i ff M ~ ~t ~ u iff M ~",~ff iff M ~¢A¢ iff M ~-~(¢A¢) iff M ktV¢ iff (zM(t),Z~(~)) e VM; (ZM(t), Z~(U)) ~ L', (ZM(~),ZM(t)) • C ~, or (z~(~),zM(t)) • .4"; (z'(t),z'(~)) • v" a.d (ZM(u),Z~(t)) • VM; (ZM(t), ZM(,,)) • .4 M, (ZM(u),ZM(t)) • ,4 M, (Z'(t), Z'(,.,)) • c', or (z'(~),zM(t)) • c M (zu(t),ZM(u)) • AM; (ZM(,,),Z~(t)) • V M, (ZM(t),ZM(~)) • z~ ~, or (ZM(~),ZM(t)) • CM; (ZM(t),ZM(~)) • vM; (zM(u),z~(t)) • v ~, (z~(t),Z~(u)) • z: ~, (ZM(u), :z:M(t)) • z: ~, or (z~(t), =), (=,z~(u)) • A ~, for some x • l~M ; (z'(t),z~(~)) • c; (z~(~),z~(t)) • ~, (IM(t),:~M(u)) • V, or (z~(~),z~(t)) • v; U~¢; M ~¢ andM ~¢; M ~¢ orM~--l¢; M~¢orM~¢; M ~-~(¢V¢) iffM~-~¢ andM~'~¢. In addition we require that ZM(k) be defined for all k occurring in the formula. It is easy to verify that for all quasi-trees M (3t, u, R)[M ~ t R u,-~t R u] ==~ M inconsistent. If 2: M is surjective then the converse holds as well. It is also not hard to see that if T is a tree 4 Characterization We now show that this formalization is complete in the sense that a consistent quasi-tree as defined characterizes the set of trees it describes. Recall that the quasi-tree describes the set of all trees which satisfy every literal formula which is satis- fied by the quasi-tree. It characterizes that set if every literal formula which is satisfied by every tree in the set is also satisfied by the quasi-tree. The property of satisfying every formula which is satis- fied by the quasi-tree is captured formally by the notion of subsumption, which we define initially as a relationship between quasi-trees. Definition 5 Subsumption. Suppose M = (l~M,~ M 7)M,'DM,.AM,f-.M) and t M ~ M j M ~ M ~ M I M ~ M = (14 ,Z ,7 ) ,7) ,,4 ,£ ) are consis- tent quasi-trees, then M subsumes M z (M ~ M I) iff there is a function h : lA M ~ 14 M' such that: 76 zM'(t) = h(7:M(t)), (x, y) e 7)M =V (h(x), h(y)) e 7)M' (x, y) e V M ~ (h(z), h(y)) E 7 )M', (x, y) E .A M =v (h(x), h(y)) e .A M', (x, y) e £M ~ (h(x),h(y)) e £M'. We now claim that any quasi-tree Q is subsumed by a quasi-tree M iff it is described by M. Lemma 1 If M and Q are normal, consistent quasi-trees and 3 M is surjective, then M E Q iff for all formulae ¢, M ~ ¢ ~ Q ~ ¢. The proof in the forward direction is an easy in- duction on the structure of ¢ and does not depend either on normality or surjectiveness of I M. The opposite direction follows from the fact that, since Z M is surjective, there is a model M' in which/~M' is the set of equivalence classes wrt ~ in the domain of Z M, such that M E M~ E Q- The next lemma allows us, in many cases, to as- sume that a given quasi-tree is normal. 
Lemma 2 For every consistent quasi-tree M, there is a normal, consistent quasi-tree M ~ such that M E M~, and for all normal, consistent quasi- tree M', M E M" ::¢. M ~ E M'. The lemma is witnessed by the quotient of M with respect to S M, where sM = { (x, y) I (x, y), (y, x) e vM}. We can now state the central claim of this sec- tion, that every consistent quasi-tree characterizes the set of trees which it subsumes. Proposition 1 Suppose M is a consistent quasi- tree. For all literals ¢ M ~ ¢ ¢~ (VT, tree)[M E T ::~ T ~ ¢] The proof follows from two lemmas. The first estab- lishes that the set of quasi-trees subsumed by some quasi-tree M is in fact characterized by it. The sec- ond extends the result to trees. Their proofs are in (Rogers & Vijay-Shanker, 1992). Lemma 3 If M is a consistent quasi-tree and ¢ a literal then (3Q, consistent quasi-tree)[M E_ Q and Q ~ -~¢] Lemma 4 If M is a consistent quasi-tree, then there exists a tree T such that M E T. Proof(of proposition 1) (VT) [M _ T :=~ T b ¢] ¢=~ -~(3T)[M _ T and T ~ -~¢] (:=~ by consistency, ¢== by completeness of trees) ¢V -~(3Q, consistent q-t)[M E Q and Q ~ -~¢] (==~ by lemma 4, ¢= since T is a quasi-tree) (::~ by lemma 3, ¢=: by lemma 1) O 5 Semantic Tableau Semantic tableau as introduced by Beth (Beth, 1959; Fitting, 1990) are used to prove validity by means of refutation. We are interested in satisfi- ability rather than validity. Given E we wish to build a model of E if one exists. Thus we are in- terested in the cases where the tableau succeeds in constructing a model. The distinction between these uses of semantic tableau is important, since our mechanism is not suitable for refutational proofs. In particular, it cannot express "some model fails to satisfy ¢" ex- cept as "some model satisfies -¢". Since our logic is non-classical the first is a strictly weaker condition than the second. Definition 6 Semantic Tableau: A branch is a set, S, of formulae. A configuration is a collection, {S1,...,S~}, of branches. A tableau is a sequence, (C1,..., Cnl, of configura- tions where each Ci+~ is a result of the application of an inference rule to Ci. If s is an inference rule, (Ci\{S}) U {sl,..., s',} is the result of applying the rule to G iff z eG. A tableau for ~, where E is a set of formulae, is a tableau in which C1 = {E}. A branch is closed iff (9¢)[{¢,--,¢} C 5']. A con- figuration is closed iff each of its branches is closed, and a tableau is closed iff it contains some closed configuration. A branch~ configuration, or tableau that is not closed is open. 5.1 Inference Rules Our inference rules fall into three groups. The first two, figures 3 and 4, are standard rules for propositional semantic tableau extended with equality (Fitting, 1990). The third group, figure 5, embody the properties of quasi-trees. The --,,~ rule requires the introduction of a new name into the tableau. To simplify this, tableau are carried out in a language augmented with a count- ably infinite set of new names from which these are drawn in a systematic way. The following two lemmas establish the correct- ness of the inference rules in the sense that no rule increases the set of models of any branch nor elim- inates all of the models of a satisfiable branch. Lemma 5 Suppose S' is derived from S in some tableau by some sequence of rule applications. Sup- pose M is a model, then: M~S'::~M~S. 
This follows nearly directly from the fact that all of our rules are non-strict, ie: the branch to which an inference rule is applied is a subset of every branch introduced by its application. Lemma 6 If S is a branch of some configuration of a tableau and ,S' is the set of branches resulting from applying some rule to S, then if there is a 77 consistent quasi-tree M such that M ~ S, then for some 5;~ E S' there is a consistent quasi-tree M' such that M' ~ S~. We sketch the proof. Suppose M ~ S. For all but --,,a it is straightforward to verify M also sat- isfies at least one of the S~. For ~,~, suppose M fails to satisfy either u ,~* t or -,t ,~* u. Then we claim some quasi-tree satisfies the third branch of the conclusion. This must map the new constant k to the witness for the rule. M has no such require- ment, but since k does not occur in S, the value of 2: M(k) does not affect satisfaction of S. Thus we get an appropriate M' by modifying z M' to map k correctly. Corollary 1 If there is a closed tableau for ¢ then no consistent quasi-tree satisfies ¢. No consistent quasi-tree satisfies a closed set of for- mulae. The result then follows by induction on the length of the tableau. 6 Constructing Models We now turn to the conditions for a branch to be sufficiently complete to fully specify a quasi-tree. In essence these just require that all formulae have been expanded to atoms, that all substitutions have been made and that the conditions in the definition of quasi-trees are met. 6.1 Saturated Branches Definition 7 A set of sentences S is downward saturated iff for all formulae ¢, ¢, and terms t, u, v: 1-Is CVCES=v.¢ES orCES 1-13 -',(¢ V ¢) E S =¢, ",¢ E S and ",¢ E S I-I 4 C A C E S =~ ff E S and C E S 1-I6 t ,~ t E S for all terms t occurring in S 117 tl ~ ul,t2 ~, uz E S =~ tl ,~* t2 E S ~ ul ,~* u2 E S, tl ,~+ t2 E S =¢, ul ,~+ u2 E S, tl ~ t2 E S ==~ u 1 <l u 2 ~ S, tl -< t2 E S =¢. Ul -.4 u2 E S, tl ~ t2 E S ~ ua ,~ u2 E S. t118 r ,~* t E S for all terms t occurring in S H9 t~uES~t,~* uES 111,o t ~ u E S =C, -,t ,~* u E S or ~u ,~* t E S 11,, t,~* u,u~* tES~t~uES I-I,z t ,~" u, u ,~* v E S ~ t ,~* v E S H*3 t ,~* v, u ,~* v E S ~ t ,~* u E S or u ,~* t E S H, 4 -.t ,~* u E S t-< uES oru-<t GS oru,¢ t ES H, 5 t ,~+ u E S ~ t ,~* u, ~u ,~* t E S H,6 t ,~+ u,s,~* t,u,~* vES ~ s,~+ v~S H*7 ~t ,~+ u E S ~ --t ,~* u E S or u .~* t E S H,8 t ,~ u E S ::C, t ,~+ u E S S,.¢ v¢ s,¢v¢,¢ I s,¢v¢,¢ S,¢A¢ A S,¢ A¢,¢,¢ S, "m "~ ~ S,-~-~¢, ¢ V s,-X¢ v ¢) s,-X¢ v ¢),-~¢,-~¢ ~V S,-~(¢ A ¢) S,-~(¢ A ¢), "-~¢ I s,-4¢ A ¢),-'~¢ -~A Figure 3. Elementary Rules 1-1, 9 t ,a v E S :----~ u -4 v E S or v -4 u E S or u ,~* t E S or v ,~* u E S H2o ",t ,~ u E S ::~ u ,~* t E S or-~t ,~* u E S or t ,~+ w, w ,~+ u E S, for some term w H2x t -4 u E S ~ -~t ,~* u, -~u ,~* t, --,u -4 t E S I-I2~* t -4 u, t ,~* s,u ,~* v E S ~ s -4 v E S H23 t -4 u, v ,~* t E S v -4 u E S or v ,~ + t, v ,~ + u E S 1-124 t -4 u, v ,l* u E S =~ t -4 v E S or v ,~ + t, v ,~ + u E S H25 t-4u, u-4vES~t-4vES H26 ~t-4 uE S=¢, u -4 t E S or t ,~* u E S or u ,~* t E S. The next lemma (essentially Hintikka's lemma) establishes the correspondence between saturated branches and quasi-trees. Lemma 7 For every consistent downward satu- rated set of formulae S there is a consistent quasi- tree M such that M ~ S. For every finite consis- tent downward saturated set of formulae, there is a such a quasi-tree which is finite. Again, we sketch the proof. Consider the set T(S) of terms occurring in a downward saturated set S. 
I-I6 and I-/7 assure that ~ is reflexive and substi- tutive. Sincet ~u,u~v E S=~t ~v E S, and u~u,u,~vE S~v~ u E Sby substitution of v for (the first occurrence of) u, it is transitive and symmetric as well. Thus ~ partitions T(S) into equivalence classes. Define the model H as follows: u n = 7"(s)/~, z~(k) = [k]~, :pH = {([t]~., [u]~) It '~ u ~ S}, :p. = {([t]~., [u]~.) It "~* u E S}, .A H = {([t]~,[u]~) I t,~+ uE S}, c" = {([t]~, [u]~) I t -4 ~ ~ s}. Since each of the conditions C1 through Cx2 corre- sponds directly to one of the saturation conditions, it is easy to verify that H satisfies Cq. It is equally easy to confirm that H is both consistent and nor- mal. 78 We claim that ¢ E S =¢- H ~ ¢. As is usual for versions of Hintikka's lemma, this is established by an induction on the structure of ¢. Space prevents us from giving the details here. For the second part of the lemma, if the set of formulae is finite, then the set of terms (and hence the set of equivalence classes) is finite. 6.2 Saturated Tableau Since all of our inference rules are non-strict, if a rule once applies to a branch it will always apply to a branch. Without some restriction on the applica- tion of rules, tableau for satisfiable sets of formulae will never terminate. What is required is a control strategy that guarantees that no rule applies to any tableau more than finitely often, but that will al- ways find a rule to apply to any open branch that is not downward saturated. Definition 8 Let EQs be the reflexive, symmetric, transitive closure of { (t, u) l t ~ u e S}. An inference rule, I, applies to some branch S of a configuration C iff • S is open • S • {Si I Si results from application of I to S} • if I introduces a new constant a occurring in formulae Cj(a) E Si, there is no term t and pairs (ul, va), (u2, v2), . . . E EQs such that for each of the Cj, ¢{t/a, ul/Vl,~2/v2,...} E S. (Where ¢{t/a, Ul/Vl, U2/V2,...} denotes the re- sult of uniformly substituting t for a, ul for vl, etc., in ¢.) The last condition in effect requires all equality rules to be applied before any new constant is in- troduced. It prevents the introduction of a formula involving a new constant if an equivalent formula already exists or if it is possible to derive one using only the equality rules. We now argue that this definition of applies does not terminate any branch too soon. Lemma 8 If no inference rule applies to an open branch S of a configuration, then S is downward saturated. This follows directly from the fact that for each of H1 through H26, if the implication is false there is a corresponding inference rule which applies. 5: ,5', t ,~ t any term t occurring in 5: ~ (reflexivity of ,~) 5:, t u, ¢(t) s,t u, +(t), ¢(?) ~s (substitution) ¢(i) denotes the result of substituting u for any or all occurrences oft in ¢. Figure 4. Equality Rules 5: 5:, r <1" t t any term occurring in S ort=r <1" (r minimum wrt <1") 5:, t ~ u (reflexivity of <1") S, t ~ u, t .~* u, u ,~* t <1r 5:,t <1" U, u <1" t 5:,t<1" u, u ,~* t, t ~, u * (anti-symmetry) <1 a S,t ~ U <1" 5:,t ~ u,-.t <1* u [ 5:,t # u,-~u <1* t r'. 
S, t <1" u, u <1" v * (transitivity) 5:~ t <1" U~ U <1" V~ t <1" V <it 5:, t .~* V~ U <1" V 5:, t <1" v, u .~* v, t ,~* u [ 5:, t ,~* v, u .~* v, u <1" t <1~ (branches linearly ordered) 5:~ --,t <1" u ---1<1" 5:, -~t <1* u, t -4 u [ 5:,-~t<1" u,u-4t [ S, "-,t <1* u, u <1 +t 5:, t <1 + u 5:, t ,~+ u, s <1" t, u <1" v 5:,t<1 + u, t <1* u, --,u <1* t <1+1 5:,t<1 + u, s <1* t, u <1* v, s <1 + v ~1+ 2 5:, -,t <1 + u 5:t t <1 u -1<1 + <11 5:, -~t <1 + u, -~t 4* u I 5:,-.t<1 + u, u <1* t 5:, t <1u, t <1 + u 5:, t <1v <12 5:,t<1v, u-4v [ 5:,t<1v, v-4u I 5:,t<1v, u<1*t [ 5:,t<1v, v<1* u any term u occurring in 5:. S~ ~t <J u "n<1 S,-.t <1u, u <1* t [ S,-.t ~ u,-~t <1* u [ 5:, ".t <1 u, t <1 + k, k <1 + u k new name 5:, t -4 U S, t -4 U, t <1* 8, U <1" V -<a "42 5:,t -4 u, ~t <1" u, ~u <1" t, ~U -4 t 5:~t -4 u,t <1" s~u <1" V,s -4 V 5:, t -4 u, v <1* t -<a 5:, t -4 u, v ,~* t, v -4 u [ 5:, t -4 u, v ,~* t, v <1+ t, v <1+ u 5:, t -4 u, v <1* u 5:~ t -4 U, v'~* u, t -4 v [ 5: , t -4 U , U -4 V -<t 5:~ t -4 U~ V "~* U~ V <1 + t~ V <1+ U 5:, "~t -4 u S , t .-4 u , u -4 v , t -4 v "44 ,5',--t-~u,u-~t [ S,--,t-4u, t<1*u [ S,--,t-4u, u<1*t Figure 5. Tree Rules -,-< 79 Proposition 2 (Termination) All tableau for fi- nite sets of formulae can be extended to tableau in which no rule applies to the final configuration. This follows from the fact that the size of any tableau for finite sets of formulae has a finite upper bound. The proof is in (Rogers & Vijay-Shanker, 1992). Proposition 3 (Soundness and Completeness) A saturated tableau for a finite set of formulae exists iff there is a consistent quasi-tree which sat- isfies E. Proof: The forward implication (soundness) follows from lemma 7. Completeness follows from the fact that if E is satisfiable there is no closed tableau for E (corollary 1), and thus, by propo- sition 2 and lemma 8, there must be a saturated tableau for E. [] 7 Extracting Trees from Quasi-trees Having derived some quasi-tree satisfying a set of relationships, we would like to produce a "mini- mal" representative of the trees it characterizes. In section 3.1 we define the conditions under which a quasi-tree is a tree. Working from those conditions we can determine in which ways a quasi-tree M may fail to be a tree, namely: , (~oM)* is a proper subset of:D M, • L M and/or 7) M may be partial, ie: for some t,u, U ~: (t -~ uV-~t -~ u) or U ~ (t ,~* u V -~t ,~* u). The case of partial L: M is problematic in that, while it is possible to choose a unique representa- tive, its choice must be arbitrary. For our applica- tions this is not significant since currently in TAGs left-of is fully specified and in parsing it is always resolved by the input. Thus we make the assump- tion that in every quasi-tree M from which we need to extract a tree, left-of will be complete. That is, for all terms t,u: M ~ t -~ uV-~t -~ u. Thus M ~ t ~* u V-~t ~* u ::v M ~ u ~* t. Suppose M ~ u ,~* t and M ~: (t 4" u V-~t ,~* u), and that zM(u) = x and zM(t) = y. In D-theory, this case never arises, since proper domination, rather than domination, is primitive. It is clear that the TAG applications require that x and y be iden- tified, ie: (y, x) should be added to/)m. Thus we choose to complete 7) M by extending it. Under the assumption that /: is complete this simply means: if M ~ -~t ,~* u, 7) M should be extended such that M ~ t ,~* u. That M can be extended in this way consistently follows from lemma 3. 
That the re- sult of completing ~)M in this way is unique follows from the fact that, under these conditions, extend- ing "D M does not extend either ,A M or ~M. The details can be found in (Rogers & Vijay-Shanker, 1992). In the resulting quasi-tree domination has been resolved into equality or proper domination. To arrive at a tree we need only to expand pM such that (,pM)* .: ~)M. In the proof of lemma 4 we show that this will be the case in any quasi-tree T closed under: (x, z) E A T and (Yy)[(z, y) fL A T or (y, z) ft A T] (z, z) • pT (x, y) • £w and (y, x) ~ £T U .A T u) • v r. The second of these conditions is our mechanism for completing/)M. The first amounts to taking immediate domination as the parent relation -- precisely the mechanism for finding the standard referent. Thus the tree we extract is both the cir- cumscriptive reading of (Vijay-Shanker, 1992) and the standard referent of (Marcus, Hindle & Fleck, 1983). References Aho, A. V., Hopcroft, J. E., & Ullman, J. D. (1974). The Design and Analysis of Computer Algo- rithms. Reading, MA: Addison-Wesley. Beth, E. W. (1959). The Foundations of Mathe- matics. Amsterdam: North-Holland. Fitting, M. (1990). First-order Logic and Auto- mated Theorem Proving. New York: Springer- Verlag. Marcus, M. P. (1980). A Theory of Syntactic Recog- nition for Natural Language. MIT Press. Marcus, M. P. (1987). Deterministic parsing and description theory. In P. Whitelock, M. M. Wood, H. L. Somers, R. Johnson, & P. Ben- nett (Eds.), Linguistic Theory and Computer Applications. Academic Press. Marcus, M. P., Hindle, D., & Fleck, M. M. (1983). D-theory: Talking about talking about trees. In Proceedings of the 21st AnnuaiMeeting of the Association for Computational Linguistics, Cambridge, MA. Rogers, J. & Vijay-Shanker, K. (1992). A formal- ization of partial descriptions of trees. Techni- cal Report TR92-23, Dept. of Comp. and Info. Sci., University of Delaware, Newark, DE. Vijay-Shanker, K. (1992). Using descriptions of trees in a tree-adjoining grammar. Computa- tional Linguistics. To appear. Vijay-Shanker, K. & Schabes, Y. (1992). Structure sharing in lexicalized tree-adjoining grammars. In Proceedings of the 16th International Con- ference on Computational Linguistics (COL- ING'92), Nantes. 80
COMPARING TWO GRAMMAR-BASED GENERATION A CASE STUDY Miroslav Martinovic and Tomek Strzalkowski Courant Institute of Mathematical Sciences New York University 715 Broadway, rm. 704 New York, N.Y., 10003 ALGORITHMS: ABSTRACT In this paper we compare two grammar-based gen- eration algorithms: the Semantic-Head-Driven Genera- tion Algorithm (SHDGA), and the Essential Arguments Algorithm (EAA). Both algorithms have successfully addressed several outstanding problems in grammar- based generation, including dealing with non-mono- tonic compositionality of representation, left-recursion, deadlock-prone rules, and nondeterminism. We con- centrate here on the comparison of selected properties: generality, efficiency, and determinism. We show that EAA's traversals of the analysis tree for a given lan- guage construct, include also the one taken on by SHDGA. We also demonstrate specific and common situations in which SHDGA will invariably run into serious inefficiency and nondeterminism, and which EAA will handle in an efficient and deterministic manner. We also point out that only EAA allows to treat the underlying grammar in a truly multi-directional manner. 1. INTRODUCTION Recently, two important new algorithms have been published ([SNMP89], [SNMP90], [S90a], [S90b] and [$91]) that address the problem of automated genera- tion of natural language expressions from a structured representation of meaning. Both algorithms follow the same general principle: given a grammar, and a struc- tured representation of meaning, produce one or more corresponding surface strings, and do so with a mini- mal possible effort. In this paper we limit our analysis of the two algorithms to unification-based formalisms. The first algorithm, which we call here the Seman- tic-Head-Driven Generation Algorithm (SHDGA), uses information about semantic heads ~ in grammar rules to obtain the best possible traversal of the generation tree, using a mixed top-down/bottom-up strategy. The semantic head of a rule is the literal on the right-hand side that shares the semantics with the literal on the left. The second algorithm, which we call the Essential Ar- guments Algorithm (EAA), rearranges grammar pro- ductions at compile time in such a way that a simple top-down left-to-right evaluation will follow an opti- mal path. Both algorithms have resolved several outstanding problems in dealing with natural language grammars, including handling of left recursive rules, non-mono- tonic compositionality of representation, and deadlock- prone rules 2. In this paper we attempt to compare these two algorithms along their generality and efficiency lines. Throughout this paper we follow the notation used in [SNMP90]. 2. MAIN CHARACTERISTICS OF SHDGA'S AND EAA'S TRAVERSALS SHDGA traverses the derivation tree in the seman- tic-head-first fashion. Starting from the goal predicate node (called the root), containing a structured repre- sentation (semantics) from which to generate, it selects a production whose leg-hand side semantics unifies with the semantics of the root. If the selected production passes the semantics unchanged from the left to some nonterminal on the right (the so-called chain rule), this later nonterminal becomes the new root and the algo- rithm is applied recursively. On the other hand, if no right-hand side literal has the same semantics as the root (the so called non-chain rule), the production is expanded, and the algorithm is reeursively applied to every literal on its right-hand side. 
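To keep the two rule types apart in what follows, here is a minimal, illustrative Python sketch of the chain/non-chain distinction and of how candidate productions are selected for a root. It is not the authors' code: semantics is reduced to plain ground strings (real SHDGA unifies structured terms), the toy rules only loosely echo rules (0) and (2) of the grammar fragment given shortly below, and the step that connects a completed non-chain expansion back to the root, described in the next paragraph, is omitted here.

from dataclasses import dataclass

@dataclass
class Rule:
    lhs_sem: str        # semantics carried by the left-hand side literal
    rhs_sems: list      # semantics carried by the right-hand side literals

def is_chain_rule(rule):
    # A chain rule passes the lhs semantics unchanged to some rhs literal.
    return rule.lhs_sem in rule.rhs_sems

def semantic_head(rule):
    # The rhs literal that shares the lhs semantics; None for non-chain rules.
    return rule.lhs_sem if is_chain_rule(rule) else None

def candidate_rules(root_sem, rules):
    # SHDGA considers productions whose lhs semantics matches the current root
    # (string equality stands in for unification in this sketch).
    return [r for r in rules if r.lhs_sem == root_sem]

# Toy semantics only, loosely echoing rules (0) and (2) of the fragment below.
rules = [Rule("decl(S)", ["S"]),       # non-chain: no rhs literal carries decl(S)
         Rule("S", ["Subj", "S"])]     # chain: the vp literal shares S with the lhs
for r in rules:
    print(r.lhs_sem, "chain" if is_chain_rule(r) else "non-chain", semantic_head(r))
print(len(candidate_rules("decl(S)", rules)))   # -> 1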
When the evalu- ation of a non-chain rule is completed, SHDGA con- nects its left-hand side literal (called the pivot) to the initial root using (in a backward manner) a series of appropriate chain rules. At this time, all remaining literals in the chain rules are expanded in a fixed order (left-to-right). 81 2 Deadlock-prone rules are rules in which the order of the ex- pansion of right-hand side literals cannot be determined locally (i.e. using only information available in this rule). Since SHDGA traverses the derivation tree ha the fashion described above, this traversal is neither top- down ('I'D), nor bottom-up (BU), nor left-to-right (LR) globally, with respect to the entire tree. However, it is LR locally, when the siblings of the semantic head literal are selected for expansion on the right-hand side of a chain rule, or when a non-chain rule is evaluated. In fact the overall traversal strategy combines both the TD mode (non-chain rule application) and the BU mode (backward application of chain rules). EAA takes a unification grammar (usually Prolog- coded) and normalizes it by rewriting certain left re- cursive rules and altering the order of right-hand side nonterminals in other rules. It reorders literals ha the original grammar (both locally within each rule, and globally between different rules) ha such a way that the optimal traversal order is achieved for a given evalu- ation strategy (eg. top-down left-to-righ0. This restruc- turing is done at compile time, so in effect a new executable grammar is produced. The resulting parser or generator is TD but not LR with respect to the origi- nal grammar, however, the new grammar is evaluated TD and LR (i.e., using a standard Prolog interpreter). As a part of the node reordering process EAA calcu- lates the minimal sets of essential arguments (msea's) for all literals ha the grammar, which in turn will al- low to project an optimal evaluation order. The opti- mal evaluation order is achieved by expanding only those literals which are ready at any given moment, i.e., those that have at least one of their mseas instantiated. The following example illustrates the traversal strategies of both algorithms. The grammar is taken from [SNMP90], and normalized to remove deadlock-prone rules in order to simplify the exposition? (0) sentence/deel(S)--> s(f'mite)/S. (1) sentence/imp(S) -- > vp(nonfmite,[np(_)/you]) IS. ,,..... (2) s(Form)/S - > Subj, vp(Form,[Subj/S. ...°°°. (3) vp(Form,Subcat)/S -- > v(Form,Z)/S, vpl(Form,Z)/Subcat. (4) vpl(Form,[Compl[ Z])/Ar --> vpl(Form, Z)/Ar, Compl. (5) vpl(Form,Ar)/Ar. (6) vp(Form,[Subj])/S -- > v(Form,[Subj])/VP, anx(Form, [Subj],VP)/S. (7) anx(Form,[Subjl,S)/S. (8) aux(Form,[Subjl,A)/Z--> adv(A)/B, aux(Form[Subj],B)/Z. ....... (9) v(finite,[np(_)/O,np(3-sing)lS])llove(S,O) -- > [loves]. (10) v(f'mite, [np(_)/O,p/up,np(3 -sing)/S])/ call_up(S,O) -- > [calls]. (11) v(fmite,[np(3-sing)/S])/leave(S) -- > [leaves]. ...... ° (12) np(3-sing)/john -- > [john]. (13) np(3-pl)/friends -- > [friends]. (14) adv(VP)/often(VP)--> [often]. The analysis tree for both algorithms is presented on the next page. (Figure 1.). The input semantics is given as decl(call_up~ohnfriends)). The output string be- comes john calls up friends. The difference lists for each step are also provided. 
They are separated from the rest of the predicate by the symbol I- The differ- ent orders in which the two algorithms expand the branches of the derivation tree and generate the termi- nal nodes are marked, ha italics for SHDGA, and in roman case for EAA. The rules that were applied at each level are also given. If EAA is rerun for alternative solutions, it will pro- duce the same output string, but the order in which nodes vpl (finite,[p/up,np(3-sing)/john])/[Subj]/Sl_S2, and np(..)/~ends/S2__l] (level 4), and also, vp1(finite,[np(3- sing)/john])/[Subj]/S1_S12, and p/up/S12_S2, at the level below, are visited, will be reversed. This hap- pens because both literals in both pairs are ready for the expansion at the moment when the selection is to be made. Note that the traversal made by SHDGA and the first traversal taken by EAA actually generate the terminal nodes ha the same order. This property is formally defined below. Definition. Two traversals T' and T" of a tree T are said to be the same-to-a-subtree (stas), if the follow- hag claim holds: Let N be any node of the tree T, and S~ ..... S all subtrees rooted at N. If the order in which the subtrees will be taken on for the traversal by T' is S? ..... S. n and by T" S. t ..... S.", then SJ =SJ ..... S."=S.". s s .1 J l .I t j (S~ is one of the subtrees rooted at N, for any k, and 1) Stas however does not imply that the order in which the nodes are visited will necessarily be the same. 3 EAA eliminates such rules using global node reordering ([$91]). 82 sentence/decl(call._up0ohn, friends)) I St ring_[l s(ftnite)/call up(john, friends) IString._[] SubJ l String_SO npO-slng)/joh n I String_SO np(3-sing)/john I UohnlS0LS0 10/ Rule (12) john /V IV q)(rmJte,ISubjl)/caUup(john,trien~) I S0_[] v(finite,Z)/call_up0ohn, friends) I SOSI vpl(nnlte,Z)/lSubjl I Sl [1 v(finite,[np( )/friends,p/up, np(3-~ng)/john])/ vpl(finite, [npO/friends,p/up, np(3-sing)/john])/ i~t]l_u p(j oh n, friends) I lcalls[ SII._Sl [SubjllSl_.[] S Rule(lO) calls vpl(finite, [p/up,ni)(3-singJIjohn])/[Subj] ISIS2 I I $ 6 RUle (4) vpl(flnite, [np(3-sing)/john)/[Subj] [ SI._S12 p/uplSl2_S2 l o/upll~l~l_S2 4 7 RUle(S) 6 Is v'pl(fln~,[np(3-slng)/john])/[np(3-si~ljohn] l Sl_ Sl H up U Rule ~0) Rule (1) Rul¢~ Sule~ up(_)/rr~lS2_[l np(3-pl)/frlendsl[~l Ill 8 1 9 Rule (13) 11I friends III FIGURE 1: EAA's and SHDGA's Traversals of An Analysis Tree. 3. GENERALITY-WISE SUPERIORITY OF EAA OVER SHDGA The traversals by SHDGA and EAA as marked on the graph are stas. This means that the order in which the terminals were produced (the leaves were visited) is the same (in this case: calls up friends john). As noted previously, EAA can make other traversals to produce the same output string, and the order in which the terminals are generated will be different in each case. (This should not be confused with the order of the ter- minals in the output string, which is always the same). The orders in which terminals are generated during al- ternative EAA traversals are: up calls friends john, friends calls up john, friends up calls john. In general, EAA can be forced to make a traversal corresponding to any permutation of ready literals in the right-hand side of a rule. We should notice that in the above example SHDGA happened to make all the right moves, i.e., it always expanded a literal whose msea happened to be instan- tiated. As we will see in the following sections, this will not always be the case for SHDGA and will be- come a source of serious efficiency problems. 
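Since the theorem below turns on whether a literal has an instantiated minimal set of essential arguments, a small illustrative sketch of EAA's ready-literal selection may help. It is not the authors' implementation: the literal names, the msea assignments, the `provides` bookkeeping, and the initial bindings are all invented (they loosely mirror the yes/no-question fragment analysed in Section 4), and real mseas range over argument positions of unification terms rather than flat variable names.

# Illustrative-only sketch of EAA's "ready literal" selection: a literal may be
# expanded once at least one of its minimal sets of essential arguments (mseas)
# is fully instantiated.

def ready(literal, mseas, bound):
    # True if some msea of `literal` is a subset of the currently bound variables.
    return any(msea <= bound for msea in mseas.get(literal, []))

def expansion_order(rhs, mseas, provides, initially_bound):
    # Repeatedly pick a ready literal; expanding it binds the variables it
    # `provides` (a stand-in for the bindings produced by its subderivation).
    bound, order, pending = set(initially_bound), [], list(rhs)
    while pending:
        pick = next((lit for lit in pending if ready(lit, mseas, bound)), None)
        if pick is None:     # nothing ready: the grammar needs normalization
            raise RuntimeError("no ready literal on the right-hand side")
        order.append(pick)
        bound |= set(provides.get(pick, []))
        pending.remove(pick)
    return order

# A schematic right-hand side in which only `adj` starts out ready (its msea is
# the clause semantics S); expanding it instantiates the verb semantics, and so on.
mseas    = {"adj": [{"S"}], "main_verb": [{"Verb"}], "subj": [{"Subj"}],
            "obj": [{"Obj"}], "aux_verb": [{"Num", "Pers", "Form"}]}
provides = {"adj": ["Verb"], "main_verb": ["Subj", "Obj", "Form"],
            "subj": ["Num", "Pers"], "obj": [], "aux_verb": []}
print(expansion_order(["aux_verb", "subj", "main_verb", "obj", "adj"],
                      mseas, provides, {"S"}))
# -> ['adj', 'main_verb', 'subj', 'aux_verb', 'obj']   (one admissible order)

SHDGA performs no such readiness test before expanding a literal.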
On the other hand, whenever SHDGA indeed follows an optimal traversal, EAA will have a traversal that is same- to-a-subtree with it. The previous discussion can be summarized by the next theorem. 83 Theorem: If the SHDGA, at each particular step dur- ing its implicit traversal of the analysis tree, visits only the vertices representing literals that have at least one of their sets of essential arguments instantiated at the moment of the visit, then the traversal taken by the SHDGA is the same-to-a-subtree (stas) as one of the traversals taken by EAA. The claim of the theorem is an immediate consequence of two facts. The first is that the EAA always selects for the expansion one of the literals with a msea cur- rently instantiated. The other is the definition of traversals being same-to-a-subtree (always choosing the same subtree for the next traversal). The following simple extract from a grammar, de- fining a wh-question, illustrates the forementioned (see Figure 2. below): ........ .° (1) whques/WhSem--> whsubj(Num)/WhSubj, whpred(Num,Tense, [WhSubj,WhObj]) /WhSem, whobj/WhObj. o.o ...... . ....... ,.° (2) whsubj(_.)/who -- > [who]. (3) whsubj(__)/what --> [what]. °° ...... ,° (4) whpred(sing,perf, [Subj, Obj])/wrote(Subj, Obj) -> [wrote]. ...°.,,,°, (5) whobj/this--> [this]. °°oo,ooo°° The input semantics for this example is wrote(who,this), and the output string who wrote this. The numbering for the edges taken by the SHDGA is given in italics, and for the EAA in roman case. Both algorithm~ expand the middle subtree first, then the left, and finally the right one. Each of the three subtrees has only one path, there- fore the choices of their subtrees are unique, and there- fore both algorithms agree on that, too. However, the way they actually traverse these subtrees is different. For example, the middle subtree is traversed bottom- up by SHDGA and top-down by EAA. whpred is expanded first by SI-IDGA (because it shares the se- mantics with the root, and there is an applicable non- chain rule), and also by EAA (because it is the only literal on the right-hand side of the rule (1) that has one of its msea's instantiated (its semantics)). After the middle subtree is completely expanded, both sibling literals for the whpred have their semantics in- stantiated and thus they are both ready for expansion. We must note that SHDGA will always select the left- most literal (in this case, whsubj), whether it is ready or not. EAA will select the same in the first pass, but it will expand whobj first, and then whsubj, if we force a second pass. In the first pass, the terminals are gen- erated in the order wrote who this, while in the second pass the order is wrote this who. The first traversal for EAA, and the only one for SHDGA are same-to-a- subtree. 4. EFFICIENCY-WISE SUPERIORITY OF EAA OVER SHDGA The following example is a simplified fragment of a parser-oriented grammar for yes or no questions. Using this fragment we will illustrate some deficiencies of SHDGA. o.°o.ooo.. (1) sentence/ques(askif(S)) -- > yesnoq/askif(S). (2i" ye's'noq/asld f(S)--> auxverb(Num,Pers,Form)/Aux, subj (Num,Pers)/Subj, mainverb(Form, [Sub j, Obj])/Verb, obj(_,J/Obj, adj([Verb])/S. wb,p~wr~e(wko.a,~) [ Q,m_U whs~bj(Num)/WhSJj l (~,es RI whpred(Num, Form, [WhSubj,WhObjD/ wrole(who,this) I RIR2 wl~bj/WhObj I~_11 1 2 wrote 1 I 3 3 ~4 4 w~ thh H ll~ill !I! ~TJ, t* O) su~ 4er3 1II ~" 11 FIGURE 2: EAA's and SHDGA's STAS Traversals of Who Question's Analysis Tree. 
84 (3) auxverb(sing, one,pres__perf)/laave(pres__perf, sing) --> [have]. (4) aux_verb(sing,one,pres_cont)/be(pres_cont, sing-l)--> [am]. (5) auxverb(sing,one,pres)/do(pres,sing- 1) -- > [do]. (6) aux_verb(sing,two,pres)/do(pres,sing-2)--> [do]. (7) aux_verb(sing,three,pres)/do(pres,sing-3) -- > [does]. (8) aux_verb(pl,one,pres)/do(pres,pl-1) -- > [do]. (9) subj(Num,Pers)/Subj -- > np(Num, Pers,su)/Subj. (10) obj(Num,Pers)/Obj -- > np(Num,Pers,ob)/Obj. (11) np(Num,Pers,Case)/NP --> noun(Num,Pers, Case)/NP. (12) np(Num,Pers,Case)/NP --> pnoun(Num,Pers, Case)/NP. (13) pnoun(sing,two,su)/you -- > [you]. (14) pnoun(sing,three,ob)/him -- > [him]. (15) main_verb(pres,[Subj,Obj])/see(Subj,Obj) --> [see]. (15a) main_verb(pres__perf, [Subj, Obj ])/seen(Subj, Obj ) --> [seen]. (15b) mainverb(perf, [Subj,Obj])/saw(Subj, Obj) --> [saw]. (16) adj([Verb])/often(Verb)--> [often]. The analysis tree (given on Figure 3.) for the input semantics ques ( askif (often (see (you,him) ) ) ) (the output string being do you see him often) is presented below. Both algorithms start with the rule (1). SHDGA se- lects (1) because it has the left-hand side nonterminal with the same semantics as the root, and it is a non- chain rule. EAA selects (1) because its left-hand side unifies with the initial query (-?- sentence (OutString__G) / ques(askif(often(see(you,him)))) ). Next, rule (2) is selected by both algorithms. Again, by SHDGA, because it has the left-hand side nonter- minal with the same semantics as the current root (yesnoq/askif...), and it is a non-chain rule; and by EAA, because the yesnoq/askif.., is the only nonterminal on the right-hand side of the previously chosen rule and it has an instantiated msea (its semantics). The crucial difference takes place when the right-hand side of rule (2) is processed. EAA deterministically selects adj for expansion, because it is the only rhs literal with an instantiated msea's. As a result of expanding adj, the main verb semantics becomes instantiated, and therefore main__verb is the next literal selected for expansion. After processing of main_verb is completed, Subject, Object, and Tense variables are instantiated, so that both subj and obj become ready. Also, the tense argument for aux_verb is instantiated (Form in rule (2)). After subj, se ntee~e/ques(askifloft en(see(yoo,him)))) ] String_[] ' I 1 yesnoqlaskiffonenlsee(you,him))) [ String_[] Ru~ (z) Rule aux_verb(sing,t wo, pres)/ do(pres,sing-2) [ Idol ROI_R0 Rule(o) 11 3 do V 1 sobj(sing,two)/ main_verb(pres, [you, him])/ obj(sing,three) youI[youlRl] RI see(you,him) ] [see [ R2]_R2 him [ [him ] R3]_R3 Role (9) Rule (15) Rule (10) 5 6 4 7 8 10 np(sing,two,su)/ see np(sing,three,ob)/ you I [you I R I]_RI 1I II1 him I [him [ R3]_R3 Rule.z)[ Rule(l,) I 6 5 9 9 pnoun(sing,two,su)/ pnoun(slng,three,ob)/ you I [you[ RI]RI him l [him I R3LR3 Rule (13) ] Rule (14) I 7 4 10 8 you him llI // IV /V adj([see(you,him) ])/ often(see( you, him)) I [one~ I [ILl Rule (16) I 3 11 often I V FIGURE 3: EAA's and SHDGA's Traversals of If Question's Analysis Tree. 85 and obj are expanded (in any order), Num, and Pers for aux_verb are bound, and finally aux_verb is ready, too. In contrast, the SHDGA will proceed by selecting the leftmost literal (auxverb(Num,Pers,Form)/Aux) of the rule (2). At this moment, none of its arguments is instantiated and any attempt to unify with an auxiliary verb in a lexicon will succeed. Suppose then that have is returned and unified with aux_verb with pres._perf as Tense and sing_l as Number. 
This restricts further choices of subj and main_verb. However, obj will still be completely randomly chosen, and then adj will reject all previous choices. The decision for rejecting them will come when the literal adj is expanded, because its semantics is often(see(you,him)) as inherited from yesnoq, but it does not match the previous choices for aux_verb, subj, main_verb, and obj. Thus we are forced to backtrack repeatedly, and it may be a while before the correct choices are made. In fact the same problem will occur whenever SHDGA selects a rule for expansion such that its leftmost right- hand side literal (first to be processed) is not ready. Since SHDGA does not check for readiness before ex- panding a predicate, other examples similar to the one discussed above can be found easily. We may also point out that the fragment used in the previous example is extracted from an actual computer grammar for Eng- lish (Sager's String Grammar), and therefore, it is not an artificial problem. The only way to avoid such problems with SHDGA would be to rewrite the underlying grammar, so that the choice of the most instantiated literal on the righthand side of a rule is forced. This could be done by chang- ing rule (2) in the example above into several rules which use meta nonterminals Aux, Subj, Main_Verb, and Obj in place of literals attx verb, subj, mainverb, and obj respectively, as shown below: . . . . . ° . . . . yesnoq/askif(S)--> askif/S. askif/S -- > Aux, Subj, Main Verb, Obj, adj ([Verb],[Aux,S-ubj,Main_Verb,Obj])IS. . . . . . . . . . . Since Aux, Subj, Main_Verb, and Obj are uninstan- tiated variables, we are forced to go directly to adj first. After adj is expanded the nonterminals to the left of it will become properly instantiated for expansion, so in effect their expansion has been delayed. However, this solution seems to put additional bur- den on the grammar writer, who need not be aware of the evaluation strategy to be used for its grammar. Both algorithms handle left recursion satisfactorily. SHDGA processes recursive chain rules rules in a con- strained bottom-up fashion, and this also includes dead- lock prone rules. EAA gets rid of left recursive rules during the grammar normalization process that takes place at compile-time, thus avoiding the run-time overhead. 5. MULTI-DIRECTIONALITY Another property of EAA regarded as superior over the SHDGA is its mult-direcfionality. EAA can be used for parsing as well as for generation. The algorithm will simply recognize that the top-level msea is now the string, and will adjust to the new situation. More- over, EAA can be run in any direction paved by the predicates' mseas as they become instantiated at the time a rule is taken up for expansion. In contrast, SHDGA can only be guaranteed to work in one direction, given any particular grammar, although the same architecture can apparently be used for both generation, [SNMP90], and parsing, [K90], [N89]. The point is that some grammars (as shown in the example above) need to be rewritten for parsing or generation, or else they must be constructed in such a way so as to avoid indeterminacy. While it is possible to rewrite grammars in a form appropriate for head- first computation, there are real grammars which will not evaluate efficiently with SHDGA, even though EAA can handle such grammars with no problems. 6. CONCLUSION In this paper we discussed several aspects of two natu- ral language generation algorithms: SHDGA and EAA. 
Both algorithms operate under the same general set of conditions, that is, given a grammar, and a structured representation of meaning, they attempt to produce one or more corresponding surface strings, and do so with a minimal possible effort. We analyzed the perform- ance of each algorithm in a few specific situations, and concluded that EAA is both more general and more ef- ficient algorithm than SHDGA. Where EAA enforces the optimal traversal of the derivation tree by precom- puting all possible orderings for nonterminal expan- sion, SHDGA can be guaranteed to display a compa- 86 rable performance only if its grammar is appropriately designed, and the semantic heads are carefully assigned (manually). With other grammars SHDGA will follow non-optimal generation paths which may lead to ex- treme inefficiency. In addition, EAA is a truly multi-directional algo- rithm, while SHDGA is not, which is a simple conse- quence of the restricted form of grammar that SHDGA can safely accept. This comparison can be broadened in several direc- tions. For example, an interesting problem that remains to be worked out is a formal characterization of the grammars for which each of the two generation algo- rithms is guaranteed to produce a finite and/or opti- mal search tree. Moreover, while we showed that SHDGA will work properly only on a subset of EAA's grammars, there may be legitimate g ~ that neither algorithm can handle. 7. ACKNOWLEDGEMENTS This paper is based upon work supported by the Defense Advanced Research Project Agency under Contract N00014-90-J-1851 from the Office of Naval Research, the National Science Foundation under Grant IRI-89-02304, and the Canadian Institute for Robot- ics and Intelligent Systems (IRIS). REFERENCES [C78] COLMERAUER, A. 1978. "Metamor- phosis Grammars." In Natural Language Communi- cation with Computers, Edited by L. Bole. Lecture Notes in Computer Science, 63. Springer-Verlag, New York, NY, pp. 133-189. [D90a] DYMETMAN, M. 1990. "A Gener- alized Greibach Normal Form for DCG's." CCRIT, Laval, Quebec: Ministere des Communications Can- ada. [D90b] DYMETMAN, M. 1990. "Left-Re- cursion Elimination, Guiding, and Bidirectionality in Lexical Grammars." To Appear. [DA84] DAHL, V., and ABRAMSON, H. 1984. "On Gapping Grammars." Proceedings of the Second International Conference on Logic Programming.Uppsala, Sweden, pp. 77-88. [DI88] DYMETMAN, M., and ISABELLE, P. 1988. "Reversible Logic Grammars for Machine Translation." Proceedings of the 2nd International Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages. Car- negie-Mellon University, Pittsburgh, PA. [DIP90] DYMETMAN, M., ISABELLE, P., and PERRAULT, F. 1991. "A Symmetrical Approach to Parsing and Generation." Proceedings of the 13th International Conference on Computational Linguis- tics (COLING-90). Helsinki, Finland, Vol. 3., pp. 90- 96. [GM89] GAZDAR, G., and MELLISH, C. 1989. Natural £zmguage Processing in Prolog. Addison- Wesley, Reading, MA. [K90] KAY, M. 1990. "Head-Driven Pars- ing." In M. Tomita (ed.), Current Issues in Parsing Technology, Kluwer Academic Publishers, Dordrecht, the Netherlands. [K84] KAY, M. 1984. "Functional Unifica- tion Grammar: A Formalism for Machine Translation." Proceedings of the lOth International Conference on Computational Linguistics (COLING-84). Stanford University, Stanford, CA., pp. 75-78. [N89] VAN NOORD, G. 1989. ~An Over- view of Head-Driven Bottom-Up Generation." 
In Pro- ceedings of the Second European Workshop on Natu- ral Language Generation. Edinburgh, Scotland. [PS90] PENG, P., and STRZALKOWSKI, T. 1990. "An Implementation of A Reversible Grammar." Proceedings of the 8th Conference of the Catmdian So- ciety for the Computational Studies of Intelligence (CSCS1-90). University of Ottawa, Ottawa, Ontario, pp. 121-127. [S90a] STRZALKOWSKI, T. 1990. "How to Invert A Natural Language Parser into An Efficient Gen- erator: An Algorithm for Logic Grammars." Proceed- ings of the 13th International Conference on Compu- tational Linguistics (COLING-90). Helsinki, Finland, Vol. 2., pp. 90-96. [S90b] STRZALKOWSKI, T. 1990. "Revers- ible Logic Grammars for Natural Language Parsing and Generation." Computational Intelligence Journal, Volume 6., pp. 145-171. 87 [$91] STRZALKOWSKI, T. 1991. "A Gen- eral Computational Method for Grammar Inversion." Proceedings era Workshop Sponsored by the Special Interest Groups on Generation and Parsing of the ACL. Berkeley, CA., pp. 91-99. [SNMP89] SHIEBER, S.M., VAN NOORD, G., MOORE, R.C., and PEREIRA, F.C.N. 1989. "A Semantic-Head-Driven Generation Algorithm for Uni- fication-Based Formalisms." Proceedings of the 27th Meeting of the ACL. Vancouver, B.C., pp. 7-17. [SNMP90] SHIEBER, S.M., VAN NOORD, G., MOORE, R.C., and PEREIRA, F.C.N. 1990. "Semantic-Head-Driven Generation." Computational Linguistics, Volume 16, Number 1. [W88] WEDEKIND, J. 1988. "Generation as Structure Driven Derivation.* Proceedings of the 12th International Conference on Computational Linguis- tics (COL1NG-88). Budapest, Hungary, pp. 732-737. 08
RECOGNITION OF LINEAR CONTEXT-FREE REWRITING SYSTEMS* Giorgio Satta Institute for Research in Cognitive Science University of Pennsylvania Philadelphia, PA 19104-6228, USA [email protected] ABSTRACT The class of linear context-free rewriting sys- tems has been introduced as a generalization of a class of grammar formalisms known as mildly context-sensitive. The recognition problem for lin- ear context-free rewriting languages is studied at length here, presenting evidence that, even in some restricted cases, it cannot be solved efficiently. This entails the existence of a gap between, for exam- ple, tree adjoining languages and the subclass of lin- ear context-free rewriting languages that generalizes the former class; such a gap is attributed to "cross- ing configurations". A few other interesting conse- quences of the main result are discussed, that con- cern the recognition problem for linear context-free rewriting languages. 1 INTRODUCTION Beginning with the late 70's, there has been a consid- erable interest within the computational linguistics field for rewriting systems that enlarge the gener- ative power of context-free grammars (CFG) both from the weak and the strong perspective, still re- maining far below the power of the class of context- sensitive grammars (CSG). The denomination of mildly context-sensitive (MCS) has been proposed for the class of the studied systems (see [Joshi et al., 1991] for discussion). The rather surprising fact that many of these systems have been shown to be weakly equivalent has led researchers to generalize *I am indebted to Anuj Dawax, Shyam Kaput and Owen Rainbow for technical discussion on this work. I am also grateful to Aravind Joshi for his support in this research. None of these people is responsible for any error in this work. This research was partially funded by the following grants: ARO grant DAAL 03-89-C-0031, DARPA grant N00014-90- J-1863, NSF grant IRI 90-16592 and Ben Franklin grant 91S.3078C-1. 89 the elementary operations involved in only appar- ently different formalisms, with the aim of captur- ing the underlying similarities. The most remark- able attempts in such a direction are found in [Vijay- Shanker et al., 1987] and [Weir, 1988] with the in- troduction of linear context-free rewriting systems (LCFRS) and in [Kasami et al., 1987] and [Seki et a/., 1989] with the definition of multiple context-free grammars (MCFG); both these classes have been in- spired by the much more powerful class of gener- alized context-free grammars (GCFG; see [Pollard, 1984]). In the definition of these classes, the gener- alization goal has been combined with few theoret- ically motivated constraints, among which the re- quirement of efficient parsability; this paper is con- cerned with such a requirement. We show that from the perpective of efficient parsability, a gap is still found between MCS and some subclasses of LCFRS. More precisely, the class of LCFRS is carefully studied along two interesting dimensions, to be pre- cisely defined in the following: a) the fan-out of the grammar and b) the production length. From previous work (see [Vijay-Shanker et al., 1987]) we know that the recognition problem for LCFRS is in P when both dimensions are bounded. 1 We complete the picture by observing NP-hardness for all the three remaining cases. 
If P~NP, our result reveals an undesired dissimilarity between well known for- malisms like TAG, HG, LIG and others for which the recognition problem is known to be in P (see [Vijay- Shanker, 1987] and [Vijay-Shanker and Weir, 1992]) and the subclass of LCFRS that is intended to gener- alize these formalisms. We investigate the source of the suspected additional complexity and derive some other practical consequences from the obtained re- suits. 1 p is the class of all languages decidable in deterministic polynomial time; NP is the class of all languages decidable in nondeterministic polynomial time. 2 TECHNICAL RESULTS This section presents two technical results that are . the most important in this paper. A full discussion of some interesting implications for recognition and parsing is deferred to Section 3. Due to the scope of the paper, proofs of Theorems 1 and 2 below are not carried out in all their details: we only present formal specifications for the studied reductions and discuss the intuitive ideas behind them. 2.1 PRELIMINARIES Different formalisms in which rewriting is applied independently of the context have been proposed in computational linguistics for the treatment of Nat- ural Language, where the definition of elementary rewriting operation varies from system to system. The class of linear context-free rewriting systems (LCFRS) has been defined in [Vijay-Shanker et al., 1987] with the intention of capturing through a gen- eralization common properties that are shared by all these formalisms. The basic idea underlying the definition of LCFRS is to impose two major restrictions on rewriting. First of all, rewriting operations are applied in the derivation of a string in a way that is independent of the context. As a second restriction, rewriting op- erations are generalized by means of abstract com- position operations that are linear and nonerasing. In a LCFR system, both restrictions are realized by defining an underlying context-free grammar where each production is associated with a function that encodes a composition operation having the above properties. The following definition is essentially the same as the one proposed in [Vijay-Shanker et al., 1987]. Definition 1 A rewriting system G = (VN, VT, P, S) is a linear context-free rewriting system if: • (i) VN is a finite set of nonterminal symbols, VT is a finite set of terminal symbols, S E VN is the start symbol; every symbol A E VN is associated with an integer ~o(A) > O, called the fan-out of A; (it) P is afinite set of productions of the form A --+ f(B1, B2,...,Br), r >_ O, A, Bi E VN, 1 < i < r, with the following restrictions: (a) f is a function in C °, where D = (V~.) ¢, ¢ is the sum of the fan-out of all Bi's and c = (b) f(xl,l,..., Zl,~(B,),..., xr,~(B.)) = (Yz,...,Y~(a)) is defined by some grouping into ~(A) sequences of all and only the elements in the sequence zx,1, ... ,Zr,~o(v,),ax, ...,ao, a >__ O, where aiEVT, l <i<a. The languages generated by LCFR systems are called LCFR languages. We assume that the start- ing symbol has unitary fan-out. Every LCFR sys- tem G is naturally associated with an underlying context-free grammar Gu. The usual context-free derivation relation, written =¢'a, , will be used in the following to denote underlying derivations in G. 
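To make Definition 1 concrete, consider a small example that is not from the original text: a system with a single fan-out-2 nonterminal A and productions S -> f(A), A -> g(A), A -> h(), where f((x1,x2)) = x1 x2, g((x1,x2)) = (a x1 b, c x2 d), and h() = (ab, cd). All three composition functions are linear and non-erasing, and the system generates {a^n b^n c^n d^n | n >= 1}, a language outside the context-free class. The Python sketch below simply evaluates these functions along a derivation; it is an illustration of the definition, not part of the formal development.

# Illustrative only -- a small fan-out-2 LCFRS, not taken from the paper.

def f(pair):                # S -> f(A):  f((x1, x2)) = x1 x2
    x1, x2 = pair
    return x1 + x2

def g(pair):                # A -> g(A):  g((x1, x2)) = (a x1 b, c x2 d)
    x1, x2 = pair
    return ("a" + x1 + "b", "c" + x2 + "d")

def h():                    # A -> h():   h() = (ab, cd)
    return ("ab", "cd")

def derive(n):
    # Evaluate the derivation S => f(g^(n-1)(h())) for n >= 1.
    tup = h()
    for _ in range(n - 1):
        tup = g(tup)
    return f(tup)

print([derive(n) for n in range(1, 4)])
# -> ['abcd', 'aabbccdd', 'aaabbbcccddd'], i.e. a^n b^n c^n d^n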
We will also use the reflexive and transitive closure of such a relation, written :=~a, • As a convention, whenever the evaluation of all functions involved in an underlying derivation starting with A results in a ~(A)-tuple w of terminal strings, we will say that * A derives w and write A =~a w. Given a nonter- minal A E VN, the language L(A) is the set of all ~(A)-tuples to such that A =~a w. The language generated by G, L(G), is the set L(S). Finally, we will call LCFRS(k) the class of all LCFRS's with fan-out bounded by k, k > 0 and r-LCFRS the class of all LCFRS's whose productions have right-hand side length bounded by r, r > 0. 2.2 HARDNESS FOR NP The membership problem for the class of linear context-free rewriting systems is represented by means of a formal language LRM as follows. Let G be a grammar in LCFRS and w be a string in V.~, for some alphabet V~; the pair (G, w) belongs to LRM if and only if w E L(G). Set LRM naturally represents the problem of the recognition of a linear context-free rewriting language when we take into account both the grammar and the string as input variables. In the following we will also study the de- cision problems LRM(k) and r-LRM, defined in the obvious way. The next statement is a characteriza- tion of r-LRM. Theorem 1 3SAT _<p I-LRM. Outline of the proof. Let (U, C) be an arbitrary in- stance ofthe 3SAT problem, where U = {Ul,..., up} is a set of variables and C = {Cl,...c,} is a set of clauses; each clause in C is represented by a string of length three over the alphabet of all lit- erals, Lu = {uz,~l,...,up,~p}. The main idea in the following reduction is to use the derivations of the grammar to guess truth assignments for U and to 90 use the fan-out of the nonterminal symbols to work out the dependencies among different clauses in C. For every 1 < k < p_ let .Ak = {c i [ uk is a substring of ci} and let .Ak = {c i [ ~k is a substring of cj}; let also w = clc2 ...ca. We define a linear context-free rewriting system G = (tiN, C, P, S) such that VN = {~/i, Fi [ 1 < i < p + 1} U {S}, every nonterminal (but S) has fan-out n and P contains the following productions (fz denotes the identity function on (C*)a): (i) S --* f0(T~), s f0(Fd, where fo(xl,..., xn) = za ... Xn; (ii) for every 1 < k < p and for every cj E .At: n - Tt -"* fl(Tk+l), Tk h(Fk+x), where = (=1,... ,=.); (iii) for every 1 < k < p and for every c i E Ak: Fk --* ~(kD (Fk), Fk --. h(Tk+l), --. h(fk+x), where 7(k'i)(xx, .... z,) = (Zl,... ,xici,... ,z,); (iv) Tp+l --*/p+10, A+10, where fp+10 = (~,"', C). From the definition of G it directly follows that w E L(G) implies the existence of a truth-assignment that satisfies C. The converse fact can he shown starting from a truth assignment that satisfies C and constructing a derivation for w using (finite) induc- tion on the size of U. The fact that (G, w) can he constructed in polynomial deterministic time is also straightforward (note that each function fO) or 7~ j) in G can he specified by an integer j, 1 _~ j _~ n). D The next result is a characterization of LRM(k) for every k ~ 2. Theorem 2 3SAT _<e LRM(2). Outline of the proof. Let (U,C) be a generic in- stance of the 3SAT problem, U = {ul,... ,up} and C = {Cl,...,Cn} being defined as in the proof of Theorem 1. The idea in the studied reduction is the following. We define a rather complex string w(X)w(2).., w(P)we, where we is a representation of the set C and w (1) controls the truth assignment for the variable ui, 1 < i < p. 
Then we construct a grammar G such that w(i) can be derived by G only in two possible ways and only by using the first string components of a set of nonterminals N(0 of fan-out two. In this way the derivation of the substring w(X)w(2) ... w(p) by nonterminals N(1),..., N (p) cor- responds to a guess of a truth assignment for U. Most important, the right string components of non- terminals in N (i) derive the symbols within we that are compatible with the truth-assignment chosen for ui. In the following we specify the instance (G, w) of LRM(2) that is associated to (U, C) by our reduc- tion. For every 1 _< i _< p, let .Ai = {cj [ ui is in- cluded in cj} and ~i = {cj [ ~i is included in cj}; let also ml = [.Ai[ + IAil. Let Q = {ai,bi [ 1 <_ i _< p} be an alphabet of not already used sym- bols; for every 1 <_ i <_ p, let w(O denote a se- quence of mi + 1 alternating symbols ai and bi, i.e. w(O E (aibl) + U (albi)*ai. Let G -- (VN, QUC, P, S); we define VN ---- {S} U {a~ i) I 1 <_ i <_ p, 1 <_ j <_ mi} and w = w(t)w(=)...w(P)cxc2...ea. In order to specify the productions in P, we need to introduce further notation. We define a function a such that, for every 1 _< i _< p, the clauses Ca(i,1),Ca(i,2),'"Ca(i,lAd) are all the clauses in .Ai and the clauses ea(i,l.a,l+l),...ca(i,m0 are all the clauses in ~i. For every 1 < i < p, let 7(i, 1) = albi and let 7(i, h) = ai (resp. bl) if h is even (resp. odd), 2 < h < mi; let also T(i, h) = ai (resp. bi) ifh is odd (resp. even), 1 < h < mi - 1, and let ~(i, mi) = albi (resp. biai) if mi is odd (resp. even). Finally, let P z = ~"~i=1 mi. The following productions define set P (the example in Figure 1 shows the two possible ways of deriving by means of P the substring w(0 and the corresponding part of Cl ... ca). (i) for every 1 < i < p: (a) for 1 < h < [~4,[: Ai') .-+ (7(i,h),cc,(i,h)), A(i) ~ (7(i, h), e), (b) for JAil+ 1 < h < mi: h), A (i) ~ ('~(i, h), c,(i,h)), A (0 --~ (~(i, h), e); (ii) S--* f(Ail),...,A~!,..., A~), 91 i I w =... ai bi al bi ai Cjl A ~ CJl , $ .ll , ... c i:z ... c j3 ... cs4 ... E c~,E E Figure 1: Let .Ai = {ej2,ej,} and ~i = {cja,cjs}. String w (i) can be derived in only two possible ways in G, corresponding to the choice ui = trne/false. This forces the grammar to guess a subset of the clauses contained in ,Ai/.Ai, in such a way that all of the clauses in C are derived only once if and only if there exists a truth-assignment that satisfies C. where f is a function of 2z string variables de- fined as f(z~l),y~l),, g(1) • (1) Z(p) • (p)l • ., ~ l , Y ~ l , . . . 1 fl~plyrnpj "-" z(1)z(1) z 0) .z~yay2..y. 1 2 "'" ml-. and for every 1 _ j _< n, yj is any sequence of all variables y(i) such that ~(i, h) = j. It is easy to see that [GI and I wl are polynomi- ally related to I UI and I C l- From a derivation of w G L(G), we can exhibit a truth assignment that satisfies C simply by reading the derivation of the prefix string w(X)w(2)...w (p). Conversely, starting from a truth assignment that satisfies C we can prove w E L(G) by means of (finite) induction on IU l: this part requires a careful inspection of all items in the definition of G. ra 2.3 COMPLETENESS FOR NP The previous results entail NP-hardness for the de- cision problem represented by language LRM; here we are concerned with the issue of NP-completeness. Although in the general case membership of LRM in NP remains an open question, we discuss in the following a normal form for the class LCFRS that enforces completeness for NP (i.e. 
the proposed nor- mal form does not affect the hardness result dis- cussed above). The result entails NP-completeness for problems r-LRM (r > 1) and LRM(k) (k > 2). We start with some definitions. In a lin- ear context-free rewriting system G, a derivation A =~G w such that w is a tuple of null strings is called a null derivation. A cyclic derivation has the underlying form A ::~a. aAfl, where both ~ and derive tuples of empty strings and the overall ef- fect of the evaluation of the functions involved in the derivation is a bare permutation of the string components of tuples in L(A) (no recombination of components is admitted). A cyclic derivation is min- imal if it is not composed of other cyclic deriva- tions. Because of null derivations in G, a deriva- tion A :~a w can have length not bounded by any polynomial in [G I; this peculiarity is inherited from context-free languages (see for example [Sippu and Soisalon-Soininen, 1988]). The same effect on the length of a derivation can be caused by the use of cyclic subderivations: in fact there exist permuta- tions of k elements whose period is not bounded by any polynomial in k. Let A f and C be the set of all nonterminals that can start a null or a cyclic deriva- tion respectively; it can be shown that both these sets can be constructed in deterministic polynomial time by using standard algorithms for the computa- tion of graph closure. For every A E C, let C(A) be the set of all permu- tations associated with minimal cyclic productions starting with A. We define a normal form for the class LCFRS by imposing some bound on the length of minimal cyclic derivations: this does not alter the weak generative power of the formalism, the only consequence being the one of imposing some canon- ical base for (underlying) cyclic derivations. On the basis of such a restriction, representations for sets C(A) can be constructed in deterministic polynomial time, again by graph closure computation. Under the above assumption, we outline here a proof of LRMENP. Given an instance (G, w) of the LRM problem, a nondeterministic Turing machine 92 M can decide whether w E L(G) in time polynomial in I(G, w) l as follows. M guesses a "compressed" representation p for a derivation S ~c w such that: (i) null subderivations within p' are represented by just one step in p, and (ii) cyclic derivations within p' are represented in p by just one step that is associated with a guessed permutation of the string components of the involved tuple. We can show that p is size bounded by a polynomial in I (G, w)[. Furthermore, we can verify in determin- istic polynomial time whether p is a valid derivation of w in G. The not obvious part is verifying the permutation guessed in (ii) above. This requires a test for membership in the group generated by per- mutations in C(A): such a problem can be solved in deterministic polynomial time (see [Furst et ai., 19801). 3 IMPLICATIONS In the previous section we have presented general results regarding the membership problem for two subclasses of the class LCFRS. Here we want to discuss the interesting status of "crossing depen- dencies" within formal languages, on the base of the above results. Furthermore, we will also derive some observations concerning the existence of highly efficient algorithms for the recognition of fan-out and production-length bounded LCFR languages, a problem which is already known to be in the class P. 
3.1 CROSSING CONFIGURATIONS As seen in Section 2, LCFRS(2) is the class of all LCFRS of fan-out bounded by two, and the mem- bership problem for the corresponding class of lan- guages is NP-complete. Since LCFRS(1) = CFG and the membership problem for context-free lan- guages is in P, we want to know what is added to the definition of LCFRS(2) that accounts for the dif- ference (assuming that a difference exists between P and NP). We show in the following how a binary relation on (sub)strings derived by a grammar in LCFRS(2) is defined in a natural way and, by dis- cussing the previous result, we will argue that the additional complexity that is perhaps found within LCFRS(2) is due to the lack of constraints on the way pairs of strings in the defined relation can be composed within these systems. Let G E LCFRS(2); in the general case, any non- terminal in G having fan-out two derives a set of pair of strings; these sets define a binary relation that is called here co-occurrence. Given two pairs (Wl, w'l) and (w~, w'~) of strings in the co-occurrence relation, there are basically two ways of composing their string components within a rule of G: either by nesting (wrapping) one pair within the other, e.g. wlw2w~w~l, or by creating a crossing configu- ration, e.g. wlw2w'lw~; note how in a crossing con- figuration the co-occurrence dependencies between the substrings are "crossed". A close inspection of the construction exhibited by Theorem 2 shows that grammars containing an unbounded number of crossing configurations can be computationally com- plex if no restriction is provided on the way these configurations are mutually composed. An intuitive idea of why such a lack of restriction can lead to the definition of complex systems is given in the follow- ing. In [Seki et al., 1989] a tabular method has been presented for the recognition of general LCFR lan- guages as a generalization of the well known CYK algorithm for the recognition of CFG's (see for in- stance [Younger, 1967] and [Aho and Ullman, 1972]). In the following we will apply such a general method to the recognition of LCFRS(2), with the aim of hav- ing an intuitive understanding of why it might be dif- ficult to parse unrestricted crossing configurations. Let w be an input string of length n. In Figure 2, the case of a production Pl : A --* f ( B1, B2, . . . , Br ) is depicted in which a number r of crossing con- figurations are composed in a way that is easy to recognize; in fact the right-hand side of Pl can be recognized step by step. For a symbol X, assume B2 I I I I I I I I I i Figure 2: Adjacent crossing configurations defining a production Pl : A ~ f(B1, B2,..., Br) where each of the right-hand side nonterminals has fan-out two. that the sequence X, (il, i2),..., (iq-1, iq) means X derives the substrings of w that matches the po- sitions (i1,i2),..., (iq-l,iq) within w; assume also that A[t] denotes the result of the t-th step in the recognition of pl's right-hand side, 1 < t < r. Then each elementary step in the recognition of Pl can 93 be schematically represented as an inference rule as follows: A[t], (ia, i,+a), (S',, J,+*) • B,+a, (it+a, it+s), (jr+a, Jr+2) Air + 1], (ia, it+s), (jl, Jr+2) O) The computation in (1) involves six indices ranging over {1..n}; therefore in the recognition process such step will be computed no more than O(n 6) times. B2 B3 ... 
i ~ °" I I I I I I I I I I I I I I I Figure 3: Sparse crossing configurations defining a production P2 : A ~ f(B1, Bs,..., Br); every non- terminal Bi has fan-out two. On the contrary, Figure 3 presents a production P2 defined in such a way that its recognition is consider- ably more complex. Note that the co-occurrence of the two strings derived by Ba is crossed once, the co- occurrence of the two strings derived by B2 is crossed twice, and so on; in fact crossing dependencies in P2 are sparse in the sense that the adjacency property found in production Pl is lost. This forces a tabular method as the one discussed above to keep track of the distribution of the co-occurrences recognized so far, by using an unbounded number of index pairs. Few among the first steps in the recognition of ps's right-hand side are as follows: A[2], (i1, i4), (i5, i6) Bz, li4,i51, lis,igl At3], (it, i6), (is, i9) A[3], (il, i6), (is, i9) B4,(i6, ir),{il,,im} A[4], (il, i7), (is, i9), (iai, i12) A[4], (it, i7), (is, i9), (ixl, i]2) /35, (i7, is), (ilz, i14) (2) a[51, (it, i9), (/ix, it2), (ilz, i14) From Figure 3 we can see that a different order in the recognition of A by means of production P2 will not improve the computation. Our argument about crossing configurations shows why it might be that recognition/parsing of LCFRS(2) cannot be done efficiently. If this is true, we have a gap between LCFR systems and well known mildly context-sensitive formalisms whose membership problem is known to have polynomial solutions. We conclude that, in the general case, the addition of restrictions on crossing configurations should be seriously considered for the class LCFRS. As a final remark, we derive from Theorem 2 a weak generative result. An open question about LCFRS(k) is the existence of a canonical bilinear form: up to our knowledge no construction is known that, given a grammar G E LCFRS(k) returns a weakly equivalent grammar G ~ E 2-LCFRS(k). Since we know that the membership problem for 2-LCFRS(k) is in P, Theorem 2 entails that the construction under investigation cannot take poly- nomial time, unless P=NP. The reader can easily work out the details. 3.2 RECOGNITION OF r-LCFRS(k) Recall from Section 2 that the class r-LCFRS(k) is defined by the simultaneous imposition to the class LCFRS of bounds k and r on the fan-out and on the length of production's right-hand side respectively. These classes have been discussed in [Vijay-Shanker et al., 1987], where the membership problem for the corresponding languages has been shown to be in P, for every fixed k and p. By introducing the no- tion of degree of a grammar in LCFRS, actual poly- nomial upper-bounds have been derived in [Seki et al., 1989]: this work entails the existence of an inte- ger function u(r, k) such that the membership prob- lem for r-LCFRS(k) can be solved in (deterministic) time O(IGIIwlU(r'k)). Since we know that the mem- bership problems for r-LCFRS and LCFRS(k) are NP-hard, the fact that u(r, k) is a (strictly increas- ing) non-asymptotic function is quite expected. With the aim of finding efficient parsing al- gorithms, in the following we want to know to which extent the polynomial upper-bounds men- tioned above can be improved. Let us consider for the moment the class 2-LCFRS(k); if we restrict our- selves to the normal form discussed in Section 2.3, we know that the recognition problem for this class is NP-complete. 
Assume that we have found an op- timal recognizer for this class that runs in worst case time I(G, w, k); therefore function I determines the best lower-bound for our problem. Two cases then arises. In a first case we have that ! is not bounded by any polynomial p in ]G I and Iwl: we can eas- ily derive that PcNP. In fact if the converse is true, then there exists a Turing machine M that is able to recognize 2-LCFRS in deterministic time I(G, w)I q, for some q. For every k > 0, construct a Turing machine M (k) in the following way. Given (G, w) as input, M (~) tests whether G E2-LCFRS(k) (which 94- is trivial); if the test fails, M(t) rejects, otherwise it simulates M on input (G, w). We see that M (k) is a recognizer for the class 2-LCFRS(k) that runs in deterministic time I(G, w)I q. Now select k such that, for a worst case input w E ~* and G E 2- LCFRS(k), we have l(G, w,k) > I(G, w)Iq: we have a contradiction, because M (k) will be a recognizer for 2-LCFRS(k) that runs in less than the lower- bound claimed for this class. In the second case, on the other hand, we have that l is bounded by some polynomial p in [G [ and I w I; a similar argument applies, exhibiting a proof that P=NP. From the previous argument we see that finding the '"oest" recognizer for 2-LCFRS(k) is as difficult as solving the P vs. NP question, an extremely dif- ficult problem. The argument applies as well to r- LCFRS(k) in general; we have then evidence that considerable improvement of the known recognition techniques for r-LCFRS(k) can be a very difficult task. 4 CONCLUSIONS We have studied the class LCFRS along two dimen- sions: the fan-out and the maximum right-hand side length. The recognition (membership) problem for LCFRS has been investigated, showing NP-hardness in all three cases in which at least one of the two di- mensions above is unbounded. Some consequences of the main result have been discussed, among which the interesting relation between crossing configura- tions and parsing efficiency: it has been suggested that the addition of restrictions on these configu- rations should be seriously considered for the class LCFRS. Finally, the issue of the existence of effi- cient algorithms for the class r-LCFRS(k) has been addressed. References [Aho and Ullman, 1972] A. V. Aho and J. D. Ull- man. The Theory of Parsing, Translation and Compiling, volume 1. Prentice-Hall, Englewood Cliffs, N J, 1972. [Furst et al., 1980] M. Furst, J. Hopcroft, and E. Luks. Polynomial-time algorithms for permu- tation groups. In Proceedings of the 21 th IEEE Annual Symposium on the Foundations of Com- puter Science, 1980. [Joshi et aL, 1991] A. Joshi, K. Vijay-Shanker, and D. Weir. The convergence of mildly context- 95 sensitive grammatical formalisms. In P. Sells, S. Shieber, and T. Wasow, editors, Foundational Issues in Natual Language Processing. MIT Press, Cambridge MA, 1991. [Kasami et al., 1987] T. Kasami, H. Seki, and M. Fujii. Generalized context-free grammars, mul- tiple context-free grammars and head grammars. Technical report, Osaka University, 1987. [Pollard, 1984] C. Pollard. Generalized Phrase Structure Grammars, Head Grammars and Nat- ural Language. PhD thesis, Stanford University, 1984. [Seki et al., 1989] H. Seki, T. Matsumura, M. Fujii, and T. Kasami. On multiple context-free gram- mars. Draft, 1989. [Sippu and Soisalon-Soininen, 1988] S. Sippu and E. Soisalon-Soininen. Parsing Theory: Languages and Parsing, volume 1. Springer-Verlag, Berlin, Germany, 1988. [Vijay-Shanker and Weir, 1992] K. Vijay-Shanker and D. 
J. Weir. Parsing con- strained grammar formalisms, 1992. To appear in Computational Linguistics. [Vijay-Shanker et al., 1987] K. Vijay-Shanker, D. J. Weir, and A. K. Joshi. Characterizing structural descriptions produced by various grammatical for- malisms. In 25 th Meeting of the Association for Computational Linguistics (ACL '87), 1987. [Vijay-Shanker, 1987] K. Vijay-Shanker. A Study of Tree Adjoining Grammars. PhD thesis, Depart- ment of Computer and Information Science, Uni- versity of Pennsylvania, 1987. [Weir, 1988] D. J. Weir. Characterizing Mildly Context-Sensitive Grammar Formalisms. PhD thesis, Department of Computer and Information Science, University of Pennsylvania, 1988. [Younger, 1967] D. H. Younger. Recognition and parsing of context-free languages in time n 3. In- formation and Control, 10:189-208, 1967.
ACCOMMODATING CONTEXT CHANGE Bonnie Lynn Webber and Breck Baldwin Department of Computer and Information Science University of Pennsylvania Philadelphia, PA 19104-6389 Interact: {bonnie~central,breck@linc}.cis.upenn.edu* ABSTRACT Two independent mechanisms of context change have been discussed separately in the literature - context change by entity introduction and context change by event simulation. Here we discuss their integration. The effectiveness of the integration de- pends in part on a representation of events that cap- tures people's uncertainty about their outcome - in particular, people's incomplete expectations about the changes effected by events. We propose such a representation and a process of accommodation that makes use of it, and discuss our initial implementa- tion of these ideas. Introduction Consider the following example: Example 1 John made a handbag from an inner-tube. a. He sold it for twenty dollars. b. *He sold them for fifty dollars. c. He had taken it from his brother's car. d. Neither of them was particularly useful. Here two entities are introduced via indefinite noun phrases (NPs) in the first sentence. The alternative follow-ons (a-d) show that subsequent reference to those entities is constrained. In particular, (b) high- lights the difference in their existential status, even though there is no syntactic difference in how they are introduced. Now consider *This work was partially supported by ARO grant DAAL 03-89-C-0031, DARPA grant N00014-90-J-1863, and NSF grant IRI 90-16592 to the University of Pennsylvania. The paper draws upon material first presented at the workshop on Defensible Reasoning in Semantics and Pragmatics held at the European Summer School on Logic, Language and Infor- mation, Saarbr~cken, Germany, August 1991. Example 2 Mix the flour, butter and water. a. Knead the dough until smooth and shiny. b. Spread the paste over the blueberries. c. Stir the batter until all lumps are gone. In each of the alternative follow-on (a-c), a different definite NP refers to the result of the mixing, even though the terms "dough", "paste" and "batter" are not interchangeable. (They denote substances with different consistencies, from a pliant solid - dough - to a liquid - batter.) In both these examples, events 1 are mentioned that change the world being described. These exam- ples will be used to show why the two mechanisms of context change discussed separately in the litera- ture (context change by entity introduction and con- text change by event simulation) must be integrated (Section 2). For such integration to be effective, we argue that it must be based on a representation of events that captures people's uncertainty about their outcome - in particular, people's incomplete expec- tations about the changes effected by events. An un- derstanding system can then use these expectations to accommodate [15] the particular changes that are mentioned in subsequent discourse (Section 3). In Section 4, we discuss our initial implementation of these ideas. This work is being carried out as part of a project (AnlmNL) aimed at creating animated task simu- lations from Natural Language instructions [2; 4; 5; 6; 7; 14; 20]. Instructions are a form of text rich in the specification of events intended to alter the world in some way. Because of this, the issues discussed in this paper are particularly important to both under- standing and generating instructions. 96 1Event is used informally to mean any kind of action or process. 
Mechanisms of Context Change Computational Linguistics research has recognized two independent mechanisms of context change. The first to have been recognized might be called context change by entity introduction. It was first imple- mented in Woods' question-answering system LU- NAR [21; 22]. For each non-anaphoric referential noun phrase (NP) in a question, including a ques- tioned NP itself, LUNAR would create a new con- stant symbol to represent the new entity, putting an appropriate description on its property list. For ex- ample, if asked the question "Which breccias contain molybdenum?", LUNAR would create one new con- stant to represent molybdenum and another to repre- sent the set of breccias which contain molybdenum. Each new constant would be added to the front of LUNAR's history list, thereby making it available as a potential referent for subsequent pronominal and definite NP anaphors (e.g. "Do they also contain ti- tanium?"). Webber [19] further developed this pro- cedure for introducing and characterizing discourse entities available for anaphoric reference A similar mechanism of context change is embed- ded in formal dynamic theories of discourse, includ- ing Kamp's Discourse Representation Theory [11] and Heim's File Change Semantics [10]. We briefly describe Heim's approach, to show this similarity. Heim's files constitute an intermediate level of rep- resentation between the sentences of a text and the model which gives them their truth values. A sen- tence can be viewed as denoting a function from an input file to an output file. Each indefinite NP in a sentence requires a new file card in the output file which does not appear in the input file, on which is inscribed the properties of the new entity. Each definite NP must either map to an existing file card or have a semantic association with an existing card, allowing it to be accommodated into the discourse. In the latter case, a new file card is inserted in the input file which the definite NP is now taken as map- ping to. Context change therefore consists of new annotations to existing cards and new cards added for indefinite NPs and accommodated definite NPs. The files do not change in any other way that reflects events described in the text. Formal theories of discourse have been broadened to allow for types of "embedded contexts" associated with modals [17] and with propositional attitudes [1]. Although they have also begun to deal with problems of tense and the temporal relationship of events de- 97 scribed in a text [12; 16], there is still no connection between the events described in a text and the indi- viduals introduced therein. Context change by event simulation is a feature of Dale's recent Natural Language generation system EPICURE [3], which generates recipe texts from an underlying plan representation. In EPICURE, the in- dividuals available for reference change in step with the events described in the text. ~ In a sense, EPI- CURE is simulating the effects of the events that the text describes. In implementing this, Dale represents actions with STRIPS-like operators which can change the world from one state to another. Each object and state in EPICURE has a unique index, with the set of ob- jects available in a given state constituting its work- ing set. 
With respect to objects 3, an action can have two types of effects: it can change a property of an object (e.g., from being an individual carrot to be- ing a mass of grated carrot), or it can add an object to or remove it from the world, as represented in the current working set (e.g., flour disappears as an independent entity when combined with water, and dough appears). The preconditions and postcondi- tions of each action indicate the objects required in the working set for its performance and the changes it makes to objects in the working set as a result. For example, ADD (in the sense of "add X to Y") has as preconditions that X and Y be in the current working set and as post-conditions, that X and Y are absent from the resulting working set and a new object Z is present whose constituents are X and Y. The form of recipe that EPICURE generates is the common one in which a list of ingredients is followed by instructions as to what to do with them. Thus all entities are introduced to the reader in this ini- tial list (e.g., "four ounces of butter beans", "a large onion", "some sea salt", etc.) before any mention of the events that will (deterministically) change their properties or their existential status. As a result, in the text of the recipe, EPICURE only embodies con- text change by event simulation: no new entities are introduced in the text that are not already known from the list of ingredients. 2In earlier work, Grosz [8] noticed that in task-oriented di- alogues, the performance of actions could alter what objects the speakers would take to be in .focus and hence take as the intended referents of definite pronouns and NPs. However, ac- tual changes in the properties and existential status of objects due to actions were not part of Grosz' study. ZDale construes and also implements the notion of object very broadly, so that the term applies equally well to a two- pound package of parsnips and a tablespoon of salt Our work on integrating these two mechanisms of context change involves dropping Dale's assumption that states are complete specifications of an underly- ing model. (To emphasize that descriptions are par- tial, we will use the term situation rather than state.) As in EPICURE, actions are represented here by op- erators - functions from one situation to another. The meaning of a clause is given in terms of these operators. 4 Also as in EPICURE, the term working set is used for the set of entities in the discourse con- text. For clarity, we refer to the working set associ- ated with the situation prior to the described event as the WSi, and the working set associated with the situation after it as the WSo. An indefinite NP in the clause may introduce an entity into the WSi. Al- ternatively, it may denote an entity in the WSo that corresponds to a result of the event being described. Whether an entity introduced into WSi persists into WSo will depend on the particular event. This is characterized as in EPICURE by preconditions on WSi and postconditions on WSo, plus a default as- sumption, that if an action is not known to affect an object and the text does not indicate that the object has been affected, then one assumes it has not been. For example, consider an operator corresponding to MAKE X FROM Y (in the sense used in Exam- ple 1). Its precondition is that X is in WSi. Its postconditions are that X is not in WSo, Y is in WSo, and mainConstituentOf(Y,X). 
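To make this concrete, the following is a minimal illustrative sketch of the idea, not the authors' code: the AnimNL implementation described later is in Prolog, and the names Entity, new_entity and apply_make below are hypothetical. It models an operator as a map from an input working set (WSi) to an output working set (WSo) with the preconditions and postconditions just described; the prose discussion of the handbag example continues below.

# Illustrative sketch only (the AnimNL implementation is in Prolog; the names
# Entity, new_entity and apply_make are hypothetical): a STRIPS-like operator
# over working sets, following the pre/postconditions described above.

from dataclasses import dataclass
from itertools import count

_ids = count(1)

@dataclass(frozen=True)
class Entity:
    index: int          # unique index, as in EPICURE
    description: str

def new_entity(description):
    """Introduce a new discourse entity with a unique index."""
    return Entity(next(_ids), description)

def apply_make(ws_in, source, product_description):
    """MAKE: the source material must be in the input working set (WSi);
    it does not persist into the output working set (WSo), where a new
    product entity appears whose main constituent is the source."""
    assert source in ws_in                     # precondition on WSi
    product = new_entity(product_description)  # e.g. the handbag
    ws_out = (ws_in - {source}) | {product}    # source removed, product added
    return ws_out, product, {product: source}  # mainConstituentOf(product, source)

# "John made a handbag from an inner-tube."
inner_tube = new_entity("inner-tube")
ws_i = {inner_tube}
ws_o, handbag, constituents = apply_make(ws_i, inner_tube, "handbag")
assert inner_tube not in ws_o and handbag in ws_o

A clause describing a subsequent event would then take ws_o as its input working set, while a clause describing a prior situation would be evaluated against ws_i.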
In response to the sentence "John made a handbag from an inner- tube" (or alternatively, "John made an inner-tube into a handbag"), a new entity (xx) corresponding to inner-tube would be introduced into the current WSi. The situation resulting from the MAKE action contains a new entity (z2) corresponding to its prod- uct, which is what "a handbag" is taken to denote. The postconditions on MAKE specify that zl does not persist into WSo as a separate object. 5 Now consider the alternative follow-ons to Exam- ple 1. The sentence He sold it for $20. describes a subsequent event. Its WSi is the WSo of the previous utterance, augmented by an entity in- troduced by the NP $20. Entities introduced into 4We are ignoring a clause's aspectual character here - that it may not imply the completion of the denoted action. What is offered here are necessary but not sufficient features of a solution. SNon-destructive constructive actions such as "build", "as- semble", etc. (e.g. "build a house of Lego blocks") do not have this property: constituent entities retain their individual existence. 98 WSi that persist through to WSo continue to be available for reference in clauses describing subse- quent events, as illustrated by the subsequent ref- erence to John ('°ne") above. The alternative follow-on He had taken it from his brother's car. describes the situation prior to the previous event. Its WSi is the WSi of the previous event, aug- mented by entities corresponding to "his brother" and "his brother's car. The only way to refer anaphorically to entities from different working sets is with a follow-on that refers aternporally across sit- uations (e.g. "Neither of them was particularly use- ful). To date, we have not found any individual event descriptions whose semantics requires specifying more than the situations prior to and following the event. This is not to say that events cannot be described in terms of a sequence of situations (e.g. "John began to mix the flour, butter and water. He mixed them for 5 minutes. He finished mixing them."). The point is that the semantics of a single event description appears to require no more than specifying properties of WSi and WSo. Before discussing Example 2 in detail in the next section, we would like to draw the reader's attention to two variations of that example: ExAmple 3 a. Mix the flour and butter into a dough. b. Mix the nuts and butter into the dough. What is of interest is the different roles that the prepositional phrase plays in these two cases and how they are disambiguated. In 3a, "into a dough" speci- fies the goal of the mixing. An operator representing this sense of MIX X INTO Y would, like the operator for MAKE Y FROM X above, have as its precondition that X is in WSi. Its post-conditions are that Y is in WSo and that constituentsOf(Y,X). In response to 3a, the definite NP "the flour and butter" would have to be resolved against entities already in WSi, while "a dough" would be taken to denote the new entity entered into WSo, corresponding to the product of the mixing. In 3b however, "into the dough" specifies the des- tination of the ingredients, with mixing having this additional sense of translational motion. An opera- tor representing this sense of MIX X INTO Y would have as its precondition that both X and Y are in WSi. Its post-conditions are that Y is in WSo and that X is added to the set of constituents of Y. 
In response to 3b, not only would the definite NP "the nuts and butter" have to be resolved against entities already in WSI, but "the dough" would have to be so resolved as well. With a definite NP in a MIX INTO prepositional phrase, disambiguating between these two senses is simple: it can only be the latter sense, because of the precondition that its referent already be in WSi. With an indefinite NP however, it can only be a mat- ter of preference for the first sense. Expectation and Accommoda- tion For the integration proposed above to effectively handle Example 4 below (Example 2 from the Intro- duction) and Example 5, one needs both a more ac- curate representation of people's beliefs about events and a way of dealing with those beliefs. Example 4 Mix the flour, butter and water. a. Knead the dough until smooth and shiny. b. Spread the paste over the blueberries. c. Stir the batter until all lumps are gone. Example 5 John carved his father a chair for his birthday. a. The wood came from Madagascar. b. The marble came from Vermont. If the definite NPs in examples 4 and 5 are taken as definite by virtue of their association with the pre- viously mentioned event (just as definites have long been noted as being felicitous by virtue of their as- sociation with previously mentioned objects), then Example 4 shows people associating a variety of dif- ferent results with the same action and Example 5, a variety of different inputs. To deal with this, we argue for 1. characterizing an agent's knowledge of an action in terms of partial constraints on its WSi and partial expectations about its WSo; 2. accommodating [15] definite NPs in subsequent utterances as instantiating either a partial con- straint in WSi or a partial expectation in WSo. There appear to be three ways in which an agent's knowledge of an action's constraints and expecta- tions may be partial, each of which manifests it- self somewhat differently in discourse: the knowledge may be abstract, it may be disjunctive, or it may in- volve options that may or may not be realized. Abstract Knowledge. An agent may believe that an action has a predictable result, without being able to give its particulars. For example, an agent may know that when she adds white paint to any other color paint, she gets paint of a lighter color. Its par- ticular color will depend on the color of the original paint and the amount of white she adds. In such cases, one might want to characterize the agent's partial beliefs as abstract descriptions. The agent may then bring those beliefs to bear in generating or understanding text describing events. That is, in both narrative and instructions, the speaker is taken to know more about what has happened (or should happen) than the listener. The listener may thus not be able immediately to form specific expectations about the results of described events. But she can accommodate [15] a definite NP that can be taken to denote an instantiation of those expectations. In Example 4, for example, one might character- ize the agent's expectation about the object result- ing from a blending or mixing action abstractly as a mizture. Given an instruction to mix or blend some- thing, the agent can then accommodate a subsequent definite reference to a particular kind of mixture - a batter, a paste or a dough - as instantiating this ex- pectation. An agent's knowledge of the input constraints on an action may be similarly abstract, characterizing, for example, the input to "carve" as a unit of solid material. 
Having been told about a particular carv- ing action, a listener can understand reference to a unit of particular material (stone, wood, ice, etc.) as instantiating this input object. Disjunctive Knowledge. An experienced agent has, for example, alternative expectations about the result of beating oil into egg yolks: the resulting ob- ject will be either an emulsion (i.e., mayonnaise) or a curdled mass of egg yolk globules floating in oil. Most often, one of the disjuncts will correspond to the in- tended result of the action, although "intended" does not necessarily imply "likely". (The result may in fact be quite unpredictable.) In a text, the disjunc- tive knowledge that an agent has, or is meant to have, about actions is manifest in the descriptions given of all (or several) alternatives. Often, the unintended alternatives are presented in a conditional mood. Options. A third type of partial knowledge that an agent may have about an action is that it may or may not produce a particular, usually secondary, result, depending on circumstances. As with disjunctive ex- pectations, these results are unpredictable. A corn- 99 mon way to specify options such as these in recipes is with the '~f any" construction, as in Ex-mple 6 Saute garlic until lightly browned. Remove the burnt bits, if any, before continuing. Our work to date has focussed on modelling an agent's abstract knowledge of actions and how it can be used in updating context and accommodat- ing subsequent referring expressions, as in Exam- ples 4 and 5. e These abstract constraints and ex- pectations can be applied immediately as a clause describing their associated action is processed. Con- text changes will then reflect explicit lexical material, when present, as in Mix the flour, butter and water into a paste. or simply the agent's (abstract) expectations, when explicit lexical material is not present, as in Mix the flour, butter and water. In the latter case, a subsequent definite NP denoting a particular kind of mixture (the solution, the paste, etc) can be taken as referring to an entity that is in the current working set, merely refining its descrip- tion, as in Example 4 above. Initial Implementation Entity Introduction and Elimination The Natural Language and reasoning components of the AnimNL project are being implemented in Prolog. In our initial implementation of context change, entities can be entered into the context by either entity introduction or event simulation, but they are never actually removed. Instead, actions are treated as changing the properties of entities, which may make them inaccessible to subsequent actions. For example, mixing flour, butter and water (Exam- pies 3a and 4) is understood as changing the prop- erties of the three ingredients, so that they are no longer subject to independent manipulation. (Here we are following Hayes' treatment of "liquid pieces" [9] which holds, for example, that the piece of wa- ter that was in a container still "exists" even after being poured into a lake: It is just no longer indepen- dently accessible.) This approach seems to simplify eTenenberg has used an abstraction hierarchy of action de- scriptions to simplify the task of planning [18], and Kautz, to simplify plan inference [13]. This same knowledge can be applied to language processing. 100 re~rence res~ution decisions, but we are not rigidly committed to it. 
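As a rough, language-neutral sketch of the property-change treatment just described (hypothetical Python; the project's actual components are written in Prolog, and the operator notation actually used is given in the next paragraphs), mixing removes the ingredients' manipulable property and introduces an entity for the expected mixture, whose description a later definite NP such as "the dough" can refine, subject to a mutual-exclusion check.

# Hypothetical sketch (not the Prolog implementation): mixing does not remove
# its ingredients; it takes away their "manipulable" property and introduces
# an entity for the expected mixture, which later definite NPs can refine.

MUTUALLY_EXCLUSIVE = {"dough", "paste", "batter"}   # at most one may hold

def apply_mix(working_set, ingredients):
    """Mark the ingredients as no longer independently manipulable and
    introduce a new entity describable, abstractly, as a mixture."""
    for x in ingredients:
        working_set[x].discard("manipulable")
    mixture = "m%d" % (len(working_set) + 1)
    working_set[mixture] = {"mixture", "manipulable"}
    return mixture

def refine(working_set, entity, predicate):
    """Accommodate a definite NP by adding a more specific description to an
    existing entity, rejecting descriptions that conflict with it."""
    props = working_set[entity]
    conflict = props & (MUTUALLY_EXCLUSIVE - {predicate})
    if predicate in MUTUALLY_EXCLUSIVE and conflict:
        raise ValueError("inconsistent description: %s vs %s" % (predicate, conflict))
    props.add(predicate)

# "Mix the flour, butter and water."  ...  "Knead the dough until smooth."
ws = {x: {"manipulable"} for x in ("flour", "butter", "water")}
m = apply_mix(ws, ("flour", "butter", "water"))
refine(ws, m, "dough")       # "the dough" resolves to m and refines it
assert "manipulable" not in ws["flour"] and "dough" in ws[m]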
The mechanism for changing properties and introducing entities uses STRIPS-like operators such as

mix(E,X,Y)
  precond:  [manipulable(X)]
  delete:   [manipulable(X)]
  postcond: [mixture(Y) & manipulable(Y) & constituentsOf(Y,X)]

which would be instantiated in the case of mixing flour, butter and water to

mix(e1,{f,w,b},m) & flour(f) & water(w) & butter(b) & definite({f,w,b})
  precond:  [manipulable({f,w,b})]
  delete:   [manipulable({f,w,b})]
  postcond: [mixture(m) & manipulable(m) & constituentsOf(m,{f,w,b})]

The predicate definite({f,w,b}) in the header is an instruction to the back chainer that unique antecedents need to be found for each member of the set. (In recipes, the antecedents may be provided through either the previous discourse or the ingredients list.) If definite is absent, as in the case of interpreting "mix some flour, water and butter", the back chainer introduces new entities into the working set. It also inserts into the working set a new entity corresponding to the postcondition mixture(m), whether this entity has a lexical realization (as in Example 3a) or not (as in Example 4).

Abstract Knowledge of Actions

The mix operator shown above introduces a new entity in the WSo, mixture(m), which is the result of successful mixing. The definite NP in Example 4a, "the dough", both takes m as an antecedent and provides more information about m's make-up - that it is dough. The definite reference resolution algorithm applies the knowledge that the existence of a mixture in the discourse is consistent with that mixture being dough, and the discourse is updated with dough(m). The application of unsound inference, in this case that the mixture is dough (or in 4b, paste, or in 4c, batter), is supported in a backchaining environment via the following axioms:

[mixture(X)] ==> [dough(X)]
[mixture(X)] ==> [paste(X)]
[mixture(X)] ==> [batter(X)]

This axiomatization is problematic in not preventing the back chainer from proving that the mixture which was subsequently referred to as dough is also a batter. That is, there is no mechanism which treats the axioms as being mutually exclusive. This is handled by a consistency checker which takes every new assertion to the discourse model and determines that it is consistent with all 1-place relations that hold of the entity.

Disjunctive Knowledge about Actions

The various forms of partial specification of actions can be represented as explicit disjunction in an action knowledge base.7 For example, mix has several operator realizations that reflect the action's completion and its success. The first category of (un)successfully (in)completed actions is represented by an event modifier which determines which action description is pulled from the action KB. In the case of mixing, successfully completed actions are represented more fully as:

mix(E,X,M) & complete(E) & successful(E)
  precond:  [manipulable(X)]
  delete:   [manipulable(X)]
  postcond: [mixture(M) & manipulable(M) & constituentsOf(M,X)]

This is the same basic representation as before, except with the 'to be mixed' entities unspecified, and the event modifiers added.

Agents differ in their expectations about incomplete mixing actions. The following entry has the same preconditions and delete list as above, but the post-condition differs in that there is no mixture introduced to the discourse.
mix(E,X) & incomplete(E)
  precond:  [manipulable(X)]
  delete:   [manipulable(X)]
  postcond: []

A different agent could have a different characterization of incomplete mixings - for example, a postcondition introducing an entity describable as mess(m), or incomplete_mixture(m). The point is that degree of completion does affect the introduction of new entities into the discourse model. One can envision other event modifiers that change the impact of an action on the WSo, either with properties of entities changing or individuals being introduced or not.

7 An abstraction hierarchy has not yet been constructed.

The next class of disjunctive action descriptions are those that introduce contingencies that are not naturally handled by event modifiers as above. Consider the following representations of two different outcomes of sauteing garlic:

saute(E,Y,X) & complete(E)
  precond:  [sauteable(Y)]
  delete:   []
  postcond: [sauteed(Y) & burnt_bits(X)]

saute(E,Y) & complete(E)
  precond:  [sauteable(Y)]
  delete:   []
  postcond: [sauteed(Y)]

The only difference in the entries is that one introduces burnt bits and the other does not. Ideally, one would like to combine these representations under a single, more abstract entry, such as proposed in [18]. Even with appropriate abstract operators though, the fact that we are modelling discourse introduces a further complication. That is, instructions may address several contingencies in the discourse, so the issue is not that one must be chosen for the discourse, but any number may be mentioned, for example

Example 7
Dribble 1/2 c. oil into the egg yolks, beating steadily. If you do this carefully, the result will be mayonnaise. If it curdles, start again.

This is a substantial challenge to representing the meaning of instructions in the discourse model because (as above) the various outcomes of an action may be mutually exclusive. Here, successful completion of the action introduces 'mayonnaise(m)' into the discourse model, while unsuccessful completion introduces 'curdled_mess(m)'. One possible solution is to partition the discourse model into different contexts, corresponding to different outcomes. This too has been left for future exploration.

Conclusion

We hope to have shown that it is both necessary and possible to integrate the two types of context change mechanisms previously discussed in the literature. The proposed integration requires sensitivity to both syntactic/semantic features of Natural Language text (such as definiteness, tense, mood, etc.) and to the same beliefs about actions that an agent uses in planning and plan inference. As such, one has some hope that as we become more able to endow Natural Language systems with abilities to plan and recognize the plans of others, we will also be able to endow them with greater language processing capabilities as well.

References

[1] Asher, N. A Typology for Attitude Verbs and their Anaphoric Properties. Linguistics and Philosophy 10(2), May 1987, pp. 125-198.
[2] Norman Badler, Bonnie Webber, Jeff Esakov and Jugal Kalita. Animation from Instructions. Making Them Move: Mechanics, Control and Animation of Articulated Figures. Morgan-Kaufmann, 1990.
[3] Dale, R. Generating Referring Expressions: Constructing Descriptions in a Domain of Objects and Processes. PhD Thesis, University of Edinburgh, 1989. (Cambridge MA: MIT Press, forthcoming).
[4] Di Eugenio, B. Action Representation for Natural Language Instructions. Proc. 1991 Annual Meeting of the Assoc.
for Computational Lin- guistics, Berkeley CA, June 1991, pp. 333-334. [5] Di Eugenio, B. Understanding Natural Lan- guage Instructions: The Case of Purpose Clauses. Proc. 1992 Annual Meeting of the Assoc. for Computational Linguistics, Newark DL, July 1992. [6] Di Eugenio, B. and Webber, B. Plan Recogni- tion in Understanding Instructions. Proc. First Int'! Conf. on AI Planning Systems, College Park MD, June 1992. [7] Di Eugenio, B. and White, M. On the Interpre- tation of Natural Language Instructions. Proc. 1992 Int. Conf. on Computational Linguistics (COLING-92), Nantes, France, July 1992. 102 [8] Grosz, B. The Representation and Use of Fo- cus in Dialogue Understanding. Technical Note 151, Artificial Intelligence Center, SRI Interna- tional, 1977. [9] Hayes, Patrick. Naive Physics I: Ontology for Liquids. Reprinted in J. Hobbs and R. Moore (eds.), Formal Theories of the Com- monsense World. Norwood NJ: ABLEX Pub- lishing, 1985. [10] Heim, I. The Semantics of Definite and Indef- inite Noun Phrases. PhD dissertation, Univer- sity of Massachusetts, Amherst MA, 1982. [11] Kamp, H. A Theory of Truth and Semantic Representation. In J. Groenendijk, T. Janssen and M. Stokhof (eds.), Truth, Interpretation and Information, Dordrecht: Foris, 1981, pp. 1-41. [12] Kamp, H. and Rohrer, C. manuscript of book on temporal reference. To appear. [13] Kautz, H. A Circumseriptive Theory of Plan Recognition. In J. Morgan, P. Cohen and M. Pollack (eds.), Intentions in Communication. Cambrdige MA: MIT Press, 1990. [14] Levison, L. Action Composition for the Ani- mation of Natural Language Instructions. Dept of Computer & Information Science, Univ. of Pennsylvania, Technical Report MS-CIS-91- 28, September 1991. [15] Lewis, D. Scorekeeping in a Language Game. J. Philosophical Logic 8, 1979, pp. 339-359. [16] Linguistics and Philosophy 9(1), February 1986. Special issue on Tense and Aspect in Dis- course. [17] Roberts, C. Modal Subordination and Pronominal Anaphora in Discourse. Lin- guistics and Philosophy 12(6), December 1989, pp. 683-721. [18] Tenenberg, J. Inheritance in Automated Plan- ning. Proc. Principles of Knowledge Represen- tation and Reasoning (KR'89), Morgan Kauf- mann, 1989, pp. 475-485. [19] Webber, B. A Formal Approach to Discourse Anaphora. Technical Report 3761, Bolt Be- ranek and Newman, Cambridge MA, 1978. (Published by Garland Press, New York, 1979.) [20] Webber, B., Badler, N., Di Eugenio, B., Levi- son, L. and White, M. Instructing Animated Agents. Proc. First US-Japan Workshop on Integrated Systems in Multi-Media Environ- ments, Las Cruces NM, December 1991. [21] Woods, W., Kaplan, R. and Nash-Webber, B. The Lunar Sciences Natural Language Infor- mation System: Final Report. Technical Re- port 2378, Bolt Beranek and Newman, Cam- bridge MA, 1972. [22] Woods, W. Semantics and Quantification in Natural Language Question Answering. Ad- vances in Computers, Volume 17, Academic Press, 1978. 103
INFORMATION RETRIEVAL USING ROBUST NATURAL LANGUAGE PROCESSING Tomek Strzalkowski and Barbara Vauthey1" Courant Institute of Mathematical Sciences New York University 715 Broadway, rm. 704 New York, NY 10003 [email protected] ABSTRACT We developed a prototype information retrieval sys- tem which uses advanced natural language process- ing techniques to enhance the effectiveness of tradi- tional key-word based document retrieval. The back- bone of our system is a statistical retrieval engine which performs automated indexing of documents, then search and ranking in response to user queries. This core architecture is augmented with advanced natural language processing tools which are both robust and efficient. In early experiments, the aug- mented system has displayed capabilities that appear to make it superior to the purely statistical base. INTRODUCTION A typical information retrieval fiR) task is to select documents from a database in response to a user's query, and rank these documents according to relevance. This has been usually accomplished using statistical methods (often coupled with manual encoding), but it is now widely believed that these traditional methods have reached their limits. 1 These limits are particularly acute for text databases, where natural language processing (NLP) has long been considered necessary for further progress. Unfor- tunately, the difficulties encountered in applying computational linguistics technologies to text pro- cessing have contributed to a wide-spread belief that automated NLP may not be suitable in IR. These difficulties included inefficiency, limited coverage, and prohibitive cost of manual effort required to build lexicons and knowledge bases for each new text domain. On the other hand, while numerous experiments did not establish the usefulness of NLP, they cannot be considered conclusive because of their very limited scale. Another reason is the limited scale at which NLP was used. Syntactic parsing of the database con- tents, for example, has been attempted in order to extract linguistically motivated "syntactic phrases", which presumably were better indicators of contents than "statistical phrases" where words were grouped solely on the basis of physical proximity (eg. "college junior" is not the same as "junior college"). These intuitions, however, were not confirmed by experi- ments; worse still, statistical phrases regularly out- performed syntactic phrases (Fagan, 1987). Attempts to overcome the poor statistical behavior of syntactic phrases has led to various clustering techniques that grouped synonymous or near synonymous phrases into "clusters" and replaced these by single "meta- terms". Clustering techniques were somewhat suc- cessful in upgrading overall system performance, but their effectiveness was diminished by frequently poor quality of syntactic analysis. Since full-analysis wide-coverage syntactic parsers were either unavail- able or inefficient, various partial parsing methods have been used. Partial parsing was usually fast enough, but it also generated noisy data_" as many as 50% of all generated phrases could be incorrect (Lewis and Croft, 1990). Other efforts concentrated on processing of user queries (eg. Spack Jones and Tait, 1984; Smeaton and van Rijsbergen, 1988). Since queries were usually short and few, even rela- tively inefficient NLP techniques could be of benefit to the system. None of these attempts proved con- clusive, and some were never properly evaluated either. 
t Current address: Laboratoire d'lnformatique, Unlversite de Fribourg, ch. du Musee 3, 1700 Fribourg, Switzerland; [email protected]. i As far as the aut~natic document retrieval is concerned. Techniques involving various forms of relevance feedback are usu- ally far more effective, but they require user's manual intervention in the retrieval process. In this paper, we are concerned with fully automated retrieval only. 2 Standard IR benchmark collections are statistically too small and the experiments can easily produce counterintuitive results. For example, Cranfield collection is only approx. 180,000 English words, while CACM-3204 collection used in the present experiments is approx. 200,000 words. 104 We believe that linguistic processing of both the database and the user's queries need to be done for a maximum benefit, and moreover, the two processes must be appropriately coordinated. This prognosis is supported by the experiments performed by the NYU group (Strzalkowski and Vauthey, 1991; Grishman and Strzalkowski, 1991), and by the group at the University of Massachussetts (Croft et al., 1991). We explore this possibility further in this paper. OVERALL DESIGN Our information retrieval system consists of a traditional statistical backbone (Harman and Candela, 1989) augmented with various natural language pro- cessing components that assist the system in database processing (stemming, indexing, word and phrase clustering, selectional restrictions), and translate a user's information request into an effective query. This design is a careful compromise between purely statistical non-linguistic approaches and those requir- ing rather accomplished (and expensive) semantic analysis of data, often referred to as 'conceptual retrieval'. The conceptual retrieval systems, though quite effective, are not yet mature enough to be con- sidered in serious information retrieval applications, the major problems being their extreme inefficiency and the need for manual encoding of domain knowledge (Mauldin, 1991). In our system the database text is first pro- cessed with a fast syntactic parser. Subsequently cer- tain types of phrases are extracted from the parse trees and used as compound indexing terms in addi- tion to single-word terms. The extracted phrases are statistically analyzed as syntactic contexts in order to discover a variety of similarity links between smaller subphrases and words occurring in them. A further filtering process maps these similarity links onto semantic relations (generalization, specialization, synonymy, etc.) after which they are used to transform user's request into a search query. The user's natural language request is also parsed, and all indexing terms occurring in them are identified. Next, certain highly ambiguous (usually single-word) terms are dropped, provided that they also occur as elements in some compound terms. For example, "natural" is deleted from a query already containing "natural language" because "natural" occurs in many unrelated contexts: "natural number", "natural logarithm", "natural approach", etc. At the same time, other terms may be added, namely those which are linked to some query term through admis- sible similarity relations. For example, "fortran" is added to a query containing the compound term "program language" via a specification link. After the final query is constructed, the database search fol- lows, and a ranked list of documents is returned. 
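The query-construction step just described can be sketched as follows: single-word terms that already occur inside a compound term are dropped, and terms reachable through admissible similarity relations are added. This is an illustrative Python sketch only, not the actual system (which couples a statistical backbone with Prolog-based NLP tools); the function build_query, the expansion weight and the data layout are assumptions.

# Illustrative sketch only (not the actual system): turning the terms found
# in a parsed user request into a search query.  build_query, the 0.7
# expansion weight and the data layout below are hypothetical.

def build_query(single_terms, compound_terms, similar, expansion_weight=0.7):
    """single_terms: single-word indexing terms found in the request.
    compound_terms: head+modifier pairs found in the request.
    similar: admissible similarity links (synonyms, specializations)."""
    query = {}

    # Keep compound terms, and drop a single-word term if it already occurs
    # as an element of some compound term (e.g. drop "natural" when the
    # query already contains "natural language").
    words_in_compounds = {w for pair in compound_terms for w in pair}
    for head, modifier in compound_terms:
        query[head + "+" + modifier] = 1.0
    for term in single_terms:
        if term not in words_in_compounds:
            query[term] = 1.0

    # Expand the query with terms linked to its terms through admissible
    # similarity relations, at a reduced weight.
    for term in list(query):
        for linked in similar.get(term, ()):
            query.setdefault(linked, expansion_weight)
    return query

# A request about programming languages, say, might yield:
query = build_query(
    single_terms=["program", "language", "retrieve", "document"],
    compound_terms=[("language", "program"), ("retrieve", "document")],
    similar={"language+program": ["fortran"]},   # a specification link
)
# query keeps the compound terms and adds "fortran", but drops the
# ambiguous single-word terms "program" and "language".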
It should be noted that all the processing steps, those performed by the backbone system, and these performed by the natural language processing com- ponents, are fully automated, and no human interven- tion or manual encoding is required. FAST PARSING WITH TI'P PARSER TIP flagged Text Parser) is based on the Linguistic String Grammar developed by Sager (1981). Written in Quintus Prolog, the parser currently encompasses more than 400 grammar pro- ductions. It produces regularized parse tree represen- tations for each sentence that reflect the sentence's logical structure. The parser is equipped with a powerful skip-and-fit recovery mechanism that allows it to operate effectively in the face of ill- formed input or under a severe time pressure. In the recent experiments with approximately 6 million words of English texts, 3 the parser's speed averaged between 0.45 and 0.5 seconds per sentence, or up to 2600 words per minute, on a 21 MIPS SparcStation ELC. Some details of the parser are discussed below .4 TIP is a full grammar parser, and initially, it attempts to generate a complete analysis for each sentence. However, unlike an ordinary parser, it has a built-in timer which regulates the amount of time allowed for parsing any one sentence. If a parse is not returned before the allotted time elapses, the parser enters the skip-and-fit mode in which it will try to "fit" the parse. While in the skip-and-fit mode, the parser will attempt to forcibly reduce incomplete constituents, possibly skipping portions of input in order to restart processing at a next unattempted con- stituent. In other words, the parser will favor reduc- tion to backtracking while in the skip-and-fit mode. The result of this strategy is an approximate parse, partially fitted using top-down predictions. The flag- ments skipped in the first pass are not thrown out, instead they are analyzed by a simple phrasal parser that looks for noun phrases and relative clauses and then attaches the recovered material to the main parse structure. As an illustration, consider the following sentence taken from the CACM-3204 corpus: 3 These include CACM-3204, MUC-3, and a selection of nearly 6,000 technical articles extracted from Computer Library database (a Ziff Communications Inc. CD-ROM). 4 A complete description can be found in (Strzalkowski, 1992). 105 The method is illustrated by the automatic con- struction of beth recursive and iterative pro- grams opera~-tg on natural numbers, lists, and trees, in order to construct a program satisfying certain specifications a theorem induced by those specifications is proved, and the desired program is extracted from the proof. The italicized fragment is likely to cause additional complications in parsing this lengthy string, and the parser may be better off ignoring this fragment alto- gether. To do so successfully, the parser must close the currently open constituent (i.e., reduce a program satisfying certain specifications to NP), and possibly a few of its parent constituents, removing corresponding productions from further considera- tion, until an appropriate production is reactivated. In this case, TIP may force the following reductions: SI ---> to V NP; SA --~ SI; S -~ NP V NP SA, until the production S --+ S and S is reached. Next, the parser skips input to lind and, and resumes normal process- ing. As may be expected, the skip-and-fit strategy will only be effective if the input skipping can be per- formed with a degree of determinism. 
This means that most of the lexical level ambiguity must be removed from the input text, prior to parsing. We achieve this using a stochastic parts of speech tagger 5 to preprocess the text. WORD SUFFIX TRIMMER Word stemming has been an effective way of improving document recall since it reduces words to their common morphological root, thus allowing more successful matches. On the other hand, stem- ming tends to decrease retrieval precision, if care is not taken to prevent situations where otherwise unre- lated words are reduced to the same stem. In our sys- tem we replaced a traditional morphological stemmer with a conservative dictionary-assisted suffix trim- mer. 6 The suffix trimmer performs essentially two tasks: (1) it reduces inflected word forms to their root forms as specified in the dictionary, and (2) it con- verts nominalized verb forms (eg. "implementation", "storage") to the root forms of corresponding verbs (i.e., "implement", "store"). This is accomplished by removing a standard suffix, eg. "stor+age", replacing it with a standard root ending ("+e"), and checking the newly created word against the dictionary, i.e., we check whether the new root ("store") is indeed a legal word, and whether the original root ("storage") s Courtesy of Bolt Beranek and Newman. We use Oxford Advanced Learner's Dictionary (OALD). is defined using the new root ("store") or one of its standard inflexional forms (e.g., "storing"). For example, the following definitions are excerpted from the Oxford Advanced Learner's Dictionary (OALD): storage n [U] (space used for, money paid for) the storing of goods ... diversion n [U] diverting ... procession n [C] number of persons, vehicles, ete moving forward and following each other in an orderly way. Therefore, we can reduce "diversion" to "divert" by removing the suffix "+sion" and adding root form suffix "+t". On the other hand, "process+ion" is not reduced to "process". Experiments with CACM-3204 collection show an improvement in retrieval precision by 6% to 8% over the base system equipped with a standard morphological stemmer (in our case, the SMART stemmer). HEAD-MODIFIER STRUCTURES Syntactic phrases extracted from TIP parse trees are head-modifier pairs: from simple word pairs to complex nested structures. The head in such a pair is a central element of a phrase (verb, main noun, etc.) while the modifier is one of the adjunct argu- ments of the head. 7 For example, the phrase fast algorithm for parsing context-free languages yields the following pairs: algorithm+fast, algorithm+parse, parse+language, language+context.free. The following types of pairs were considered: (1) a head noun and its left adjec- tive or noun adjunct, (2) a head noun and the head of its right adjunct, (3) the main verb of a clause and the head of its object phrase, and (4) the head of the sub- ject phrase and the main verb, These types of pairs account for most of the syntactic variants for relating two words (or simple phrases) into pairs carrying compatible semantic content. For example, the pair retrieve+information is extracted from any of the fol- lowing fragments: information retrieval system; retrieval of information from databases; and informa- tion that can be retrieved by a user-controlled interactive search process. An example is shown in Figure 1. g One difficulty in obtaining head-modifier 7 In the experiments reported here we extracted head- modifier word pairs only. 
CACM collection is too small to warrant generation of larger compounds, because of their low frequencies. s Note that working with the parsed text ensures a high de- gree of precision in capturing the meaningful phrases, which is especially evident when compared with the results usually obtained from either unprocessed or only partially processed text (Lewis and Croft, 1990). 106 SENTENCE: The techniques are discussed and related to a general tape manipulation routine. PARSE STRUCTURE: [[be], [[verb,[and,[discuss],[relate]]], [subject,anyone], [object,[np,[n,technique],[t..pos,the]]], [to,[np,[n,routine],[t_pos,a],[adj,[general]], [n__pos,[np,[n,manipulation]] ], [n._pos,[np,[n,tape]]]]]]]. EXTRACTED PAIRS: [discuss,technique], [relate,technique], [routine,general], [routine,manipulate], [manipulate,tape] Figure 1. Extraction of syntactic pairs. pairs of highest accuracy is the notorious ambiguity of nominal compounds. For example, the phrase natural language processing should generate language+natural and processing+language, while dynamic information processing is expected to yield processing+dynamic and processing+information. Since our parser has no knowledge about the text domain, and uses no semantic preferences, it does not attempt to guess any internal associations within such phrases. Instead, this task is passed to the pair extrac- tor module which processes ambiguous parse struc- tures in two phases. In phase one, all and only unam- biguous head-modifier pairs are extracted, and fre- quencies of their occurrence are recorded. In phase two, frequency information of pairs generated in the first pass is used to form associations from ambigu- ous structures. For example, if language+natural has occurred unambiguously a number times in contexts such as parser for natural language, while processing+natural has occurred significantly fewer times or perhaps none at all, then we will prefer the former association as valid. TERM CORRELATIONS FROM TEXT Head-modifier pairs form compound terms used in database indexing. They also serve as occurrence contexts for smaller terms, including single-word terms. In order to determine whether such pairs signify any important association between terms, we calculate the value of the Informational Contribution (IC) function for each element in a pair. Higher values indicate stronger association, and the element having the largest value is considered semantically dominant. 107 The connection between the terms co- occurrences and the information they are transmitting (or otherwise, their meaning) was established and discussed in detail by Harris (1968, 1982, 1991) as fundamental for his mathematical theory of language. This theory is related to mathematical information theory, which formalizes the dependencies between the information and the probability distribution of the given code (alphabet or language). As stated by Shannon (1948), information is measured by entropy which gives the capacity of the given code, in terms of the probabilities of its particular signs, to transmit information. It should be emphasized that, according to the information theory, there is no direct relation between information and meaning, entropy giving only a measure of what possible choices of messages are offered by a particular language. However, it offers theoretic foundations of the correlation between the probability of an event and transmitted information, and it can be further developed in order to capture the meaning of a message. 
There is indeed an inverse relation between the information contributed by a word and its probability of occurrence p, that is, rare words carry more information than common ones. This relation can be given by the function -log p(x), which corresponds to the information which a single word contributes to the entropy of the entire language. In contrast to information theory, the goal of the present study is not to calculate informational capacities of a language, but to measure the relative strength of connection between the words in syntactic pairs. This connection corresponds to Harris' likelihood constraint, where the likelihood of an operator with respect to its argument words (or of an argument word with respect to different operators) is defined using word-combination frequencies within the linguistic dependency structures. Further, the likelihood of a given word being paired with another word, within one operator-argument structure, can be expressed in statistical terms as a conditional probability. In our present approach, the required measure had to be uniform for all word occurrences, covering a number of different operator-argument structures. This is reflected by an additional dispersion parameter, introduced to evaluate the heterogeneity of word associations. The resulting new formula IC(x,[x,y]) is based on (an estimate of) the conditional probability of seeing a word y to the right of the word x, modified with a dispersion parameter for x:

IC(x,[x,y]) = f_x,y / (n_x + d_x - 1)

where f_x,y is the frequency of [x,y] in the corpus, n_x is the number of pairs in which x occurs at the same position as in [x,y], and d_x is the dispersion parameter understood as the number of distinct words with which x is paired. When IC(x,[x,y]) = 0, x and y never occur together (i.e., f_x,y = 0); when IC(x,[x,y]) = 1, x occurs only with y (i.e., f_x,y = n_x and d_x = 1).

So defined, the IC function is asymmetric, a property found desirable by Wilks et al. (1990) in their study of word co-occurrences in the Longman dictionary. In addition, IC is stable even for relatively low frequency words, which can be contrasted with Fano's mutual information formula recently used by Church and Hanks (1990) to compute word co-occurrence patterns in a 44 million word corpus of Associated Press news stories. They noted that while generally satisfactory, the mutual information formula often produces counterintuitive results for low-frequency data. This is particularly worrisome for relatively smaller IR collections since many important indexing terms would be eliminated from consideration. A few examples obtained from the CACM-3204 corpus are listed in Table 1.

IC values for terms become the basis for calculating term-to-term similarity coefficients. If two terms tend to be modified with a number of common modifiers and otherwise appear in few distinct contexts, we assign them a similarity coefficient, a real number between 0 and 1. The similarity is determined by comparing distribution characteristics for both terms within the corpus: how much information content do they carry, does their information contribution vary greatly over contexts, are the common contexts in which these terms occur specific enough? In general we will credit high-content terms appearing in identical contexts, especially if these contexts are not too commonplace.9
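Given pair frequencies f, pair counts n and dispersion d as just defined, the IC computation can be sketched as follows (an illustrative Python fragment rather than anything taken from the system; ic_table and the toy counts are hypothetical).

# Illustrative only (not the system's code): computing IC(x, [x, y]) =
# f_xy / (n_x + d_x - 1) from a list of extracted head-modifier pairs,
# with x taken to be the element in the first position of each pair.

from collections import Counter

def ic_table(pairs):
    """pairs: iterable of (x, y) tuples.  Returns IC(x, [x, y]) for every
    observed pair."""
    f = Counter(pairs)                        # f_xy: frequency of the pair
    n = Counter(x for x, _ in f.elements())   # n_x: pairs in which x occurs
    partners = {}
    for x, y in f:
        partners.setdefault(x, set()).add(y)
    d = {x: len(ys) for x, ys in partners.items()}   # d_x: distinct partners
    return {(x, y): f[x, y] / (n[x] + d[x] - 1) for (x, y) in f}

# Toy counts: "retrieve" paired 6 times with "inform", twice with "document".
pairs = [("retrieve", "inform")] * 6 + [("retrieve", "document")] * 2
ic = ic_table(pairs)
print(ic[("retrieve", "inform")])    # 6 / (8 + 2 - 1) = 0.666...

As the definition requires, the value is 0 when a pair never occurs and reaches 1 only when x is paired exclusively with y.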
The relative similarity between two words x1 and x2 is obtained using the following formula (α is a large constant):10

SIM(x1,x2) = log(α * Σ_y sim_y(x1,x2))

where

sim_y(x1,x2) = MIN(IC(x1,[x1,y]), IC(x2,[x2,y])) * (IC(y,[x1,y]) + IC(y,[x2,y]))

The similarity function is further normalized with respect to SIM(x1,x1). It may be worth pointing out that the similarities are calculated using term co-occurrences in syntactic rather than in document-size contexts, the latter being the usual practice in non-linguistic clustering (eg. Sparck Jones and Barber, 1971; Crouch, 1988; Lewis and Croft, 1990). Although the two methods of term clustering may be considered mutually complementary in certain situations, we believe that more and stronger associations can be obtained through syntactic-context clustering, given a sufficient amount of data and a reasonably accurate syntactic parser.11

9 It would not be appropriate to predict similarity between language and logarithm on the basis of their co-occurrence with natural.
10 This is inspired by a formula used by Hindle (1990), and subsequently modified to take into account the asymmetry of the IC measure.
11 Non-syntactic contexts cross sentence boundaries with no fuss, which is helpful with short, succinct documents (such as CACM abstracts), but less so with longer texts; see also (Grishman et al., 1986).

word         head+modifier        IC coeff.
distribute   distribute+normal    0.040
normal       distribute+normal    0.115
minimum      minimum+relative     0.200
relative     minimum+relative     0.016
retrieve     retrieve+inform      0.086
inform       retrieve+inform      0.004
size         size+medium          0.009
medium       size+medium          0.250
editor       editor+text          0.142
text         editor+text          0.025
system       system+parallel      0.001
parallel     system+parallel      0.014
read         read+character       0.023
character    read+character       0.007
implicate    implicate+legal      0.035
legal        implicate+legal      0.083
system       system+distribute    0.002
distribute   system+distribute    0.037
make         make+recommend       0.024
recommend    make+recommend       0.142
infer        infer+deductive      0.095
deductive    infer+deductive      0.142
share        share+resource       0.054
resource     share+resource       0.042

Table 1. IC coefficients obtained from CACM-3204.

QUERY EXPANSION

Similarity relations are used to expand user queries with new terms, in an attempt to make the final search query more comprehensive (by adding synonyms) or more specific (by adding specializations).12 Not every similarity relation is equally useful for this purpose, however, so the relations are first filtered.
In order to create an appropriate filter, we expanded the IC function into a global specificity measure called the cumulative informational contribution function (ICW). ICW is calculated for each term across all contexts in which it occurs. The general philosophy here is that a more specific word/phrase would have a more limited use, i.e., would appear in fewer distinct contexts. ICW is similar to the standard inverted document frequency (idf) measure except that term frequency is measured over syntactic units rather than document size units.13 Terms with higher ICW values are generally considered more specific, but the specificity comparison is only meaningful for terms which are already known to be similar. The new function is calculated according to the following formula:

ICW(w) = ICL(w) * ICR(w)   if both exist
         ICR(w)            if only ICR(w) exists
         ICL(w)            otherwise

where (with n_w, d_w > 0):

ICL(w) = IC([w,_]) = n_w / (d_w * (n_w + d_w - 1))
ICR(w) = IC([_,w]) = n_w / (d_w * (n_w + d_w - 1))

For any two terms w1 and w2, and a constant δ > 1, if ICW(w2) > δ * ICW(w1) then w2 is considered more specific than w1. In addition, if SIMnorm(w1,w2) = σ > θ, where θ is an empirically established threshold, then w2 can be added to the query containing term w1 with weight σ.14 In the CACM-3204 collection:

ICW(algol) = 0.0020923
ICW(language) = 0.0000145
ICW(approximate) = 0.0000218
ICW(interpolate) = 0.0042410

Therefore interpolate can be used to specialize approximate, while language cannot be used to expand algol. Note that if δ is well chosen (we used δ = 10), then the above filter will also help to reject antonymous and complementary relations, such as SIMnorm(pl_i,cobol) = 0.685 with ICW(pl_i) = 0.0175 and ICW(cobol) = 0.0289. We continue working to develop more effective filters. Examples of filtered similarity relations obtained from the CACM-3204 corpus (and their sim values): abstract graphical 0.612; approximate interpolate 0.655; linear ordinary 0.743; program translate 0.596; storage buffer 0.622. Some (apparent?) failures: active digital 0.633; efficient new 0.580; gamma beta 0.720. More similarities are listed in Table 2.

12 Query expansion (in the sense considered here, though not quite in the same way) has been used in information retrieval research before (eg. Sparck Jones and Tait, 1984; Harman, 1988), usually with mixed results. An alternative is to use term clusters to create new terms, "metaterms", and use them to index the database instead (eg. Crouch, 1988; Lewis and Croft, 1990). We found that the query expansion approach gives the system more flexibility, for instance, by making room for hypertext-style topic exploration via user feedback.
13 We believe that measuring term specificity over document-size contexts (eg. Sparck Jones, 1972) may not be appropriate in this case. In particular, syntax-based contexts allow for processing texts without any internal document structure.
14 The filter was most effective at θ = 0.57.

SUMMARY OF RESULTS

The preliminary series of experiments with the CACM-3204 collection of computer science abstracts showed a consistent improvement in performance: the average precision increased from 32.8% to 37.1% (a 13% increase), while the normalized recall went from 74.3% to 84.5% (a 14% increase), in comparison with the statistics of the base NIST system. This improvement is a combined effect of the new stemmer, compound terms, term selection in queries, and query expansion using filtered similarity relations. The choice of similarity relation filter has been found critical in improving retrieval precision through query expansion.
It should also be pointed out that only about 1.5% of all similarity relations originally generated from CACM-3204 were found admissible after filtering, contributing only 1.2 expansions on average per query. It is quite evident that significantly larger corpora are required to produce more dramatic results.15,16 A detailed summary is given in Table 3 below.

15 KL Kwok (private communication) has suggested that the low percentage of admissible relations might be similar to the phenomenon of 'tight clusters' which, while meaningful, are so few that their impact is small.
16 A sufficiently large text corpus is 20 million words or more. This has been partially confirmed by experiments performed at the University of Massachusetts (B. Croft, private communication).

word1        word2          SIMnorm
*aim         purpose        0.434
algorithm    method         0.529
*adjacency   pair           0.499
*algebraic   symbol         0.514
*american    standard       0.719
assert       infer          0.783
*buddy       time-share     0.622
committee    *symposium     0.469
critical     final          0.680
best-fit     first-fit      0.871
*duplex      reliable       0.437
earlier      previous       0.550
encase       minimum-area   0.991
give         present        0.458
incomplete   miss           0.850
lead         *trail         0.890
mean         *standard      0.634
method       technique      0.571
memory       storage        0.613
match        recognize      0.563
lower        upper          0.841
progress     *trend         0.444
range        variety        0.600
round-off    truncate       0.918
remote       teletype       0.509

Table 2. Filtered word similarities (* indicates the more specific term).

              Precision (Tests)
Recall      base      surf.trim   query exp.
0.00        0.764     0.775       0.793
0.10        0.674     0.688       0.700
0.20        0.547     0.547       0.573
0.30        0.449     0.479       0.486
0.40        0.387     0.421       0.421
0.50        0.329     0.356       0.372
0.60        0.273     0.280       0.304
0.70        0.198     0.222       0.226
0.80        0.146     0.170       0.174
0.90        0.093     0.112       0.114
1.00        0.079     0.087       0.090
Avg. Prec.  0.328     0.356       0.371
% change              8.3         13.1
Norm. Rec.  0.743     0.841       0.842
Queries     50        50          50

Table 3. Recall/precision statistics for CACM-3204.

These results, while quite modest by IR standards, are significant for another reason as well. They were obtained without any manual intervention into the database or queries, and without using any other information about the database except for the text of the documents (i.e., not even the hand generated keyword fields enclosed with most documents were used). Lewis and Croft (1990), and Croft et al. (1991) report results similar to ours but they take advantage of Computer Reviews categories manually assigned to some documents. The purpose of this research is to explore the potential of automated NLP in dealing with large scale IR problems, and not necessarily to obtain the best possible results on any particular data collection. One of our goals is to point a feasible direction for integrating NLP into the traditional IR.

ACKNOWLEDGEMENTS

We would like to thank Donna Harman of NIST for making her IR system available to us. We would also like to thank Ralph Weischedel, Marie Meteer and Heidi Fox of BBN for providing and assisting in the use of the part of speech tagger. KL Kwok has offered many helpful comments on an earlier draft of this paper. In addition, ACM has generously provided us with text data from the Computer Library database distributed by Ziff Communications Inc.
This paper is based upon work supported by the Defense Advanced Research Projects Agency under Contract N00014-90-J-1851 from the Office of Naval Research, the National Science Foundation under Grant IRI-89-02304, and a grant from the Swiss National Foundation for Scientific Research. We also acknowledge support from the Canadian Institute for Robotics and Intelligent Systems (IRIS).

REFERENCES

Church, Kenneth Ward and Hanks, Patrick. 1990. "Word association norms, mutual information, and lexicography." Computational Linguistics, 16(1), MIT Press, pp. 22-29.

Croft, W. Bruce, Howard R. Turtle, and David D. Lewis. 1991. "The Use of Phrases and Structured Queries in Information Retrieval." Proceedings of ACM SIGIR-91, pp. 32-45.

Crouch, Carolyn J. 1988. "A cluster-based approach to thesaurus construction." Proceedings of ACM SIGIR-88, pp. 309-320.

Fagan, Joel L. 1987. Experiments in Automated Phrase Indexing for Document Retrieval: A Comparison of Syntactic and Non-Syntactic Methods. Ph.D. Thesis, Department of Computer Science, Cornell University.

Grishman, Ralph, Lynette Hirschman, and Ngo T. Nhan. 1986. "Discovery procedures for sublanguage selectional patterns: initial experiments." Computational Linguistics, 12(3), pp. 205-215.

Grishman, Ralph and Tomek Strzalkowski. 1991. "Information Retrieval and Natural Language Processing." Position paper at the workshop on Future Directions in Natural Language Processing in Information Retrieval, Chicago.

Harman, Donna. 1988. "Towards interactive query expansion." Proceedings of ACM SIGIR-88, pp. 321-331.

Harman, Donna and Gerald Candela. 1989. "Retrieving Records from a Gigabyte of Text on a Minicomputer Using Statistical Ranking." Journal of the American Society for Information Science, 41(8), pp. 581-589.

Harris, Zellig S. 1991. A Theory of Language and Information: A Mathematical Approach. Clarendon Press, Oxford.

Harris, Zellig S. 1982. A Grammar of English on Mathematical Principles. Wiley.

Harris, Zellig S. 1968. Mathematical Structures of Language. Wiley.

Hindle, Donald. 1990. "Noun classification from predicate-argument structures." Proceedings of the 28th Meeting of the ACL, Pittsburgh, PA, pp. 268-275.

Lewis, David D. and W. Bruce Croft. 1990. "Term Clustering of Syntactic Phrases." Proceedings of ACM SIGIR-90, pp. 385-405.

Mauldin, Michael. 1991. "Retrieval Performance in Ferret: A Conceptual Information Retrieval System." Proceedings of ACM SIGIR-91, pp. 347-355.

Sager, Naomi. 1981. Natural Language Information Processing. Addison-Wesley.

Salton, Gerard. 1989. Automatic Text Processing: the transformation, analysis, and retrieval of information by computer. Addison-Wesley, Reading, MA.

Shannon, C. E. 1948. "A mathematical theory of communication." Bell System Technical Journal, vol. 27, July-October.

Smeaton, A. F. and C. J. van Rijsbergen. 1988. "Experiments on incorporating syntactic processing of user queries into a document retrieval strategy." Proceedings of ACM SIGIR-88, pp. 31-51.

Sparck Jones, Karen. 1972. "Statistical interpretation of term specificity and its application in retrieval." Journal of Documentation, 28(1), pp. 11-20.

Sparck Jones, K. and E. O. Barber. 1971. "What makes automatic keyword classification effective?" Journal of the American Society for Information Science, May-June, pp. 166-175.

Sparck Jones, K. and J. I. Tait. 1984. "Automatic search term variant generation." Journal of Documentation, 40(1), pp. 50-66.

Strzalkowski, Tomek and Barbara Vauthey. 1991. "Fast Text Processing for Information Retrieval." Proceedings of the 4th DARPA Speech and Natural Language Workshop, Morgan Kaufmann, pp. 346-351.

Strzalkowski, Tomek and Barbara Vauthey. 1991. "Natural Language Processing in Automated Information Retrieval." Proteus Project Memo #42, Courant Institute of Mathematical Sciences, New York University.

Strzalkowski, Tomek. 1992. "TTP: A Fast and Robust Parser for Natural Language." Proceedings of the 14th International Conference on Computational Linguistics (COLING), Nantes, France, July 1992.

Wilks, Yorick A., Dan Fass, Cheng-Ming Guo, James E. McDonald, Tony Plate, and Brian M. Slator. 1990. "Providing machine tractable dictionary tools." Machine Translation, 5, pp. 99-154.
Prosodic Aids to Syntactic and Semantic Analysis of Spoken English Chris Rowles and Xiuming Huang AI Systems Section Australia and Overseas Telecommunications Corporation Telecommunications Research Laboratories PO Box 249, Clayton, Victoria, 3168, Australia Internet: [email protected] ABSTRACT Prosody can be useful in resolving certain lex- ical and structural ambiguities in spoken English. In this paper we present some results of employ- ing two types of prosodic information, namely pitch and pause, to assist syntactic and semantic analysis during parsing. 1. INTRODUCTION In attempting to merge speech recognition and natural language understanding to produce a system capable of understanding spoken dia- logues, we are confronted with a range of prob- lems not found in text processing. Spoken language conversations are typically more terse, less grammatically correct, less well- structured and more ambiguous than text (Brown & Yule 1983). Additionally, speech recognition systems that attempt to extract words from speech typically produce word insertion, deletion or substitution errors due to incorrect recognition and segmentation. The motivation for our work is to combine speech recognition and natural language under- standing (NLU) techniques to produce a system which can, in some sense, understand the intent of a speaker in telephone-based, information seeking dialogues. As a result, we are interested in NLU to improve the semantic recognition accu- racy of such a system, but since we do not have explicit utterance segmentation and structural in- formation, such as punctuation in text, we have explored the use of prosody. Intonation can be useful in understanding dia- logue structure (c.f. Hirschberg & Pierrehumbert 1986), but parsing can also be assisted. (Briscoe & Boguraev 1984) suggests that if prosodic struc- ture could be derived for the noun compound Bo- ron epoxy rocket motor chambers, then their parser LEXICAT could reduce the fourteen licit 112 morphosyntactic interpretations to one correct analysis without error (p. 262). (Steedman 1990) explores taking advantage of intonational struc- ture in spoken sentence understanding in the combinatory categorial grammar formalism. (Bear & Price 1990) discusses integrating proso- dy and syntax in parsing spoken English, relative duration of phonetic segments being the one as- pect of prosody examined. Compared with the efforts expended on syn- tactic/semantic disambiguation mechanisms, prosody is still an under-exploited area. No work has yet been carded out which treats prosody at the same level as syntax, semantics, and prag- matics, even though evidence shows that proso- dy is as important as the other means in human understanding of utterances (see, for example, experiments reported in (Price et a11989)). (Scott & Cutler 1984) noticed that listeners can suc- cessfully identify the intended meaning of ambig- uous sentences even in the absence of a disambiguating context, and suggested that speakers can exploit acoustic features to high- light the distinction that is to be conveyed to the listener (p. 450). Our current work incorporates certain prosod- ic information into the process of parsing, com- bining syntax, semantics, pragmatics and prosody for disambiguation 1 . The context of the work is an electronic directory assistance system (Rowles et a11990). In the following sections, an overview of the system is first given (Section 2). Then the parser is described in Section 3. 
Sec- tion 4 discusses how prosody can be employed in helping resolve ambiguity involved in process- 1. Another possible acoustic source to help disambiguation is =segmental phonology", the ap- plication of certain phonological assimilation and elision rules (Scott & Cutler 1984). The current work makes no attempt at this aspect. ing fixed expressions, prepositional phrase at- tachment (PP attachment), and coordinate constructions. Section 5 shows the implementa- tion of the parser. 2. SYSTEM OVERVIEW Our work is aimed at the construction of a prototype system for the understanding of spo- ken requests to an electronic directory assis- tance service, such as finding the phone number and address of a local business that offers partic- ular services. Our immediate work does not concentrate on speech recognition (SR) or lexical access. In- stead, we assume that a future speech recogni- tion system performs phoneme recognition and uses linguistic information during word recogni- tion. Recognition is supplemented by a prosodic feature extractor, which produces features syn- chronized to the word string output by the SR. The output of the recognizer is passed to a sentence-level parser. Here =sentence" really means a conversational move, that is, a contigu- ous utterance of words constructed so as to con- vey a proposition. Parses of conversational moves are passed to a dialogue analyzer that segments the dia- logue into contextually-consistent sub-dialogues (i.e, exchanges) and interpret speaker requests in terms of available system functions. A dia- logue manager manages interaction with the speaker and retrieves database information, 3. PROSODY EXTRACTION As the input to the parser is spoken language, it lacks the segmentation apparent in text. Within a move, there is no punctuation to hint at internal grammatical .structure. In addition, as complete sentences are frequently reduced to phrases, el- lipsis etc. during a dialogue, the Parser cannot use syntax alone for segmentation. Although intonation reflects deeper issues, such as a speakers' intended interpretation, it provides the surface structure for spoken lan- guage. Intonation is inherently supra-segmental, but it is also useful for segmentation purposes where other information is unavailable. Thus, in- tonation can be used to provide initial segmenta- tion via a pre-processor for the parser. Although there are many prosodic features that are potentially useful in the understanding of spoken English, pitch and pause information have received the most attention due to ease of measurement and their relative importance (Cruttenden 1986, pp 3 & 36). Our efforts to date use only these two feature types. We extract pitch and pause information from speech using specifically designed hardware with some software post-processing. The hard- ware performs frequency to amplitude transfor- mation and filtering to produce an approximate pitch contour with pauses. The post-processing samples the pitch con- tour, determines the pitch range and classifies the instantaneous pitch into high, medium and low categories within that range. This is similar to that used in (Hirschberg & Pierrehumbert 1986). Pauses are classed as short (less than 250ms), long (between 250ms and 800ms) or extended (greater than 800ms). These times were empiri- cally derived from spoken information seeking di- alogues conducted over a telephone to human operators. 
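A minimal sketch of this classification step is given below. It is not the authors' post-processing code: all names are illustrative, and the equal-thirds split of the pitch range is an assumption, since the text only says that the range is divided into high, medium and low categories.

```python
# Sketch: classify a pause by duration, and an instantaneous pitch value within the
# speaker's observed range, using the thresholds reported in the text.

def classify_pause(duration_ms):
    """Short < 250 ms, long 250-800 ms, extended > 800 ms."""
    if duration_ms < 250:
        return "short"
    if duration_ms <= 800:
        return "long"
    return "extended"

def classify_pitch(f0, f0_min, f0_max):
    """Map f0 into low / medium / high within the utterance's pitch range.
    The equal-thirds split is an assumption, not taken from the paper."""
    span = max(f0_max - f0_min, 1e-6)
    ratio = (f0 - f0_min) / span
    if ratio < 1.0 / 3.0:
        return "low"
    if ratio < 2.0 / 3.0:
        return "medium"
    return "high"
```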
Short pauses signify strong tum-hold- ing behaviour, long pauses signify weaker turn- holding behaviour and extended pauses signify turn passing or exchange completion (Vonwiller 1991). These interpretations can vary with cer- tain pitch movements, however. Unvoiced sounds are distinguished from pauses by subse- quent synchronisation of prosodic features with the word stream by post-processing. A parser pre-processor then takes the SR word string, pitch markers and pauses, annotat- ing the word string with pitch markers (low marked as = ~ ", medium = - "and high = ^ ") and pauses (short .... and long ..... ). The markers are synchronised with words or syllables. The pre-processor uses the pitch and pause markers to segment the word string into intonationally- consistent groups, such as tone groups (bound- aries marked as = < = and "> ") and moves (//). A tone group is a group of words whose intonation- al structure indicates that they form a major structural component of the speech, which is commonly also a major syntactic grouping (Crut- tenden 1986, pp. 75 - 80). Short conversational moves often correspond to tone groups, while longer moves may consist of several tone groups. With cue words for example, the cue forms its own tone group. 113 Pauses usually occur at points of low transi- tional probability and often mark phrase bound- aries (Cruttenden 1986). In general, although pitch plays an important part, long pauses, indi- cate tone group and move boundaries, and short pauses indicate tone group boundaries. Ex- change boundary markers are dealt with in the dialogue manager (not covered here). Pitch movements indicate turn-holding behaviour, top- ic changes, move completion and information contrastiveness (Cooper & Sorensen 1977; Von- wilier 1991). The pre-processor also locates fixed expres- sions, so that during the parsing nondeterminism can be reduced. A problem here is that a cluster of words may be ambiguous in terms of whether they form a fixed expression or not. "Look after", for example, means =take care of" in "Mary helped John to look after his kid#', whereas "look" and "after" have separate meaning in "rll look after you do so". The pre-processor makes use of tone group information to help resolve the fixed expression ambiguity. A more detailed dis- cussion is given in section 5.2. 4. THE PARSER Once the input is segmented, moves annotat- ed with prosody are input to the parser. The pars- er deals with one move at a time. In general, the intonational structure of a sen- tence and its syntactic structure coincide (Crut- tenden 1986). Thus, prosodic segmentation avoids having the Parser try to extract moves from unsegmented word strings based solely on syntax. It also reduces the computational com- plexity in comparing syntactic and prosodic word groupings. There is a complication, however, in that tone group boundaries and move bound- aries may not align exactly. This is not frequent, and is not present in the material used here. Into- nation is used to limit the range of syntactic pos- sibilities and the parser will align tone group and move syntactic boundaries at a later stage. By integrating syntax and semantics, the Parser is capable of resolving most of the ambig- uous structures it encounters in parsing written English sentences, such as coordinate conjunc- tions, PP attachments, and lexical ambiguity (Huang 1988). Migrating the Parser from written to spoken English is our current focus. 
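Returning to the pre-processor's segmentation step described above, the following sketch shows the kind of grouping it performs. This is a simplification and an assumption about the data layout, not the system's actual code: the real pre-processor also consults pitch movements, places pause markers differently in its output, and treats cue words specially.

```python
# Sketch: group annotated words into tone groups (< ... >) and close a move (//)
# at long or extended pauses; short pauses only close the current tone group.
# Input tokens are (word, pause_after) pairs, pause_after in {None, "short",
# "long", "extended"}; the word string is assumed to carry its pitch markers.

def segment(tokens):
    out, group = [], []

    def close_group():
        if group:
            out.append("<" + " ".join(group) + ">")
            group.clear()

    for word, pause in tokens:
        group.append(word)
        if pause == "short":
            close_group()
            out.append("*")
        elif pause in ("long", "extended"):
            close_group()
            out.append("**")
            out.append("//")
    close_group()
    return " ".join(out)

# e.g. segment([("^yes", "long"), ("^i'd", None), ("^like", "short"),
#               ("-information", None), ("on", None), ("some", None),
#               ("^panel", None), ("beaters", "long")])
```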
Moves input to the Parser are unlikely to be well-formed sentences, as people do not always speak grammatically, or due to the SR's inability to accurately recognise the actual words spoken. The parser first assumes that the input move is lexically correct and tries to obtain a parse for it, employing syntactic and semantic relaxation techniques for handling ill-formed sentences (Huang 1988). If no acceptable analysis is pro- duced, the parser asks the SR to provide the next alternative word string. Exchanges between the parser and the SR are needed for handling situations where an ill- formed utterance gets further distorted by the SR. In these cases other knowledge sources such as pragmatics, dialogue analysis, and dia- logue management must be used to find the most likely interpretation for the input string. We use pragmatics and knowledge of dialogue struc- ture to find the semantic links between separate conversational moves by either participant and resolve indirectness such as pronouns, deictic expressions and brief responses to the other speaker [for more details, see (Rowles, 1989)]. By determining the dialogue purpose of utteranc- es and their domain context, it is then possible to correct some of the insertion and mis-recognised word errors from the SR and determine the com- municative intent of the speaker. The dialogue manager queries the speaker if sentences can- not be analysed at the pragmatic stage. The output of the parser is a parse tree that contains syntactic, semantic and prosodic fea- tures. Most ambiguity is removed in the parse tree, though some is left for later resolution, such as definite and anaphoric references, whose res- olution normally requires inter-move inferences. The parser also detects cue words in its input using prosody. Cue words, such as "now" in "Now, I want to...", are words whose meta-func- tion in determining the structure of dialogues overrides their semantic roles (Reichman 1985).Cue words and phrases are prosodically distinct due to their high pitch and pause separa- tion from tone groups that convey most of the propositional content (Hirschberg & Litman 1987). While relatively unimportant semantically, cue words are very important in dialogue analy- sis due to their ability to indicate segmentation and the linkage of the dialogue components. 5. PROSODY AND DISAMBIGUATION During parsing prosodic information is used to help disambiguate certain structures which cannot be disambiguated syntactically/semanti- cally, or whose processing demands extra ef- forts, if no such prosodic information is available. In general, prosody includes pitch, loudness, du- ration (of words, morphemes and pauses) and rhythm. While all of these are important cues, we are currently focussing on pitch and pauses as these are easily extracted from the waveform and offer useful disambiguation during parsing and segmentation in dialogue analysis. Subse- quent work will include the other features, and further refinement of the use of pitch and pause. At present, for example, we do not consider the length of pauses internal to tone groups, al- though this may be significant. The prosodic markers are used by the parser as additional pre-conditions for grammatical rules, discriminating between possible grammati- cal constructions via consistent intonational structures. 5.1 HOMOGRAPHS Even when using prosody, homographs are a problem for parsers, although a system recognis- ing words from phonemes can make the problem a simpler. 
The word sense of "bank" in "John went to the bank" must be determined from semantics as the sense is not dependent upon vocalisation, but the difference between the homograph "content" in "contents of a book" and "happy and content" can be determined through differing syllabic stress and resultant different phonemes. Thus, different homographs can be detected during lexical access in the SR independently of the Parser.

5.2 FIXED EXPRESSIONS

As is mentioned in subsection 4.1, when the pre-processor tries to locate fixed expressions, it may face multiple choices. Some fixed expressions are obligatory, i.e., they form single semantic units, for instance "look forward to" often means "expect to feel pleasure in (something about to happen)".[2] Some other strings may or may not form single semantic units, depending on the context. "Look after" and "win over" are two examples. Without prosodic information, the pre-processor has to make a choice blindly, e.g. treating all potential fixed expressions as such and on backtracking dissolve them into separate words. This adds to the nondeterminism of the parsing. As prosodic information becomes available, the nondeterminism is avoided.

[2] Longman Dictionary of Contemporary English, 1978.

In the system's fixed expression lexicon, we have entries such as "fix_e([gave, up], gave_up)". The pre-processor contains a rule to the following effect, which conjoins two (or more) words into one fixed expression only when there is no pause following the first word:

match_fix_e([FirstW, SecondW|RestW], [FixedE|MoreW]) :-
    no_pause_in_between(FirstW, SecondW),
    fix_e([FirstW, SecondW], FixedE),
    match_fix_e(RestW, MoreW).

This rule produces the following segmentations:

(5.1a) <-He -gave> *<^up to ^two hundred dollars> *<-to the ^charity>**//
(5.1b) <-He ^gave ^up> *<^two hundred dollars> *<-for damage compensation>**//

In (5.1a), gave and up to are treated as belonging to two separate tone groups, whereas in (5.1b) gave up is marked as one tone group. The pre-processor checking its fixed expression dictionary will therefore convert up to in (5.1a) to up_to, and gave up in (5.1b) to gave_up.

5.3 PP ATTACHMENT

(Steedman 1990 & Cruttenden 1986) observed that intonational structure is strongly constrained by meaning. For example, an intonation imposing bracketings like the following is not allowed:

(5.2) <Three cats> <in ten prefer corduroy>//

Conversely, the actual contour detected for the input can be significant in helping decide the segmentation and resolving PP attachment. In the following sentences, e.g.,

(5.3) <I would like> <information on her arrival> ["on her arrival" attached to "information"]

(5.4) <I would like> <information> ** <on her arrival> ["on her arrival" attached to "like"]

the pause after "information" in (5.4), but not in (5.3), breaks the bracketed phrase in (5.3) into two separate tone groups with different attachments. In a clash between prosodic constraints and syntactic/semantic constraints, the latter takes precedence over the former. For instance, in:

(5.5) <I would like> <information> ** <on some panel beaters in my area>

although the intonation does not suggest attachment of the PP to "information", since the semantic constraints exclude attachment to "like" meaning "choose to have" ("On panel beaters [as a location or time] I like information" does not rate as a good interpretation), it is attached to "information" anyway (which satisfies the syntactic/semantic constraints).
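A schematic rendering of this attachment strategy follows. It is only a sketch: the predicate names and data layout are assumptions, not the Parser's actual interface, and the real system interleaves this decision with its syntactic and semantic processing.

```python
# Sketch of the preference in Section 5.3: prosody suggests verb attachment when a
# pause separates the PP from the preceding NP, but semantic constraints can veto it.

def attach_pp(pp, noun_head, verb_head, pause_before_pp, semantically_compatible):
    """Return the head the PP is attached to, or None if neither is acceptable."""
    prosodic_choice = verb_head if pause_before_pp else noun_head
    if semantically_compatible(prosodic_choice, pp):
        return prosodic_choice
    # fall back to whichever attachment the semantics does license
    other = noun_head if prosodic_choice == verb_head else verb_head
    return other if semantically_compatible(other, pp) else None

# Mirroring (5.5): a pause precedes "on some panel beaters", but attaching the PP
# to "like" is semantically excluded, so it is attached to "information" instead.
```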
5.4 COORDINATE CONSTRUCTIONS

Coordinate constructions can be highly ambiguous, and are handled by rules such as:

Np --> det(Det), adj(Adj),
    /* check if a pause follows the adjective */
    {check_pause(Flag)},
    noun(Noun),
    {construct_np(Det, Adj, Noun, NP)},
    conjunction(NP, Flag, FinalNP).

In the conjunction rule, if two noun phrases are joined, we check for any pauses to see if the adjective modifying the first noun should be copied to allow it to modify the second noun. Similarly, we check for a pause preceding the conjunction to decide if we should copy the post modifier of the second noun to the first noun phrase. For instance, the text-form phrase:

(5.6) old men and women in glasses

can produce three possible interpretations:

(5.6a) [old men (in glasses)] and [(old) women in glasses]
(5.6b) [old men] and [women in glasses]
(5.6c) [old men (in glasses)] and [women in glasses]

Figure 1. Pitch contours for "old men and women in glasses" uttered with different intended interpretations: (1) neutral intonation; (2) attachment of 2 phrases; (3) isolated; (4) attachment of 1 phrase only.

Figure 1 shows some measured pitch contours for utterances of phrase (5.6) with an attempt by the speaker to provide the interpretations (a) through (c). Note that the contour is smoothed by the hardware pitch extraction. Pauses and unvoiced sounds are distinguished in the software post-processor. In all waveforms "old" and "glasses" have high pitch. In (5.6a), a short pause follows "old", indicating that "old" modifies "men and women in glasses" as a sub-phrase. This is in contrast to (5.6b) where the short pause appears after "men", indicating "old men" as one conjunct and "women in glasses" as the other. Notice also that the duration of "men" in (5.6b) is longer than in (5.6a). In (5.6c) we have two major pauses, a shorter one after "men" and a longer one after "women". Using this variation in pause locations, the parser produces the correct interpretation (i.e. the speaker's intended interpretation) for sentences (5.6a-c).

6. IMPLEMENTATION

Prosodic information, currently the pitch contour and pauses, is extracted by hardware and software. The hardware detects pitch and pauses from the speech waveform, while the software determines the duration of pauses, categorises pitch movements and synchronises these to the sequence of lexical tokens output from a hypothetical word recogniser. The parser is written in the Definite Clause Grammars formalism (Pereira et al. 1980) and runs under BIMProlog on a SPARCstation 1. The pitch and pause extractor as described here is also complete.
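As a toy illustration of how the pause locations described in Section 5.4 select among the readings of (5.6), consider the sketch below. It is purely illustrative and is not the grammar the parser actually uses; the representation of pauses as a set of words is an assumption.

```python
# Toy sketch: choose a bracketing for "old men and women in glasses" from the
# positions of the pauses observed in the utterance.

def coordination_reading(pause_after):
    """pause_after: set of words that are immediately followed by a pause."""
    if "old" in pause_after:
        # (5.6a): "old" and "in glasses" distribute over both conjuncts
        return "[old men (in glasses)] and [(old) women in glasses]"
    if "men" in pause_after and "women" in pause_after:
        # (5.6c): both conjuncts take the postmodifier
        return "[old men (in glasses)] and [women in glasses]"
    if "men" in pause_after:
        # (5.6b): no copying of modifiers
        return "[old men] and [women in glasses]"
    return "underdetermined"

# coordination_reading({"old"})           -> reading (5.6a)
# coordination_reading({"men"})           -> reading (5.6b)
# coordination_reading({"men", "women"})  -> reading (5.6c)
```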
To illustrate the function of the prosodic fea- ture extractor and the Parser pre-processor, the following sentence was uttered and its pitch con- tour analysed: "yes i'd like information on some panel beaters" Prosodic feature extraction produced: ** Ayes ** ^i'd Alike * -information on some ^panel beaters **// The Parser pre-processor then segments the input (in terms of moves and tone groups) for the Parser, resulting in: **< Ayes> **//< ^i'd Alike> * <-information on some ^panel beaters> **// The actual output of the pre-processor is in two parts, one an indexed string of lexical items plus prosodic information, the other a string of tone groups indicating their start and end points: [** Ayes, 1] [**// ^i, 2] [would, 3] [Alike, 4] [* -infor- mation, 5] [on, 6] [some, 7] ["panel_ beaters, 8] [**//, 9] <1,1> <2, 4> < 5, 8> <9,9> We use a set of sentences 3, all beginning with "Before the King~feature race~', but with dif- ferent intonation to provide different interpreta- tions, to illustrate how syntax, semantics and 3. Adapted from (Briscoe & Boguraev 1984). prosody (6.1) *horse> are used for disambiguation: <~ Before the -King ^races>*<-his <is -usually ^groomed>**//. (6.2) <~Before the -King> *<-races his ^horse> **<it's -usually ^groomed>**//. (6.3) <~Before the ^feature ~races> *<-his ^horse is -usually ^groomed>**//. The syntactic ambiguity of "before" (preposi- tion in 6.3 and subordinate conjunction in 6.1 and 6.2) is solved by semantic checking: "race" as a verb requires an animate subject, which "the King" satisfies, but not "the feature"; "race" as a noun can normally be modified by other nouns such as "feature", but not "King '4. However, when prosody information is not used the time needed for parsing the three sentences varies tremendously, due to the top-down, depth-first nature of the parser. (6.3) took 2.05 seconds to parse, whereas (6.1) took 9.34 seconds, and (6.2), 41.78 seconds. The explanation lies in that on seeing the word "before" the parser made an assumption that it was a preposition (correct for 6.3), and took the "wrong" path before backtrack- ing to find that it really was a conjunction (for 6.1 and 6.2). Changingthe order of rules would not help here: if the first assumption treats "before" as a conjunction, then parsing of (6.3) would have been slowed down. We made one change to the grammar so that it takes into account the pitch information accom- panying the word "races" to see if improvement can be made. The parser states that a noun- noun string can form a compound noun group only when the last noun has a low pitch. That is, the feature ~races forms a legitimate noun phrase, while the King -races and the King '~rac- es do not. This is in accordance with one of the best known English stress rules, the "Compound Stress Rule" (Chomsky and Halle 1968), which asserts that the first lexically stressed syllable in a constituent has the primary stress if the constit- uent is a compound construction forming an ad- jective, verb, or noun. 4. It is very difficult, though, to give a clear cut as to what kind of nouns can function as noun modifiers. King races may be a perfect noun group in certain context. 117 We then added the pause information in the parser along similar lines. The following is a sim- plified version of the VP grammar to illustrate the parsing mechanism: /* Noun phrase rule. "Mods" can be a string of adjectives or nouns: major (races), feature (races), etc.*/ Np--> Det, Mods,HeadNoun. 
/* Head noun is preferred to be low-pitched. */
HeadNoun --> [Noun], {lowpitched(Noun)}.

/* Verb phrase rule 1. */
Vp --> V_intr.

/* Verb phrase rule 2. Some semantic checking is carried out after a transitive verb and a noun phrase is found. */
Vp --> V_tr, Np, {match(V_tr, Np)}.

/* If a verb is found which might be used as intransitive, check if there is a pause following it. */
V_intr --> [Verb], {is_intransitive(Verb)}, Pause.

/* Otherwise see if the verb can be used as transitive. */
V_tr --> [Verb], {is_transitive(Verb)}.

/* This succeeds if a pause is detected. */
Pause --> [pause].

The pause information following "races" in sentences (6.1) and (6.2) thus helps the parser to decide if "races" is transitive or intransitive, again reducing nondeterminism. The above rules specify only the preferred patterns, not absolute constraints. If they cannot be satisfied, e.g. when there is no pause detected after a verb which is intransitive, the string is accepted anyway. The parse times for sentences (6.1) to (6.3) with and without prosodic rules in the parser are given in Table 6.1.

        Without Prosody   With Prosody
(6.1)   9.34              1.23
(6.2)   41.78             8.69
(6.3)   2.05              1.27

Table 6.1 Parsing Times for the "races" sentence (in seconds).

Table 6.2 shows how the parser performed on the following sentences:

(6.4) *I'll look* ^after the -boy ~comes**//
(6.5) *He ^gave* ^up to ^two *hundred dollars to the -charity**//
(6.6) ^Now* -I want -some -information on *panel *beaters -in ~Clayton**//

        Without Prosody   With Prosody
(6.4)   6.59              1.19
(6.5)   41.38             2.49
(6.6)   2.15              2.55

Table 6.2 Parsing Times for sentences (6.4) to (6.6) (in seconds).

While (6.6) is slower with prosodic annotation, the parser correctly recognises "now" as a cue word rather than as an adverb.

7. DISCUSSION

We have shown that by integrating prosody with syntax and semantics in a natural language parser we can improve parser performance. In spoken language, prosody is used to isolate sentences at the parser's input and again to determine the syntactic structure of sentences by seeking structures that are intonationally and syntactically consistent.

The work described here is in progress. The prosodic features with which sentences have been annotated are the output of our feature extractor, but synchronisation is by hand as we do not have a speech recognition system. As shown by the "old men ..." example, the system is capable of accurately producing correct interpretations, but as yet, no formal experiments using data extracted from ordinary telephone conversations and human comparisons have been performed. The aim has been to investigate the potential for the use of prosody in parsers intended for use in speech understanding systems.

(Bear & Price 1990) modified the grammar they use to change all the rules of the form A -> B C to the form A -> B Link C, and add constraints to the rules' application in terms of the value of the "breaking indices" based on relative duration of phonetic segments. For instance the rule VP -> V Link PP applies only when the value of the link is either 0 or 1, indicating a close coupling of neighbouring words. Duration is thus tak-
Such integration may have been more of a problem if the basic parsing ap- proach had been different. Also relevant is the choice of English, as the integration may not car- ry across to other languages. Future research aims at a more thorough treatment of prosody. Research currently under- way, is also focussing on the use of prosody and dialogue knowledge for dialogue analysis and turn management. ACKNOWLEDGEMENTS The permission of the Director, Research, AOTC to publish the above paper is hereby ac- knowledged. The authors have benefited from discussions with Robin King, Peter Sefton, Julie Vonwiller and Christian Matthiessen, Sydney University, and Muriel de Beler, Telecommunica- tion Research Laboratories, who are involved in further work on this project. The authors would also like to thanks the anonymous reviewers for positive comments on paper improvements. REFERENCES Bear, J. & Price, P. J. (1990), Prosody, Syntax and Parsing. 28th Annual Meeting of the Assoc. for Computational Linguistics (pp. 17-22). Briscoe, E.J. & Boguraev, B.K. (1984), Con- trol Structures and Theories of Interaction in Speech Understanding Systems. 22th Annual Meeting of the Assoc. for Computational Linguis- tics (pp. 259-266) Brown, G., & Yule, G., (1983), Discourse Analysis, Cambridge University Press. Chomsky, N.& Halle, M. (1968), The Sound Pattern of English, (New York: Harper and Row). Cooper, W.E. & Sorensen, J.M., (1977), Fun- damental Frequency Contours at Syntactic Boundaries, Journal of the Acoustical Society of America, Vol. 62, No. 3, September. Cruttenden, A., (1986), Intonation, Cam- bridge University Press. Hirschberg, J. & Litman, D., (1987), Now Let's Talk About Now: Identifying Cue Phrases Intona- tionally, 25th Annual Meeting of the Assoc. for Computational Linguistics. Hirschberg, J. & Pierrehumbert, J., The Into- national Structure of Discourse, 24th Annual Meeting of the Assoc. for Computational Linguis- tics, 1986. Huang, X-M. (1988), Semantic Analysis in XTRA, An English - Chinese Machine Translation System, Computers and Translation 3, No.2. (pp. I 01-120) Pereira, F. & Warren, D. (1980), Definite Clause Grammars for Language Analysis - A Survey of the Formalism and A Comparison with • Augmented Transition Networks. Artificial Intelli- gence, 13:231-278. Price, P. J., Ostendorf, M. & Wightmen, C.W. (1989), Prosody and Parsing. DARPA Workshop on Speech and Natural Language, Cape Cod, October 1989 (pp.5-11). Reichman, R. (1985), Getting Computers to Talk Like You and Me, (Cambridge: MIT Press). Rowles, C.D. (1989), Recognizing User Inten- tions from Natural language Expressions, First Australia-Japan Joint Symposium on Natural Language Processing, (pp. 157-I 66). Rowles, C.D., Huang, X., and Aumann, G., (1990), Natural Language Understanding and Speech Recognition: Exploring lhe Connections, Third Australian International Conference on Speech Science and Technology, (pp. 374 - 382). Steedman, M. (1990),Structure and Intonation in Spoken Language Understanding. 28th Annual Meeting of the Assoc. for Computational Linguis- tics (pp. 9-I 6). Scott, D.R & Cutler, A. (1984), Segmental Phonology and the Perception of Syntactic Struc- ture, Journal of Verbal Learning and Verbal Be- havior23, (pp. 450-466). Vonwiller, J. (1991),An Empirical Study of Some Features of Intonation, Second Australia- Japan Natural Language Processing Sympo- sium, Japan, November, (pp 66-71 ). 119
UNDERSTANDING NATURAL LANGUAGE INSTRUCTIONS: THE CASE OF PURPOSE CLAUSES Barbara Di Eugenio * Department of Computer and Information Science University of Pennsylvania Philadelphia, PA [email protected] ABSTRACT This paper presents an analysis of purpose clauses in the context of instruction understanding. Such analysis shows that goals affect the interpretation and / or exe- cution of actions, lends support to the proposal of using generation and enablement to model relations between actions, and sheds light on some inference processes necessary to interpret purpose clauses. INTRODUCTION A speake~ (S) gives instructions to a hearer CrI) in order to affect H's behavior. Researchers including (Winograd, 1972), (Chapman, 1991), (Vere and Bick- more, 1990), (Cohen and Levesque, 1990), (Alterman et al., 1991) have been and are addressing many complex facets of the problem of mapping Natural Language in- structions onto an agent's behavior. However, an aspect that no one has really considered is computing the ob- jects of the intentions H's adopts, namely, the actions to be performed. In general, researchers have equated such objects with logical forms extracted from the NL input. This is perhaps sufficient for simple positive impera- tives, but more complex imperatives require that action descriptions be computed, not simply extracted, from the input instruction. To clarify my point, consider: Ex. 1 a) Place a plank between two ladders. b) Place a plank between two ladders to create a simple scaffold. In both a) and b), the action to be executed is place a plank between two ladders. However, Ex. 1.a would be correctly interpreted by placing the plank anywhere between the two ladders: this shows that in b) H must be inferring the proper position for the plank from the expressed goal to create a simple scaffold. Therefore, the goal an action is meant to achieve constrains the interpretation and / or the execution of the action itself. The infinitival sentence in Ex. 1.b is a purpose clause, *Mailing addxess: IRCS - 3401, Walnut St - Suite 40(0 - Philadelphia, PA, 19104 - USA. which, as its name says, expresses the agent's purpose in performing a certain action. The analysis of purpose clauses is relevant to the problem of understanding Nat- ural Language instructions, because: 1. Purpose clauses explicitly encode goals and their interpretation shows that the goals that H adopts guide his/her computation of the action(s) to per- form. 2. Purpose clauses appear to express generation or en- ablement, supporting the proposal, made by (Allen, 1984), (Pollack, 1986), (Grosz and Sidner, 1990), (Balkansld, 1990), that these two relations are nec- essary m model actions. After a general description of purpose clauses, I will concentrate on the relations between actions that they express, and on the inference processes that their in- terpretation requires. I see these inferences as instan- tiations of general accommodation processes necessary to interpret instructions, where the term accommodation is borrowed from (Lewis, 1979). I will conclude by describing the algorithm that implements the proposed inference processes. PURPOSE CLAUSES I am not the first one to analyze purpose clauses: how- ever, they have received attention almost exclusively from a syntactic point of view - see for example (Jones, 1985), (l-Iegarty, 1990). Notice that I am not using the term purpose clause in the technical way it has been used in syntax, where it refers to infinitival to clauses adjoined to NPs. 
In contrast, the infinitival clauses I have concentrated on are adjoined to a matrix clause, and are termed rational clauses in syntax; in fact all the data I will discuss in this paper belong to a particular subclass of such clauses, subject-gap rational clauses. As far as I know, very little attention has been paid to purpose clauses in the semantics literature: in (1990), Jackendoff briefly analyzes expressions of purpose, goal, or rationale, normally encoded as an infinitival, in order 120 to-phrase, or for-phrase. He represents them by means of a subordinating function FOR, which has the adjunct clause as an argument; in turn, FOR plus its argument is a restrictive modifier of the main clause. However, Jackendoff's semantic decomposition doesn't go beyond the construction of the logical form of a sentence, and he doesn't pursue the issue of what the relation between the actions described in the matrix and adjunct really is. The only other work that mentions purpose clauses in a computational setting is (Balkanski, 1991). However, she doesn't present any linguistic analysis of the data; as I will show, such analysis raises many interesting issues, such as t: • It is fairly clear that S uses purpose clauses to explain to H the goal/~ to whose achievement the execution of contributes. However, an important point that had been overlooked so far is that the goal/~ also constrains the interpretation of ~, as I observed with respect to Ex. 1.b. Another example in point is: Ex. 2 Cut the square in half to create two triangles. The action to be performed is cutting the square in half. However, such action description is underspecified, in that there is an infinite number of ways of cutting a square in half: the goal create two triangles restricts the choice to cutting the square along one of the two diagonals. • Purpose clauses relate action descriptions at different levels of abstraction, such as a physical action and an abstract process, or two physical actions, but at different levels of granularity: Ex. 3 Heat on stove to simmer. • As far as what is described in purpose clauses, I have been implying that both matrix and purpose clauses de- scribe an action, c~ and/~ respectively. There are rare cases - in fact, I found only one - in which one of the two clauses describes a state ~r: Ex. 4 To be successfully covered, a wood wall must be flat and smooth. I haven't found any instances in which both matrix and purpose clauses describe a state. Intuitively, this makes sense because S uses a purpose clause to inform H of the purpose of a given action 2 • In most cases, the goal /~ describes a change in the world. However, in some cases 1. The change is not in the world, but in H's knowl- edge. By executing o~, H can change the state of his knowledge with respect to a certain proposition or to the value of a certain entity. 1I collected one hundred and one consecutive instances of purpose clauses from a how-to-do book on installing wall cov- erings, and from two craft magazines. ~There are clearly other ways of describing that a state is the goal of a certain action, for example by means of so~such that, but I won't deal with such data here. Ex. 5 You may want to hang a coordinating border around the room at the top of the walls. To deter- mine the amount of border, measure the width (in feet) of all walls to be covered and divide by three. Since borders are sold by the yard, this will give you the number of yards needed. Many of such examples involve verbs such as check, make sure etc. 
followed by a that- complement describing a state ~b. The use of such verbs has the pragmatic effect that not only does H check whether ~b holds, but, if ~b doesn't hold, s/he will also do something so that ff comes to hold. Ex. 6 To attach the wires to the new switch, use the paper clip to move the spring type clip aside and slip the wire into place. Tug gently on each wire to make sure it's secure. 2. The purpose clause may inform H that the world should not change, namely, that a given event should be prevented from happening: Ex. 7 Tape raw edges of fabric to prevent threads from raveling as you work. • From a discourse processing point of view, interpret- ing a purpose clause may affect the discourse model, in particular by introducing new referents. This happens when the effect of oL is to create a new object, and/~ identifies it. Verbs frequently used in this context are create, make, form etc. Ex. 8 Join the short ends of the hat band to form a circle. Similarly, in Ex. 2 the discourse referents for the tri- angles created by cutting the square in half, and in Ex. 5 the referent for amount of border are introduced. RELATIONS BETWEEN ACTIONS So far, I have mentioned that oe contributes to achiev- ing the goal/~. The notion of contribution can be made more specific by examining naturally occurring purpose clauses. In the majority of cases, they express genera- tion, and in the rest enablement. Also (Grosz and Sid- ner, 1990) use contribute as a relation between actions, and they define it as a place holder for any relation ... that can hold between actions when one can be said to contribute (for example, by generating or enabling) to the performance of the other. However, they don't jus- tify this in terms of naturally occurring data. Balkanski (1991) does mention that purpose clauses express gen- eration or enablement, but she doesn't provide evidence to support this claim. GENERATION Generation is a relation between actions that has been extensively studied, first in philosophy (Goldman, 1970) and then in discourse analysis (Allen, 1984), (Pollack, 1986), (Grosz and Sidner, 1990), (Balkanski, 1990). According to Goldman, intuitively generation is the re- lation between actions conveyed by the preposition by in English - turning on the light by flipping the switch. 121 More formally, we can say that an action a conditionally generates another action/~ iff 3: 1. a and/~ are simultaneous; 2. a is not part of doing/~ (as in the case of playing a C note as part of playing a C triad on a piano); 3. when a occurs, a set of conditions C hold, such that the joint occurrence of a and C imply the occur- rence of/L In the case of the generation relation between flipping the switch and turning on the light, C will include that the wire, the switch and the bulb are working. Although generation doesn't hold between o~ and fl if is part of a sequence of actions ,4 to do/~, generation may hold between the whole sequence ,4 and/~. Generation is a pervasive relation between action de- scriptions in naturally occurring data. However, it ap- pears from my corpus that by clauses are used less fre- quently than purpose clauses to express generation 4: about 95% of my 101 purpose clauses express gener- ation, while in the same corpus there are only 27 by clauses. It does look like generation in instructional text is mainly expressed by means of purpose clauses. 
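As a concrete (if simplified) illustration, generation and enablement pairs of the kind discussed here could be recorded as simple records and retrieved by goal. The sketch below is only an assumption about such a representation, not the formalism actually adopted in this paper; the example entries come from the flipping-the-switch and protective-plate examples.

```python
# Sketch: recording generation and enablement relations between action descriptions
# and looking up the simple plans whose second member is a given goal.
from dataclasses import dataclass

@dataclass(frozen=True)
class Generation:
    generator: str      # alpha: the action that is actually performed
    generatee: str      # beta: the action thereby performed as well
    conditions: tuple   # C: conditions under which the generation goes through

@dataclass(frozen=True)
class Enablement:
    enabler: str        # alpha: brings about conditions needed for beta
    enablee: str        # beta: still has to be executed afterwards

LIBRARY = [
    Generation("flip the switch", "turn on the light",
               ("wire works", "switch works", "bulb works")),
    Enablement("unscrew the protective plate", "take the plate off"),
]

def relations_for_goal(beta):
    """Simple plans whose second member is the goal action beta."""
    matches = []
    for r in LIBRARY:
        target = r.generatee if isinstance(r, Generation) else r.enablee
        if target == beta:
            matches.append(r)
    return matches
```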
They may express either a direct generation relation between and/~, or an indirect generation relation between and/~, where by indirect generation I mean that ~ be- longs to a sequence of actions ,4 which generates 8. ENABLEMENT Following first Pollack (1986) and then Balkanski (1990), enablement holds between two actions ~ and /~ if and only if an occurrence of ot brings about a set of conditions that are necessary (but not necessarily suffi- cien 0 for the subsequent performance of 8. Only about 5% of my examples express enablement: Ex. 9 Unscrew the protective plate to expose the box. Unscrew the protective plate enables taking the plate off which generates exposing the box. GENERATION AND ENABLEMENT IN MODELING ACTIONS That purpose clauses do express generation and enable- ment is a welcome finding: these two relations have been proposed as necessary to model actions (Allen, 1984), (Pollack, 1986), (Grosz and Sidner, 1990), (Balkanski, 1990), but this proposal has not been jus- tiffed by offering an extensive analysis of whether and how these relations are expressed in NL. 3Goldman distinguishes among four kinds of generation re- lations: subsequent work has been mainly influenced by con- ditional generation. 4Generation can also be expressed with a simple free ad- junct; however, this use of free adjuncts is not very common - see 0hrebber and Di Eugenio, 1990). 122 A further motivation for using generation and enable- ment in modeling actions is that they allow us to draw conclusions about action execution as well - a particu- larly useful consequence given that my work is taking place in the framework of the Animation from Natural Language - AnimNL project (Badler eta/., 1990; Web- ber et al., 1991) in which the input instructions do have to be executed, namely, animated. As has already been observed by other researchers, ff generates /~, two actions are described, but only a, the generator, needs to be performed. In Ex. 2, there is no creating action per se that has to be executed: the physical action to be performed is cutting, constrained by the goal as explained above. In contrast to generation, if a enables/~, after execut- ing or, fl still needs to be executed: a has to temporally precede/~, in the sense that a has to begin, but not nec- essarily end, before/3. In Ex. 10, ho/d has to continue for the whole duration offal/: Ex. 10 Hold the cup under the spigot to fill it with coffee. Notice that, in the same way that the generatee affects the execution of the generator, so the enabled action affects the execution of the enabling action. Consider the difference in the interpretation of to in go to the mirror, depending upon whether the action to be enabled is seeing oneself or carrying the mirror somewhere else. INFERENCE PROCESSES So far, I have been talking about the purpose clause constraining the interpretation of the matrix clause. I will now provide some details on how such constraints are computed. The inferences that I have identified so far as necessary to interpret purpose clauses can be de- scribed as 1. Computing a more specific action description. 2. Computing assumptions that have to hold for a cer- tain relation between actions to hold. Computing more specific action descriptions. In Ex. 2 - Cut the square in half to create two triangles - it is necessary to find a more specific action al which will achieve the goal specified by the purpose clause, as shown in Fig. 1. For Ex. 
2 we have β = create two triangles, α = cut the square in half, α1 = cut the square in half along the diagonal. The reader will notice that the inputs to accommodation are linguistic expressions, while its outputs are predicate-argument structures: I have used the latter in Fig. 1 to indicate that accommodation infers relations between action types. However, as I will show later, the representation I adopt is not based on predicate-argument structures. Also notice that I am using Greek symbols for both linguistic expressions and action types: the context should be sufficient to disambiguate which one is meant.

Figure 1: Schematic depiction of the first kind of accommodation

Figure 2: Schematic depiction of the second kind of accommodation

Computing assumptions. Let's consider:

Ex. 11 Go into the other room to get the urn of coffee.

Presumably, H doesn't have a particular plan that deals with getting an urn of coffee. S/he will have a generic plan about get x, which s/he will adapt to the instructions S gives him.[5] In particular, H has to find the connection between go into the other room and get the urn of coffee. This connection requires reasoning about the effects of go with respect to the plan get x; notice that the (most direct) connection between these two actions requires the assumption that the referent of the urn of coffee is in the other room. Schematically, one could represent this kind of inference as in Fig. 2 - β is the goal, α the instruction to accommodate, Ak the actions belonging to the plan to achieve it, C the necessary assumptions.

It could happen that these two kinds of inference need to be combined: however, no example I have found so far requires it.

INTERPRETING Do α to do β

In this section, I will describe the algorithm that implements the two kinds of accommodation described in the previous section. Before doing that, I will make some remarks on the action representation I adopt and on the structure of the intentions - the plan graph - that my algorithm contributes to building.

[5] Actually H may have more than one single plan for get x, in which case go into the other room may in fact help to select the plan the instructor has in mind.

Action representation. To represent action types, I use an hybrid system (Brachman et al., 1983), whose primitives are taken from Jackendoff's Conceptual Structures (1990); relations between action types are represented in another module of the system, the action library. I'd like to spend a few words justifying the choice of an hybrid system: this choice is neither casual, nor determined by the characteristics of the AnimNL project.

Generally, in systems that deal with NL instructions, action types are represented as predicate-argument structures; the crucial assumption is then made that the logical form of an input instruction will exactly match one of these definitions. However, there is an infinite number of NL descriptions that correspond to a basic predicate-argument structure: just think of all the possible modifiers that can be added to a basic sentence containing only a verb and its arguments. Therefore it is necessary to have a flexible knowledge representation system that can help us understand the relation between the input description and the stored one.
I claim that hybrid KR systems provide such flexibility, given their virtual lattice structure and the classification algorithm operating on the lattice: in the last section of this paper I will provide an example supporting my claim. Space doesn't allow me to deal with the reason why Conceptual Structures are relevant, namely, that they are useful to compute assumptions. For further details, the interested reader is referred to (Di Eugenio, 1992; Di Eugenic) and White, 1992). Just a reminder to the reader that hybrid systems have two components: the terminological box, or T-Box, where concepts are defined, and on which the classi- fication algorithm works by computing subsumption re- lations between different concepts. The algorithm is cru- cial for adding new concepts to the KB: it computes the subsumption relations between the new concept and all the other concepts in the lattice, so that it can "Position" the new concept in the right place in the lattice. The other component of an hybrid system is the assertional box, or A-box, where assertions are stored, and which is equipped with a theorem-prover. In my case, the T-Box contains knowledge about ac- tion types, while assertions about individual actions - instances of the types - are contained in the A-Box: such individuals correspond to the action descriptions contained in the input instructions 6 The action library contains simple plans relating ac- tions; simple plans are either generation or enablement relations between pairs: the first member of the pair is either a single action or a sequence of action, and the second member is an action. In case the first member of the pair is an individual action, I will talk about direct generation or enablement. For the moment, generation and enablement are represented in a way very similar to (Balkanski, 1990). The plan graph represents the structure of the inten- tions derived from the input instructions. It is composed of nodes that contain descriptions of actions, and arcs that denote relations between them. A node contains the Conceptual Structures representation of an action, augmented with the consequent state achieved after the execution of that action. The arcs represent, among oth- ers: temporal relations; generation; enablement. The plan graph is built by an interpretation algorithm that works by keeping track of active nodes, which for the moment include the goal currently in focus and the nodes just added to the graph; it is manipulated by var- ious inference processes, such as plan expansion, and plan recognition. My algorithm is described in Fig. 3 7. Clearly the inferences I describe are possible only because I rely ~Notice that these individuals are simply instances of generic concepts, and not necessarily action tokens, namely, nothing is asserted with regard to their happening in the world. rAs I mentioned earlier in the paper, the Greek symbols on the other AnimNL modules for 1) parsing the in- put and providing a logical form expressed in terms of Conceptual Structures primitives; 2) managing the dis- course model, solving anaphora, performing temporal inferences etc (Webber eta/., 1991). AN EXAMPLE OF THE ALGORITHM I will conclude by showing how step 4a in Fig. 3 takes advantage of the classification algorithm with which hy- brid systems are equipped. Consider the T-Box, or better said, the portion of T- Box shown in Fig. 4 s. Given Ex. 
2 - Cut the square in half to create two triangles - as input, the individual action description cut (the) square in half will be asserted in the A-Box and recognized as an instance of ~ - the shaded concept cut (a) square in half - which is a descendant of cut and an abstraction of o: - cut (a) square in half along the diagonal, as shown in Fig. 5 9. Notice that this does not imply that the concept cut (a) square in half is known beforehand: the classification process is able to recognize it as a virtual concept and to find the right place for it in the lattice 10. Given that a is ancestor of o J, and that oJ generates/~ - create two triangles, the fact that the action to be performed is actually o~ and not oL can be inferred. This implements step 4(a)ii. The classification process can also help to deal with cases in which ~ is in conflict with to - step 4(a)iv. If were cut (a) square along a perpendicular axis, a con- flict with o~ - cut (a) square in half along the diagonal - would be recognized. Given the T-Box in fig. 4, the classification process would result in o~ being a sister to w: my algorithm would try to unify them, but this would not be possible, because the role fillers of location on and w cannot be unified, being along(perpendicular- axis) and along(diagonal) respectively. I haven't ad- dressed the issue yet of which strategies to adopt in case such a conflict is detected. Another point left for future work is what to do when step 2 yields more than one simple plan. The knowledge representation system I am using is BACK (Peltason et al., 1989); the algorithm is being implemented in QUINTUS PROLOG. refer both to input descriptions and to action types. SThe reader may find that the representation in Fig. 4 is not very perspicuous, as it mixes linguistic expressions, such as along(diagonal), with conceptual knowledge about entities. Actually, roles and concepts are expressed in terms of Con- ceptual Structures primitives, which provide a uniform way of representing knowledge apparently belonging to different types. However, a T-Box expressed in terms of Conceptual Structures becomes very complex, so in Fig. 4 I adopted a more readable representation. 9The agent role does not appear on cut square in half in the A-Box for the sake of readability. 1°In fact, such concept is not really added to the lattice. 124 Input: the Conceptual Structures logical forms for ~ and t, the current plan graph, and the list of active nodes. 1. Add to A-Box individuals corresponding to the two logical forms. Set flag ACCOM if they don't exactly match known concepts. 2. Retrieve from the action library the simple plan(s) associated with /5 - generation relations in which /5 is the generate., enablement relations in which/5 is the enablee. 3. If ACCOM is not set (a) If there is a direct generation or enablement relation between ~ and/5, augment plan graph with the structure derived from it, after calling compute-assumptions. (b) If there is no such direct relation, recursively look for possible connections between e and the components 7i of sequences that either generate or enable/5. Augment plan graph, after calling c omput e- a s s umpt i on s. 4. If ACCOM is set, (a) If there is ~a such that oJ directly generates or enables/5, check whether i. w is an ancestor of c~: take c~ as the intended action. ii. ~o is a descendant of c~: take o~ as the intended action. iii. 
If w and e are not ancestors of each other, but they can be unified - all the information they provide is compatible, as in the case of cut square in half along diagonal and cut square carefully - then their unification w U c~ is the action to be executed. iv. If o: and ~ are not ancestors of each other, and provide conflicting information - such as cut square along diagonal and cut square along perpendicular axis - then signal failure. (b) If there is no such w, look for possible connections between ~ and the components 7i of sequences that either generate or enable/5, as in step 3b. Given that ~ is not known to the system, apply the inferences described in 4a to c~ and 7/. Figure 3: The algorithm for Do ~ to do 125 O earnest @ role V/R (Value Rcm~iction) / ,.on .... Figure 4: A portion of the action hierarchy individual ,,,.,,.,,,,,..,, instantiates T_.OX -,,i .\ ~,~ /--~ location / / A-BOX Figure 5: Dealing with less specific action descriptions 126 CONCLUSIONS I have shown that the analysis of purpose clauses lends support to the proposal of using generation and enablement to model actions, and that the interpretation of purpose clauses originates specific inferences: I have illustrated two of them, that can be seen as examples of accommodation processes (Lewis, 1979), and that show how the bearer's inference processes are directed by the goal(s) s/he is adopting. Future work includes fully developing the action rep- resentation formalism, and the algorithm, especially the part regarding computing assumptions. ACKNOWLEDGEMENTS For financial support I acknowledge DARPA grant no. N0014-90-J-1863 and ARt grant no. DAALO3-89- C0031PR1. Thanks to Bonnie Webber for support, in- sights and countless discussions, and to all the members of the AnimNL group, in particular to Mike White. Fi- nally, thanks to the Dipartimento di Informatica - Uni- versita' di Torino - Italy for making their computing environment available to me, and in particular thanks to Felice Cardone, Luca Console, Leonardo Lesmo, and Vincenzo Lombardo, who helped me through a last minute computer crash. References (Allen, 1984) James Allen. Towards a general theory of action and time. Artificial Intelligence, 23:123- 154, 1984. (Alterman eta/., 1991) Richard Alterman, Roland Zito- Wolf, and Tamitha Carpenter. Interaction, Com- prehension, and Instruction Usage. Technical Re- port CS-91-161, Dept. of Computer Science, Cen- ter for Complex Systems, Brandeis University, 1991. (Badler et al., 1990) Norman Badler, Bonnie Webber, Jeff Esakov, and Jugal Kalita. Animation from in- slzuctions. In Badler, Barsky, and Zeltzer, editors, Making them Move, MIT Press, 1990. (Balkanski, 1990) Cecile Balkanski. Modelling act-type relations in collaborative activity. Technical Re- port TR-23-90, Center for Research in Computing Technology, Harvard University, 1990. (Balkanski, 1991) Cecile Balkanski. Logical form of complex sentences in task-oriented dialogues. In Proceedings of the 29th Annual Meeting of the ACL, Student Session, 1991. (Brachman et al., 1983) R. Brachman, R.Fikes, and H. Levesque. KRYPTON: A Functional Approach to Knowledge Representation. Technical Re- port FLAIR 16, Fairchild Laboratories for Artificial Intelligence, Palo Alto, California, 1983. (Chapman, 1991) David Chapman. Vision, Instruction andAction. Cambridge: MIT Press, 1991. 127 (Cohen and Levesque, 1990) Philip Cohen and Hector Levesque. Rational Interaction as the Basis for Communication. In J. Morgan, P. Cohen, and M. 
Pollack, editors, Intentions in Communication, MIT Press, 1990. (Di Eugenio, 1992) Barbara DiEugenio. Goals andAc- tions in Natural Language Instructions. Technical Report MS-CIS-92-07, University of Pennsylvania, 1992. (Di Eugenio and White, 1992) Barbara Di Eugenio and Michael White. On the Interpretation of Natural Language Instructions. 1992. COLING 92. (Goldman, 1970) Alvin Goldman. A Theory of Hwnan Action. Princeton University Press, 1970. (Grosz and Sidner, 1990) Barbara Grosz and Candace Sidner. Plans for Discourse. In J. Morgan, P. Co- hen, and M. Pollack, editors, Intentions in Commu- nication, MIT Press, 1990. (Hegarty, 1990)Michael Hegarty. Secondary Predi- cation and Null Operators in English. 1990. Manuscript. (Jackendoff, 1990) Ray Jackendoff. Semantic Struc- tures. Current Studies in Linguistics Series, The MIT Press, 1990. (Jones, 1985) Charles Jones. Agent, patient, and con- trol into purpose clauses. In Chicago Linguistic Society, 21, 1985. (Lewis, 1979) David Lewis. Scorekeeping in a lan- guage game. Journal of Philosophical Language, 8:339-359, 1979. (Peltason et al., 1989) C. Peltason, A. Schmiedel, C. Kindermann, and J. Quantz. The BACK System Revisited. Technical Report KIT 75, Technische Universitaet Berlin, 1989. (Pollack, 1986) Martha Pollack. Inferring domain plans in question-answering. PhD thesis, University of Pennsylvania, 1986. (Vere and Bickmore, 1990) Steven Vere and Timothy Bickmore. A basic agent. Computational Intel- ligence, 6:41--60, 1990. (Webber and Di Eugenio, 1990) Bonnie Webber and Barbara Di Eugenio. Free Adjuncts in Natural Lan- guage Instructions. In Proceedings Thirteenth In- ternational Conference on Computational Linguis- tics, COLING 90, pages 395--400, 1990. (Webber et al., 1991) Bonnie Webber, Norman Badler, Barbara Di Eugenio, Libby Levison, and Michael white. Instructing Animated Agents. In Proc. US- Japan Workshop on Integrated Systems in Multi- Media Environments. Las Cruces, NM, 1991. (Winograd, 1972) Terry Winograd. Understanding Nat- ural Language. Academic Press, 1972.
INSIDE-OUTSIDE REESTIMATION FROM PARTIALLY BRACKETED CORPORA Fernando Pereira 2D-447, AT~zT Bell Laboratories PO Box 636, 600 Mountain Ave Murray Hill, NJ 07974-0636 pereira@research, art. com Yves Schabes Dept. of Computer and Information Science University of Pennsylvania Philadelphia, PA 19104-6389 schabes@una~i, cis. upenn, edu ABSTRACT The inside-outside algorithm for inferring the pa- rameters of a stochastic context-free grammar is extended to take advantage of constituent in- formation (constituent bracketing) in a partially parsed corpus. Experiments on formal and natu- ral language parsed corpora show that the new al- gorithm can achieve faster convergence and better modeling of hierarchical structure than the origi- nal one. In particular, over 90% test set bracket- ing accuracy was achieved for grammars inferred by our algorithm from a training set of hand- parsed part-of-speech strings for sentences in the Air Travel Information System spoken language corpus. Finally, the new algorithm has better time complexity than the original one when sufficient bracketing is provided. 1. MOTIVATION The most successful stochastic language models have been based on finite-state descriptions such as n-grams or hidden Markov models (HMMs) (Jelinek et al., 1992). However, finite-state mod- els cannot represent the hierarchical structure of natural language and are thus ill-suited to tasks in which that structure is essential, such as lan- guage understanding or translation. It is then natural to consider stochastic versions of more powerful grammar formalisms and their gram- matical inference problems. For instance, Baker (1979) generalized the parameter estimation meth- ods for HMMs to stochastic context-free gram- mars (SCFGs) (Booth, 1969) as the inside-outside algorithm. Unfortunately, the application of SCFGs and the original inside-outside algorithm to natural-language modeling has been so far in- conclusive (Lari and Young, 1990; Jelinek et al., 1990; Lari and Young, 1991). Several reasons can be adduced for the difficul- ties. First, each iteration of the inside-outside al- gorithm on a grammar with n nonterminals may require O(n3[wl 3) time per training sentence w, 128 while each iteration of its finite-state counterpart training an HMM with s states requires at worst O(s2lwl) time per training sentence. That com- plexity makes the training of suffÉciently large grammars computationally impractical. Second, the convergence properties of the algo- rithm sharply deteriorate as the number of non- terminal symbols increases. This fact can be intu- itively understood by observing that the algorithm searches for the maximum of a function whose number of local maxima grows with the number of nonterminals. Finally, while SCFGs do provide a hierarchical model of the language, that structure is undetermined by raw text and only by chance will the inferred grammar agree with qualitative linguistic judgments of sentence structure. For ex- ample, since in English texts pronouns are very likely to immediately precede a verb, a grammar inferred from raw text will tend to make a con- stituent of a subject pronoun and the following verb. We describe here an extension of the inside-outside algorithm that infers the parameters of a stochas- tic context-free grammar from a partially parsed corpus, thus providing a tighter connection be- tween the hierarchical structure of the inferred SCFG and that of the training corpus. 
The al- gorithm takes advantage of whatever constituent information is provided by the training corpus bracketing, ranging from a complete constituent analysis of the training sentences to the unparsed corpus used for the original inside-outside algo- rithm. In the latter case, the new algorithm re- duces to the original one. Using a partially parsed corpus has several advan- tages. First, the the result grammars yield con- stituent boundaries that cannot be inferred from raw text. In addition, the number of iterations needed to reach a good grammar can be reduced; in extreme cases, a good solution is found from parsed text but not from raw text. Finally, the new algorithm has better time complexity when sufficient bracketing information is provided. 2. PARTIALLY BRACKETED TEXT Informally, a partially bracketed corpus is a set of sentences annotated with parentheses marking constituent boundaries that any analysis of the corpus should respect. More precisely, we start from a corpus C consisting of bracketed strings, which are pairs e = (w,B) where w is a string and B is a bracketing of w. For convenience, we will define the length of the bracketed string c by Icl = Iwl. Given a string w = wl ..-WlM, a span of w is a pair of integers (i,j) with 0 < i < j g [w[, which delimits a substring iwj = wi+y ...wj of w. The abbreviation iw will stand for iWl~ I. A bracketing B of a string w is a finite set of spans on w (that is, a finite set of pairs or integers (i, j) with 0 g i < j < [w[) satisfying a consistency condition that ensures that each span (i, j) can be seen as delimiting a string iwj consisting of a se- quence of one of more. The consistency condition is simply that no two spans in a bracketing may overlap, where two spans (i, j) and (k, l) overlap if either i < k < j < l or k < i < l < j. Two bracketings of the same string are said to be compatible if their union is consistent. A span s is valid for a bracketing B if {s} is compatible with B. Note that there is no requirement that a bracket- ing of w describe fully a constituent structure of w. In fact, some or all sentences in a corpus may have empty bracketings, in which case the new al- gorithm behaves like the original one. To present the notion of compatibility between a derivation and a bracketed string, we need first to define the span of a symbol occurrence in a context-free derivation. Let (w,B) be a brack- eted string, and c~0 ==~ al :=¢, ... =~ c~m = w be a derivation of w for (S)CFG G. The span of a symbol occurrence in (~1 is defined inductively as follows: • Ifj -- m, c U = w E E*, and the span of wi in ~j is (i- 1, i). • If j < m, then aj : flAT, aj+l = /3XI"'Xk')', where A -* XI".Xk is a rule of G. Then the span of A in aj is (il,jk), where for each 1 < l < k, (iz,jt) is the span of Xl in aj+l- The spans in (~j of the symbol occurrences in/3 and 7 are the same as those of the corresponding symbols in ~j+l. A derivation of w is then compatible with a brack- eting B of w if the span of every symbol occurrence in the derivation is valid in B. 3. GRAMMAR REESTIMATION The inside-outside algorithm (Baker, 1979) is a reestimation procedure for the rule probabilities of a Chomsky normal-form (CNF) SCFG. It takes as inputs an initial CNF SCFG and a training cor- pus of sentences and it iteratively reestimates rule probabilities to maximize the probability that the grammar used as a stochastic generator would pro- duce the corpus. 
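Before turning to the reestimation formulas, the span and compatibility definitions of Section 2 can be made concrete with a short sketch. It simply transcribes the overlap test, the consistency condition on bracketings, and span validity as stated above; the Python rendering and the function names are ours, not part of the original presentation.

```python
def overlaps(s, t):
    """Spans (i, j) and (k, l) overlap iff i < k < j < l or k < i < l < j."""
    (i, j), (k, l) = s, t
    return i < k < j < l or k < i < l < j

def consistent(spans):
    """A bracketing is consistent iff no two of its spans overlap."""
    spans = list(spans)
    return not any(overlaps(spans[a], spans[b])
                   for a in range(len(spans))
                   for b in range(a + 1, len(spans)))

def valid(span, bracketing):
    """A span s is valid for bracketing B iff {s} is compatible with B,
    i.e. s overlaps no span already in B."""
    return all(not overlaps(span, b) for b in bracketing)
```

For example, with B = {(0, 2), (2, 5)} the span (1, 3) is not valid, since it overlaps (0, 2), while (0, 5) is valid; with an empty bracketing every span is valid, which is why the extended algorithm reduces to the original one in that case.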
A reestimation algorithm can be used both to refine the parameter estimates for a CNF SCFG derived by other means (Fujisaki et al., 1989) or to infer a grammar from scratch. In the latter case, the initial grammar for the inside-outside algorithm consists of all possible CNF rules over given sets N of nonterminals and \Sigma of terminals, with suitably assigned nonzero probabilities. In what follows, we will take N, \Sigma as fixed, n = |N|, t = |\Sigma|, and assume enumerations N = {A_1, ..., A_n} and \Sigma = {b_1, ..., b_t}, with A_1 the grammar start symbol. A CNF SCFG over N, \Sigma can then be specified by the n^3 + nt probabilities B_{p,q,r} of each possible binary rule A_p \rightarrow A_q A_r and U_{p,m} of each possible unary rule A_p \rightarrow b_m. Since for each p the parameters B_{p,q,r} and U_{p,m} are supposed to be the probabilities of different ways of expanding A_p, we must have for all 1 \le p \le n

\sum_{q,r} B_{p,q,r} + \sum_{m} U_{p,m} = 1    (7)

For grammar inference, we give random initial values to the parameters B_{p,q,r} and U_{p,m} subject to the constraints (7). The intended meaning of rule probabilities in a SCFG is directly tied to the intuition of context-freeness: a derivation is assigned a probability which is the product of the probabilities of the rules used in each step of the derivation. Context-freeness together with the commutativity of multiplication thus allow us to identify all derivations associated to the same parse tree, and we will speak indifferently below of derivation and analysis (parse tree) probabilities. Finally, the probability of a sentence or sentential form is the sum of the probabilities of all its analyses (equivalently, the sum of the probabilities of all of its leftmost derivations from the start symbol).

I^c_p(i-1, i) = U_{p,m}   where c = (w, B) and b_m = w_i    (1)
I^c_p(i, k) = \delta(i, k) \sum_{q,r} \sum_{i<j<k} B_{p,q,r} I^c_q(i, j) I^c_r(j, k)    (2)
O^c_p(0, |c|) = 1 if p = 1, 0 otherwise    (3)
O^c_p(i, k) = \delta(i, k) \sum_{q,r} ( \sum_{j=0}^{i-1} O^c_q(j, k) I^c_r(j, i) B_{q,r,p} + \sum_{j=k+1}^{|c|} O^c_q(i, j) B_{q,p,r} I^c_r(k, j) )    (4)
\hat{B}_{p,q,r} = [ \sum_{c \in C} (1/P^c) \sum_{0 \le i < j < k \le |c|} B_{p,q,r} I^c_q(i, j) I^c_r(j, k) O^c_p(i, k) ] / [ \sum_{c \in C} P^c_p / P^c ]    (5)
\hat{U}_{p,m} = [ \sum_{c \in C} (1/P^c) \sum_{1 \le i \le |c|, c = (w,B), w_i = b_m} U_{p,m} O^c_p(i-1, i) ] / [ \sum_{c \in C} P^c_p / P^c ]    (6)
where P^c = I^c_1(0, |c|) and P^c_p = \sum_{0 \le i < j \le |c|} I^c_p(i, j) O^c_p(i, j)

Table 1: Bracketed Reestimation

3.1. The Inside-Outside Algorithm

The basic idea of the inside-outside algorithm is to use the current rule probabilities and the training set W to estimate the expected frequencies of certain types of derivation step, and then compute new rule probability estimates as appropriate ratios of those expected frequency estimates. Since these are most conveniently expressed as relative frequencies, they are a bit loosely referred to as inside and outside probabilities. More precisely, for each w \in W, the inside probability I^w_p(i, j) estimates the likelihood that A_p derives {}_iw_j, while the outside probability O^w_p(i, j) estimates the likelihood of deriving the sentential form {}_0w_i A_p {}_jw from the start symbol A_1.

3.2. The Extended Algorithm

In adapting the inside-outside algorithm to partially bracketed training text, we must take into account the constraints that the bracketing imposes on possible derivations, and thus on possible phrases. Clearly, nonzero values for I^c_p(i, j) or O^c_p(i, j) should only be allowed if {}_iw_j is compatible with the bracketing of w, or, equivalently, if (i, j) is valid for the bracketing of w.
There- fore, we will in the following assume a corpus C of bracketed strings c = (w, B), and will modify the standard formulas for the inside and outside prob- abilities and rule probability reestimation (Baker, 1979; Lari and Young, 1990; Jelinek et al., 1990) to involve only constituents whose spans are com- patible with string bracketings. For this purpose, for each bracketed string c = (w, B) we define the auxiliary function 1 if (i,j) is valid for b E B ~(i,j) = 0 otherwise The reestimation formulas for the extended algo- rithm are shown in Table 1. For each bracketed sentence c in the training corpus, the inside prob- abilities of longer spans of c are computed from those for shorter spans with the recurrence given by equations (1) and (2). Equation (2) calculates the expected relative frequency of derivations of iwk from Ap compatible with the bracketing B of c = (w, B). The multiplier 5(i, k) is i just in case (i, k) is valid for B, that is, when Ap can derive iwk compatibly with B. Similarly, the outside probabilities for shorter spans of c can be computed from the inside prob- abilities and the outside probabilities for longer spans with the recurrence given by equations (3) and (4). Once the inside and outside probabili- ties computed for each sentence in the corpus, the ^ reestimated probability of binary rules, Bp,q,r, and the reestimated probability of unary rules, (Jp,ra, are computed by the reestimation formulas (5) and (6), which are just like the original ones (Baker, 1979; Jelinek et al., 1990; Lari and Young, 1990) except for the use of bracketed strings instead of unbracketed ones. The denominator of ratios (5) and (6) estimates the probability that a compatible derivation of a bracketed string in C will involve at least one ex- pansion of nonterminal Ap. The numerator of (5) estimates the probability that a compatible deriva- tion of a bracketed string in C will involve rule Ap --* Aq At, while the numerator of (6) estimates • the probability that a compatible derivation of a string in C will rewrite Ap to b,n. Thus (5) es- timates the probability that a rewrite of Ap in a compatible derivation of a bracketed string in C will use rule Ap --~ Aq At, and (6) estimates the probability that an occurrence of Ap in a compat- ible derivation of a string in in C will be rewritten to bin. These are the best current estimates for the binary and unary rule probabilities. The process is then repeated with the reestimated probabilities until the increase in the estimated probability of the training text given the model becomes negligible, or, what amounts to the same, the decrease in the cross entropy estimate (nega- tive log probability) E log pc H(C,G) = (8) Icl c6C becomes negligible. Note that for comparisons with the original algorithm, we should use the cross-entropy estimate /~(W, G) of the unbrack- eted text W with respect to the grammar G, not (8). 131 3.3. Complexity Each of the three steps of an iteration of the origi- nal inside-outside algorithm -- computation of in- side probabilities, computation of outside proba- bilities and rule probability reestimation - takes time O(Iwl 3) for each training sentence w. Thus, the whole algorithm is O(Iw[ 3) on each training sentence. However, the extended algorithm performs better when bracketing information is provided, because it does not need to consider all possible spans for constituents, but only those compatible with the training set bracketing. 
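To make this concrete, the following sketch shows how the span filter δ(i, k) enters the inside pass of equations (1) and (2). It is only an illustrative rendering, not the authors' implementation: it assumes the `valid` test sketched earlier, dense NumPy arrays B of shape (n, n, n) and U of shape (n, t) for the rule probabilities, and a map `sym_index` from terminal symbols to their indices, all of which are our choices.

```python
import numpy as np

def inside_probs(w, brackets, B, U, sym_index):
    """Bracketed inside pass: I[p, i, k] estimates the likelihood that A_p
    derives the substring w[i:k], with spans that are invalid for the
    bracketing masked out (delta(i, k) = 0), as in equations (1) and (2)."""
    n, L = B.shape[0], len(w)
    I = np.zeros((n, L + 1, L + 1))
    # Equation (1): single-word spans are covered by unary rules
    # (a one-word span can never overlap another span, so no mask is needed).
    for i in range(1, L + 1):
        I[:, i - 1, i] = U[:, sym_index[w[i - 1]]]
    # Equation (2): wider spans, computed only where delta(i, k) = 1.
    for width in range(2, L + 1):
        for i in range(L - width + 1):
            k = i + width
            if not valid((i, k), brackets):   # delta(i, k) = 0: skip this span
                continue
            for j in range(i + 1, k):
                # sum over q, r of B[p, q, r] * I[q, i, j] * I[r, j, k]
                I[:, i, k] += np.einsum('pqr,q,r->p', B, I[:, i, j], I[:, j, k])
    return I
```

The outside pass (equations (3) and (4)) and the reestimation ratios (5) and (6) admit the same masking, and the skip on invalid spans is exactly where the savings come from: the fewer spans the bracketing licenses, the less work each iteration does, as quantified next.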
In the limit, when the bracketing of each training sentence comes from a complete binary-branching analysis of the sen- tence (a full binary bracketing), the time of each step reduces to O([w D. This can be seen from the following three facts about any full binary brack- eting B of a string w: 1. B has o(Iwl) spans; 2. For each (i, k) in B there is exactly one split point j such that both (i, j) and (j, k) are in 3. Each valid span with respect to B must al- ready be a member of B. Thus, in equation (2) for instance, the number of spans (i, k) for which 5(i, k) • 0 is O([eD, and there is a single j between i and k for which 6(i, j) ~ 0 and 5(j,k) ~ 0. Therefore, the total time to compute all the I~(i, k) is O(Icl). A simi- lar argument applies to equations (4) and (5). Note that to achieve the above bound as well as to take advantage of whatever bracketing is available to improve performance, the implementation must preprocess the training set appropriately so that the valid spans and their split points are efficiently enumerated. 4. EXPERIMENTAL EVALUATION The following experiments, although preliminary, give some support to our earlier suggested advan- tages of the inside-outside algorithm for partially bracketed corpora. The first experiment involves an artificial exam- ple used by Lari and Young (1990) in a previous evaluation of the inside-outside algorithm. In this case, training on a bracketed corpus can lead to a good solution while no reasonable solution is found training on raw text only. The second experiment uses a naturally occurring corpus and its partially bracketed version provided by the Penn Treebank (Brill et al., 1990). We compare the bracketings assigned by grammars in- ferred from raw and from bracketed training mate- rial with the Penn Treebank bracketings of a sep- arate test set. To evaluate objectively the accuracy of the analy- ses yielded by a grammar G, we use a Viterbi-style parser to find the most likely analysis of each test sentence according to G, and define the bracket- ing accuracy of the grammar as the proportion of phrases in those analyses that are compatible in the sense defined in Section 2 with the tree bank bracketings of the test set. This criterion is closely related to the "crossing parentheses" score of Black et al. (1991). 1 In describing the experiments, we use the nota- tion GR for the grammar estimated by the original inside-outside algorithm, and GB for the grammar estimated by the bracketed algorithm. 4.1. Inferring the Palindrome Lan- guage We consider first an artificial language discussed by Lari and Young (1990). Our training corpus consists of 100 sentences in the palindrome lan- guage L over two symbols a and b L - (ww R I E {a,b}'}. randomly generated S with the SCFG °~A C S°~BD S °-~ AA S BB C*-~SA D!+SB A *-~ a B&b 1 Since the grammar inference procedure is restricted to Chomsky normal form grannnars, it cannot avoid difficult decisions by leaving out brackets (thus making flatter parse trees), as hunmn annotators often do. Therefore, the recall component in Black et aL's figure of merit for parser is not needed. 132 The initial grammar consists of all possible CNF rules over five nonterminals and the terminals a and b (135 rules), with random rule probabilities. As shown in Figure 1, with an unbracketed train- ing set W the cross-entropy estimate H(W, GR) re- mains almost unchanged after 40 iterations (from 1.57 to 1.44) and no useful solution is found. 
In contrast, with a fully bracketed version C of the same training set, the cross-entropy estimate /~(W, GB) decreases rapidly (1.57 initially, 0.88 af- ter 21 iterations). Similarly, the cross-entropy esti- mate H(C, GB) of the bracketed text with respect to the grammar improves rapidly (2.85 initially, 0.89 after 21 iterations). 1.6 1.5 1.4 1.3 G 1.2 < " I . i I 0.9 0.8 ~-... \ \ ! \ Raw -- Bracketed ..... % i ! i ! , , ! 1 5 10 15 20 25 30 35 40 iteration Figure 1: Convergence for the Palindrome Corpus The inferred grammar models correctly the palin- drome language. Its high probability rules (p > 0.1, pip' > 30 for any excluded rule p') are S --*AD S -*CB B--*SC D--*SA A --* b B -* a C --* a D ---* b which is a close to optimal CNF CFG for the palin- drome language. The results on this grammar are quite sensitive to the size and statistics of the training corpus and the initial rule probability assignment. In fact, for a couple of choices of initial grammar and corpus, the original algorithm produces gram- mars with somewhat better cross-entropy esti- mates than those yielded by the new one. How- ever, in every case the bracketing accuracy on a separate test set for the result of bracketed training is above 90% (100% in several cases), in contrast to bracketing accuracies ranging between 15% and 69% for unbracketed training. 4.2. Experiments on the ATIS Cor- pus For our main experiment, we used part-of-speech sequences of spoken-language transcriptions in the Texas Instruments subset of the Air Travel Infor- mation System (ATIS) corpus (Hemphill et el., 1990), and a bracketing of those sequences derived from the parse trees for that subset in the Penn Treebank. Out of the 770 bracketed sentences (7812 words) in the corpus, we used 700 as a training set C and 70 (901 words) as a test set T. The following is an example training string ( ( ( VB ( DT ~NS ( IB ( ( NN ) ( NN CD ) ) ) ) ) ) . ) corresponding to the parsed sentence (((List (the fares (for ((flight) (number 891)))))) .) The initial grammar consists of all 4095 possible CNF rules over 15 nonterminals (the same number as in the tree bank) and 48 terminal symbols for part-of-speech tags. A random initial grammar was trained separately on the unbracketed and bracketed versions of the training corpus, yielding grammars GR and GB. 4.6 4.4 4.2 4 3.a 3.6 3.4 3.2 3 2.8 1 i ! | I i ! ! ~, Raw -- ~ Bracketed ..... \ \. I I I I I | I I0 20 30 40 50 60 70 75 iteration Figure 2: Convergence for the ATIS Corpus Figure 2 shows that H(W, GB) initially decreases faster than the/:/(W, GR), although eventually the 133 two stabilize at very close values: after 75 itera- tions, /I(W, GB) ~ 2.97 and /:/(W, GR) ~ 2.95. However, the analyses assigned by the resulting grammars to the test set are drastically different. I00 80 u 60 o o 40 rd 20 ' Raw ' ' ' ' ' Bracketed ..... ., .......... ~"° .... / l I I I i ' ' i I0 20 30 40 50 60 70 75 iteration Figure 3: Bracketing Accuracy for the ATIS Cor- pus With the training and test data described above, the bracketing accuracy of GR after 75 iterations was only 37.35%, in contrast to 90.36% bracket- ing accuracy for GB. Plotting bracketing accu- racy against iterations (Figure 3), we see that un- bracketed training does not on the whole improve accuracy. On the other hand, bracketed training steadily improves accuracy, although not mono- tonically. It is also interesting to look at some the differences between GR and GB, as seen from the most likely analyses they assign to certain sentences. 
Table 2 shows two bracketed test sentences followed by their most likely GR and GB analyses, given for readability in terms of the original words rather than part-of-speech tags. For test sentence (A), the only GB constituent not compatible with the tree bank bracketing is (Delta flight number), although the con- stituent (the cheapest) is linguistically wrong. The appearance of this constituent can be ex- plained by lack of information in the tree bank about the internal structure of noun phrases, as exemplified by tree bank bracketing of the same sentence. In contrast, the GR analysis of the same string contains 16 constituents incompatible with the tree bank. For test sentence (B), the G~ analysis is fully com- patible with the tree bank. However, the Grt anal- ysis has nine incompatible constituents, which for (A) Ga (I would (like (to (take (Delta ((flight number) 83)) (to Atlanta)))).) (What ((is (the cheapest fare (I can get)))) ?) (I (would (like ((to ((take (Delta flight)) (number (83 ((to Atlanta) .))))) ((What (((is the) cheapest) fare)) ((I can) (get ?))))))) (((I (would (like (to (take (((Delta (flight number)) 83) (to Atlanta))))))) .) ((What (is (((the cheapest) fare) (I (can get))))) ?)) GB (B) ((Tell me (about (the public transportation ((from SF0) (to San Francisco))))).) GR (Tell ((me (((about the) public) transportation)) ((from SF0) ((to San) (Francisco .))))) GB ((Tell (me (about (((the public) transportation) ((from SFO) (to (San Francisco))))))) .) Table 2: Comparing Bracketings example places Francisco and the final punctua- tion in a lowest-level constituent. Since final punc- tuation is quite often preceded by a noun, a gram- mar inferred from raw text will tend to bracket the noun with the punctuation mark. This experiment illustrates the fact that although SCFGs provide a hierarchical model of the lan- guage, that structure is undetermined by raw text and only by chance will the inferred grammar agree with qualitative linguistic judgments of sen- tence structure. This problem has also been previ- ously observed with linguistic structure inference methods based on mutual information. Mater- man and Marcus (1990) addressed the problem by specifying a predetermined list of pairs of parts of speech (such as verb-preposition, pronoun-verb) that can never be embraced by a low-level con- stituent. However, these constraints are stipulated in advance rather than being automatically de- rived from the training material, in contrast with what we have shown to be possible with the inside- outside algorithm for partially bracketed corpora. 5. CONCLUSIONS AND FURTHER WORK We have introduced a modification of the well- known inside-outside algorithm for inferring the parameters of a stochastic context-free grammar that can take advantage of constituent informa- tion (constituent bracketing) in a partially brack- eted corpus. The method has been successfully applied to SCFG inference for formal languages and for part-of-speech sequences derived from the ATIS 134 spoken-language corpus. The use of partially bracketed corpus can reduce the number of iterations required for convergence of parameter reestimation. In some cases, a good solution is found from a bracketed corpus but not from raw text. Most importantly, the use of par- tially bracketed natural corpus enables the algo- rithm to infer grammars specifying linguistically reasonable constituent boundaries that cannot be inferred by the inside-outside algorithm on raw text. 
While none of this is very surprising, it sup- plies some support for the view that purely unsu- pervised, self-organizing grammar inference meth- ods may have difficulty in distinguishing between underlying grammatical structure and contingent distributional regularities, or, to put it in another way, it gives some evidence for the importance of nondistributional regularities in language, which in the case of bracketed training have been sup- plied indirectly by the linguists carrying out the bracketing. Also of practical importance, the new algorithm can have better time complexity for bracketed text. In the best situation, that of a training set with full binary-branching bracketing, the time for each iteration is in fact linear on the total length of the set. These preliminary investigations could be ex- tended in several ways. First, it is important to determine the sensitivity of the training algorithm to the initial probability assignments and training corpus, as well as to lack or misplacement of brack- ets. We have started experiments in this direction, but reasonable statistical models of bracket elision and misplacement are lacking. Second, we would like to extend our experiments to larger terminal vocabularies. As is well known, this raises both computational and data sparse- ness problems, so clustering of terminal symbols will be essential. Finally, this work does not address a central weak- ness of SCFGs, their inability to represent lex- ical influences on distribution except by a sta- tistically and computationally impractical pro- liferation of nonterminal symbols. One might instead look into versions of the current algo- rithm for more lexically-oriented formalisms such as stochastic lexicalized tree-adjoining grammars (Schabes, 1992). ACKNOWLEGMENTS We thank Aravind Joshi and Stuart Shieber for useful discussions, and Mitch Marcus, Beatrice Santorini and Mary Ann Marcinkiewicz for mak- ing available the ATIS corpus in the Penn Tree- bank. The second author is partially supported by DARPA Grant N0014-90-31863, ARO Grant DAAL03-89-C-0031 and NSF Grant IRI90-16592. REFERENCES J.K. Baker. 1979. Trainable grammars for speech recognition. In Jared J. Wolf and Dennis H. Klatt, editors, Speech communication papers presented at the 97 ~h Meeting of the Acoustical Society of America, MIT, Cambridge, MA, June. E. Black, S. Abney, D. Flickenger, R. Grishman, P. Harrison, D. Hindle, R. Ingria, F. Jelinek, J. Klavans, M. Liberman, M. Marcus, S. Roukos, B. Santorini, and T. Strzalkowski. 1991. A pro- cedure for quantitatively comparing the syntactic coverage of english grammars. In DARPA Speech and Natural Language Workshop, pages 306-311, Pacific Grove, California. Morgan Kaufmann. T. Fujisaki, F. Jelinek, J. Cocke, E. Black, and T. Nishino. 1989. A probabilistic parsing method for sentence disambiguation. In Proceedings of the International Workshop on Parsing Technologies, Pittsburgh, August. Charles T. Hemphill, John J. Godfrey, and George R. Doddington. 1990. The ATIS spoken language systems pilot corpus. In DARPA Speech and Natural Language Workshop, Hidden Valley, Pennsylvania, June. F. Jelinek, J. D. Lafferty, and R. L. Mercer. 1990. Basic methods of probabilistic context free gram- mars. Technical Report RC 16374 (72684), IBM, Yorktown Heights, New York 10598. Frederick Jelinek, Robert L. Mercer, and Salim Roukos. 1992. Principles of lexical language mod- eling for speech recognition. In Sadaoki Furui and M. 
Mohan Sondhi, editors, Advances in Speech Signal Processing, pages 651-699. Marcel Dekker, Inc., New York, New York. K. Lari and S. J. Young. 1990. The estimation of stochastic context-free grammars using the Inside- Outside algorithm. Computer Speech and Lan- guage, 4:35-56. K. Lari and S. J. Young. 1991. Applications of stochastic context-free grammars using the Inside- Outside algorithm. Computer Speech and Lan- guage, 5:237-257. David Magerman and Mitchell Marcus. 1990. Parsing a natural language using mutual informa- tion statistics. In AAAI-90, Boston, MA. Yves Schabes. 1992. Stochastic lexicalized tree- adjoining grammars. In COLING 92. Forthcom- ing. T. Booth. 1969. Probabilistic representation of formal languages. In Tenth Annual IEEE Sympo- sium on Switching and Automata Theory, Octo- ber. Eric Brill, David Magerman, Mitchell Marcus, and Beatrice Santorini. 1990. Deducing linguistic structure from the statistics of large corpora. In DARPA Speech and Natural Language Workshop. Morgan Kaufmann, Hidden Valley, Pennsylvania, JuDe. 135
Linear Context-Free Rewriting Systems and Deterministic Tree-Walking Transducers* David J. Weir School of Cognitive and Computing Sciences University of Sussex Falmer, Brighton BN1 9QH davidw @ cogs. sussex, ac. uk Abstract We show that the class of string languages gener- ated by linear context-free rewriting systems is equal to the class of output languages of deterministic tree- walking transducers. From equivalences that have pre- viously been established we know that this class of lan- guages is also equal to the string languages generated by context-free hypergraph grammars, multicompo- nent tree-adjoining grammars, and multiple context- free grammars and to the class of yields of images of the regular tree languages under finite-copying top- down tree transducers. Introduction In [9] a comparison was made of the generative capac- ity of a number of grammar formalisms. Several were found to share a number of characteristics (described below) and the class of such formalisms was called lin- ear context-free rewriting systems. This paper shows how the class of string languages generated by linear context-free rewriting systems relates to a number of other systems that have been studied by formal lan- guage theorists. In particular, we show that the class of string languages generated by linear context-free rewriting systems is equal to the class of output lan- guages of deterministic tree-walking transducers [1]. A number of other equivalences have already been established. In [10] it was shown that linear context- free rewriting systems and multicomponent tree ad- joining grammars [6] generate the same string lan- guages. The multiple context-free grammars of [7] are equivalent to linear context-free systems. This follows *I would like to thank Joost Engelfriet for drawing my attention to context-free hypergraph grammars and their relationship to deterministic tree-walking automata. from the fact that multiple context-free grammars are exactly that subclass of the linear context-free rewrit- ing systems in which the objects generated by the grammar are tuples of strings. The class of output languages of deterministic tree-walking transducers is known to be equal to the class of yields of images of the regular tree languages under finite-copying top-down tree transducers [4] and in [3] it was shown that it also equal to the string languages generated by context-free hypergraph grammars [2, 5]. We therefore have a number of Characterizations of the same class of languages and results that have been established for the class of languages associated with one system carry over to the others. This is particu- larly fruitful in this case since the output languages of deterministic tree-walking transducers have been well studied (see [4]). In the remainder of the paper we describe linear context-free rewriting systems and deterministic tree- walking transducers and outline the equivalence proof. We then describe context-free hypergraph grammars and observe that they are a context-free rewriting sys- tem. Linear Context-Free Rewriting Systems Linear context-free rewriting systems arose from the observation that a number of grammatical formalisms share two properties. 1. Their derivation tree sets can be generated by a context-free grammar. 2. Their composition operations are size-preserving, i.e., when two or more substructures are com- bined only a bounded amount of structure is added or deleted. 136 Examples of formalisms that. 
satisfy these condi- tions are head grammars [8], tree adjoining gram- mars [6], multicomponent tree adjoining grammars [6] and context-free hypergraph grammars. It was shown [9] that a system satisfying the above conditions generates languages that are semilinear and can be recognized in polynomial time. The definition of lin- ear context-free rewriting systems is deliberately not specific about the kinds of structures being manipu- lated. In the case of head grammars these are pairs of strings whereas tree adjoining grammars manipulate trees and context-free hypergraph grammars manipu- late graphs. In [9] size-preserving operations are defined for ar- bitrary structures in terms of properties of the cor- responding functions over the terminal yield of the structures involved. The yield is taken to be a tuple of terminal strings. We call the function associated with a composition operation the yield function of that operation. The yield function of Of of a com- position operation f gives the yield of the structure f(cl,ldots, cn) based on the yield of the structures el, • •., am. Let ~ be an alphabet of terminal symbols, f is an n-ary linear regular operation over tuples of strings in ~ if it can be defined with an equation of the form f((xl,1,..., xl,k,),..., (ran,l,..., xn,k,,)) ---- (tl,...,tk) where each k i > O, n >_ 0 and each ti is a string of variables (x's) and symbols in ~ and where the equa- tion is regular (all the variables appearing on one side appear on the other) and linear (the variables appear only once on the left and right). For example, the operations of head grammars can be define with the equations1: wrap((Xl, ~2), (Yl, Y2)) : (XlYl, Y2X2) concl((xl, x2) , (Yl, Y2)) = (xx, x2y, y2) C0n¢2((,~1, X2) , (Yl, Y2)) = (2?IX2Yl, Y2) Thus, we have wrap( (ab, ca), (ac, bc) ) = (abac, bcca) concl( (ab, ca), (ac, bc) ) = (ab, caaebc) conc2( (ab, ca), (ac, be)) = (abcaac, be) A generalized context-free grammar (gcfg) [8] is denoted G = (VN, S, F, P) where 1These operations differ from (but are equivalent to) those used in [8] VN is a finite set of nonterminal symbols, S is a distinguished member of VN, F is a finite set of function symbols and P is a finite set of productions of the form A --+ f(A1,..., A,) where n > 0, f C F, and A, AI,...,Am C VN. With a grammatical formalism we associate an in- terpretation function m that maps symbols in F onto the formalism's composition operations. For ex- ample, in a typical head grammar the set F might include { W, el, C2} where re(W) = wrap, m(Cl) = concl and re(C2) = conc2. A formalism is a linear context-free rewriting system (lefts) if every grammar can be expressed as a gcfg and its interpretation function m maps sym- bols onto operations whose yield functions are linear regular operations. In order to simplify the remaining discussion we as- sume that m maps directly onto the yield functions themselves. The language L(G) generated by a gcfg G = (VN, S, F, P) with associated interpretation function m is defined as L(G) = where * A =:=V re(f) G ifA--~f0 EP * A ~ m(/)(tl,...,tn) G ifA --* f(A1,...,An) E P and Ai ~--~ ti (l < i < n). G We denote the class of all languages generated by lefrs as LCFRL. Deterministic Tree-Walking Transduc- ers A deterministic tree-walking transducer is an automa- ton whose inputs are derivation trees of some context- free grammar. The automaton moves around the tree starting at the root. 
At each point in the computation, depending on the label of the current node and the state of the finite state control, the automaton moves 137 up, down or stays at the current node and outputs a string. The computation ends when the machine tries to move to the parent of the root node. We denote a deterministic tree-walking trans- ducer (dtwt) by M - (Q, G, A, 6, q0, F) where Q is a finite set of states, G = (VN, VT, S, P) is a context-free grammar without e-rules, A is a finite set of output symbols, 6 : Q × (VN U VT) ---+ Q × D × A* is the transition function where D = {stay, up} O {d(k) [ k > 1 }, q0 E Q is the initial state and F C_ Q is the set of final states. A configuration of M is a 4-tuple (q, 7, r/, w) where q E Q is the current state, 7 is the derivation tree of G under consideration, r/is a node in 7 or T (where 1" can be thought of as the parent of the root ofT), and w E A* is the output string produced up to that point in the computation. We have (q, 7, r/, w) ['-M (qt, "[, r/,, WW/) if the label of r/is X, ~f(q, X) = (q', d, w') such that when d = stay then T/' = r/, when d = d(i) then 7/' is the ith child of r/(if it exists), and when d = up then r/' is the parent of r/(T if r/is the root of 7). The output language OUT(M) of M is the set of strings: {weA*I (q0,7, r/r, e) b~/ (q f, 7, T, w), ql E F and 7 is a derivation tree of G with root r/r } where F-~ is the reflexive transitive closure of ['-M" We denote the class of all languages OUT(M) where M is a dtwt as OUT(DTWT). Consider the dtwt M = ({qo, ql,q2, q3},G,{a,b,c,d},~f, qo,{q3}) where G = ({S},{e},S,{S-*A,A-~A,A-*e}) and the relevant component of 6 is defined as follows. 6(q0, s) = (q0, d(1), e) 6(q0, A) = (q0, d(1), a) 6(ql, S) = (q2, d(1), e) 6(q2, A) = (q2, d(1), c) 6(q3, 5') -~ (q3, up, e) 6(qo, e) = (ql, up, e) 6(qz, A) = (qz, up, b) ~f(q~, e) = (q3, up, e) 6(q3, A) = (q3, up, d) It can be seen that OUT(M) = { anbnc'~d '~ In > 1 }. Equivalence In this section we outline a two part proof that OUT(DTWT) = LCFRL. OUT(DTWT) C_ LCFRL Consider a dtwt M = (Q, E, G, A, 6, qo, F) where G = (VN, VT, S, P). For convenience we assume that M is a dtwt without stay moves (see Lemma 5.1 in [3] for proof that this can be done). Given a derivation tree of G, and a node r/in this tree, we record the strings contributed to the output between the first and last visit to nodes in the subtree rooted at r/. These contributed terminal strings can be viewed as a k tuple where k is the number of times that the transducer enters and then leaves the subtree. For each production X --* X1 ...Xn in P and each p E Q we call C((X,p, .) -.+ (X1, e, 0)... (Xn, ¢, 0)) C((A, p, .) (XI, e, 0)... (Xn, e, 0)) simulates all sub- computations of M that start in state p at a node labelled X that has been expanded using the pro- duction X --* X1...Xn. The node labelled A may be visited several times, but each time the machine must be in a different state (otherwise, being deter- ministic, it would loop indefinitely). The sequence of visits is recorded as a string of states. The compo- nent of the rule that is underlined indicates which of the children or parent is currently being visited. The call C((X, a, ¢) -~ (Xl, al, il)... (Xn, an, in)) is made when a computation is being simulated in which the node labelled A has been visited ]a[ times ([a[ de- notes the length of a) such that on the ith visit the machine was in the state indicated by the ith symbol in a. 
al,..., an are used in a similar way to encode the state of the machine during visits to each child node. ¢ is a string of terms that is used to encode the output produced between the first and last visit to the subtree rooted at the node labelled A. Ultimately, it has the form .tl ....-tk. where each ti encodes the composition of the ith component of the tuple. The notation used for each ti is identical to that used in the equations used to define lefts composition opera- tions given earlier, i.e., each ti is a string of output symbols and x's. il,...,in are used to encode the number of times that a given child has been visited from above. This gives the number of times the sub- tree rooted at that node has been visited and, hence, encodes which component of the tuple was completed most recently. Thus, for each j, 1 _< j _< n, the sim- ulation has moved from the parent to the jth child ij 138 times. This number is used to determine which com- ponent of the tuple derived from the jth node should contribute to the parent's current component. When a move is made from the parent node to the jth child we add the variable xj,/~+x to the term currently being constructed for the parent node. In other words, the next component of the parent output is the ij + l th component of its jth child. The call C((X, a, ¢) --* (X1, oq, ix)... (Xj, aj, ij)... (Xn, an, in)) sumulates the machine visiting the jth child of a node expanded using the rule X --~ X1 ... Xn. From M the gcfg G' is constructed such that G' = (V~, 5", F, P') where vk = {S'}u {(X,a) lXeVNUVTand non-repeating a e Q*} and the procedure C determines P' and F where for each production A -~ X1 ... Xn in P and each p E Q we call C((A,p, .) --* (Xx, c, 0)...(X~, e, 0)) In addition, for each a E VT and each p E Q we call C((a, p, .)) -~ C is defined as follows. Case 1. C((X, ap, ¢) ---* (X1, oq, ix)... (Xn, an, in)) Note that if n = 0 then X E VT, otherwise, X E VN. If 6(p, X) = (q, up, w) then (X, ap) --~ f((Xl, ~1),..., (Xn, otn)) E P' for a new function f E F where re(f) is defined by f((xl,...,mix),..., (xl,.:., mi,)) '= (tl,...,tk) where Cw. = 41 "..." tk'. (note that when ij = 0 for some j then (Xl,..., xij) will appear as e), in addition, for each p' in Q that does not appear in ap call C((X, o~pp', ew.) ---* (Xl, ~1, i1)... (Xn, O~n, in)) Note that • has been placed after ew. This indicates that we have finished with the current component of the tuple. Otherwise, if 6(p,X) = (q,d(j),w) and 1 _< j < n then call c((x, ap, ¢w=j,~j+x) (xt,oq,it)...(xj,ajq,# + 1)...(Xn,o~,~,i,O) Note that if Xj E VT then it is not possible for the machine to move down the tree any further. Case 2. c((x, ¢) --. (X1, (~1, il)... (Xj, ajp, ij)... (Xn, an, in)) If 6(p, Xj) = (q, up, w) then call (Xl, al, il)... (Xj, ajp, ij)... (Xn, an, in)) Note that ¢ will end with xj,ii and the ijth compoent of the yield at As. will end in w. Otherwise, if 6(p, Xj) = (q,d(k), w) then if Xj E VN for each p' in Q and not in aiP call c((x, a, ¢) --. (Xl, ot], it)... (Xj, %pp', ij) . .. (Xn, an, in)) This simulates the next visit to this node (which must be from below) in the (guessed) state p'. In addition to the productions added by C, include in P~ the production S ~ ---. ( S, qootq! ) for each qi E F and a E Q* such that aootqi is non-repeating and /f(q, S) = (qI, up, w) for some w where q is the last symbol in q0a. A complete proof would establish that the following equivalence holds. 
(Aa) ~ (wt,...,w,) if and only if there is a derivation tree 7 of G with root ~?r labelled A such that a = at...an for some al,...,an E Q+ and for each i (1 < i < n) 7, 7, f, where ai = pia[ = a['qi for some c~, a~' E Q*. Consider the application of this construction to ex- ample the dtwt given earlier. The grammar contains the following productions (where productions contain- ing useless nonterminals have been omitted). (S, qoqlq3) --~ A((A, qoqlq2q3)) 139 where fl((Xlj, Xl,2)) -- Xl,lX1,2 (A, qoqlq2qa) --* f2((A, qoqlq2qa)) = (A, qoqlq2q3) --~ f3((e, qoq2)) where = (e, qoq2) ~ f40 where 140 = (e, e). By renaming nonterminal we get the four produc- tions S --* fl(A) A -. f2(A) A ---* f3(e) e ---* f40 LCFRL C_ OUT(DTWT) Consider the gcfg G -- (VN, S, F, P) and mapping m that interprets the symbols in F. Without loss of gen- erality we assume that no nonterminal appears more than once on the right of a production and that for each A E VN there is some rank(A) = k such that only k-tuples are derived from A. We define a dtwt M = (Q, ~, G ~, liT, 6, qo, F) where G ~ is a context-free grammar that generates derivation trees of G in the following way. A derivation involving the use of a production zr will he represented by a tree whose root is labelled by zr = A --* f(A1,..., Am) with n subtrees encoding the derivations from A1,..., An. The roots of these subtrees will be labelled by the n productions used to rewrite the A1,...,An. Let lhs(~r) = A and rhs(~r) = { AI,..., An }. The dtwt M walks around a derivation tree 7 of G' in such a way that it outputs the yield of 7. Each subtree of 7 rooted at a node ~/labelled by the produc- tion ~r will be visited on k = rank(lhsOr)) occasions by M. During the ith visit to the subtree M will output the ith component of the tuple. We therefore include in Q k states { 1,...,k} that are used to keep track of which tuple is being considered. This will gener- ally involve visiting children of y as determined by the equation used to define function used in 7r. Addi- tional states in Q are used to keep track of these visits as follows. When the lth child of T/ has finished its ruth component, M will move back up to y in state (Az,m). Since no nonterminal appears twice on the right of a production it is possible for M to determine the value of l from At while at y. For each production ~r = A --* f(A1,...,An) E P where f is interpreted as the function defined by the equation f((xX,1,.-.,Xl,kl),.-.,(Xnj,..-,Xn,k,))= (tl,...,tk) we include the following components in the definition of 6. For each i (1 < i < k) • if ti = wxl,m¢, where w is a possibly empty ter- minal string then let 6(i, ~) = (m, down(O, w) • if ti = w (in which case it is time to move up the tree) let 6(i, ~r) = (( lhs(Ir), i), up, w) For each B E rhs(~r) and each m, 1 <_ m <_ rank(B), let 6((B, m), 7r) = (q, move, w) where (q, move, w) is determined as follows. For some unique I we know that B is the lth nonterminal on the right-hand side of 7r. There is a unique ti such that ti = ¢lXZ,mw¢2 where w is a possibly empty string of terminals. Case 1:¢2 is empty In this case the ith component of the current node is complete. Thus, q = (lhs(r), i) and move = up. Case 2:¢2 begins with the variable xv,m, In this case the machine M must find the m'th compo- nent of the/'th child. Thus, q = m' and move = d(l'). It should be clear that the start state q0 should be 1 and the set of final states F = { (S, rank(S)) }. A complete proof would involve verifying that the following equivalence holds. 
(Aa) ~ (wl,...,Wn) if and only if there is a derivation tree 7 of G' with root ~r labelled 7r such that lhs(lr) = A and for each i (1 < i < n) (i, 7, ~/r, e) t-~4 ((A, i), 7, t, w~) We apply the construction to the grammar pro- duced in the illustration of the first construction. First, we name the productions of the grammar 7rl = S --~ fl(A) ~r2 = A --* f2(A) 140 ~3 = A ---* f3(e) 7r4 = e --* f40 The construction gives a machine in which the func- tion 5 is defined as follows. di(1, rl) = (1, d(1), e) &(1, ~r2) = (1, d(1), a) 5(2, ~2) = (2, d(1), e) 5(1, 7rz) = (1, d(1), a) 5(2, r3) = (2, d(1), c) 5(1, ~r4) = ((e, 1), up, e) 5(2, ~,) = fie, 2), up, ~) 6((A, 1), rl) = (2, d(1), e) 6((A, 2), 7rl) : ((S, 1), up, e) 6((A, 1), r~) = ((A, 1), up, b) 5((A, 2), 7r2) = ((A, 2), up, d) 5((e, 1), 7rz) = ((A, 1), up, b) 5((e, 2), r3) = ((A, 2), up, d) The context-free grammar whose derivation trees are to be transduced has the following productions. ";l'l "~ 71"2 7l'1 -"+ 7r3 We denote a hypergraph as a five tuple H ( V, E, ~, incident, label) where V is a finite set of nodes, E is a finite set of edges, E is a finite set of edge labels, incident : E --* V* is the incidence function and label : E --+ ~ is the edge labelling function For example, in the above graph V = {vl,v2, vz, v4}, E = {el,e2,e3}, = { a, b, c}, incident(el) = (v2, vl, v4), i,,cide.t(e2) = (v4, vl), incident(e3) = (v3), label(e,) = a, label(e2) - b and label(e3) -- c. A string can be encoded with a string hyper- graph [5]. The string bcaab is encoded with the fol- lowing graph. 71"2 ~ 71"2 71"2 ~ 71"3 71"3 ~ 7i'4 Context-Free Hypergraph Grammars In this section we describe context-free hypergraph gramars since they are an example of a lcfrs involv- ing the manipulation of graphs, zThe class of string languages generated by context-free hypergraph gram- mars is equal to OUT(DTWT) [3] and the above result shows that they are also equal to LCFRS. A directed hypergraph is similar to a standard graph except that its (hyper)edges need not simply go from one node to another but may be incident with any number of nodes. If an edge is incident with n nodes then it is a n-edge. The n nodes that are inci- dent to some edge are linearly ordered. For example, in the figure below, dots denote nodes and labelled square boxes are edges. The edge labelled a is a 3- edge, the edge labelled b is a 2-edge and the edge labelled c is a 1-edge. When the number of nodes incident to an edge exceeds 2, numbered tentacles are used to indicate the nodes that are incident to the edge. The numbers associated with the tentacles com- ing from an edge indicate the linear order of the nodes that are incident to that edge. 2-edges are shown in the standard way and 1-edges can be used as a way of associating labels with nodes as shown. @ 141 b c a a b We denote a context-free hypergraph gram- mar (cfhg) as four tuple G = (VN, VT, S, P) where VN is a finite nonterminal alphabet, VT is a finite terminal alphabet, S E VN is the initial nonterminal and P is a finite set of productions e -* H where H = (V, E, VN O VT, incident, label) is a hypergraph and e E E is a nonterminal edge in H, i.e., label(e) E VN. Consider the application of a production e --* H to a graph H ~ at a node e p in H ~ with the same nonterminal label as e. The resulting graph is obtained from H ~ by replacing e ~ by the graph H with e removed from it. This involves merging of nodes. In particular, the ith node incident with e is merged with the ith node incident with e ~. 
We require that all edges with the same label have the same number of incident nodes. A derivation begins with a graph containing a single edge labelled S and no edges. A derivation is completed when there are no nonterminal nodes in the graph. The string language associated with a cfhg G is de- noted STR(G). The class of languages generated by all cfhg is denoted STR(CFHG). Due to lack of space, rather than a complete formal definition of cfhg derivations, we present an illustra- tive example. Consider the three productions shown below. Note that the edge on the left-hand-side of the production is indicated with a double box. 1 4 a Below we show the steps in a derivation of the string aabbccdd involving these productions. Note that the set of graphs derived corresponds to the string lan- guage { anbncnd n I n > 0 }. D a d a b j ° ~t C d a b 1 C d a a ° b "~..~£ C d d It is clear from their definition that cfhg satisfy the conditions for being a lcfrs given earlier. As has been observed [3] it is possible to represent the set of deriva- tions of a given cfhg with a set of trees that can be generated by a context-free grammar. The composi- tion operation of cfhg in which a node is replaced by a graph is clearly size-preserving since it does not in- volve duplication or deletion of an unbounded number of nodes or edges. Additional Remarks We end by elaborating on the relationship between lcfrs, dtwt and cfhg in terms of the following complex- ity measures. • The maximum of rank(A) nonterminals A of a gcfg. Let LCFRLk be the class of languages gen- erated by gcfg of some lcfrs whose nonterminals have rank k or less, i.e., derive at most k tuples. • The crossing number of a dtwt M. This is the maximum number of times that it visits any given subtree of an input tree. Let OUT(DTWTk) be the class of languages output by dtwt whose crossing number does not exceed k. • The maximum number of tentacles of the nonter- minals of a cfhg. Let STR(CFI-IGk) be the class of languages associated with cfhg whose nonter- minals have at most k tentacles. It has been shown (Theorem 6.1 in [3]) that OUT(DTWTk) = STR(CFHGg.k) = STR(CFHG2k+I) It can be seen from the above constructions that LCFRLk = OUT(DTWTk) = STR(CFHG2k) = STR(CFHG2k+I) References [1] A. V. Aho and J. D. Ullman. Translations on a context-free grammar. Inf. Control, 19:439-475, 1971. [2] M. Bauderon and B. Courcelle. Graph expres- sions and graph rewritings. Math. Syst. Theory, 20:83-127, 1987. [3] J. Engelfriet and L. Heyker. The string generat- ing power of context-free hypergraph grammars. J. Comput. Syst. Sci., 43:328-360, 1991. 142 [4] J. Engelfriet, G. Rozenburg, and G. Slutzki. Tree transducers, I systems, and two-way machines. J. Comput. Syst. Sci., 20:150-202, 1980. [5] A. Habel and H. Kreowski. Some structural as- pects of hypergraph languages generated by hy- peredge replacement. In STACS, 1987. [6] A. K. Joshi, L. S. Levy, and M. Takahashi. Tree adjunct grammars. J. Comput. Syst. Sci., 10(1), 1975. [7] T. Kasami, H. Seki, and M. Fujii. General- ized context-free grammars, multiple context-free grammars and head grammars. Technical report, Department of Information and Computer Sci- ence, Osaka University, Osaka, Japan, 1988. [8] C. Pollard. Generalized Phrase Structure Gram- mars, Head Grammars and Natural Language. PhD thesis, Stanford University, 1984. [9] K. Vijay-Shanker, D. J. Weir, and A. K. Joshi. Characterizing structural descriptions produced by various grammatical formalisms. In 25 th meet- ing Assoc. Comput. Ling., 1987. 
[10] D. J. Weir. Characterizing Mildly Context-Sensitive Grammar Formalisms. PhD thesis, University of Pennsylvania, Philadelphia, PA, 1988.
1992
18
A CONNECTIONIST PARSER FOR STRUCTURE UNIFICATION GRAMMAR James B. Henderson* Department of Computer and Information Science University of Pennsylvania 200 South 33rd Philadelphia, PA 19104, USA ([email protected]) ABSTRACT This paper presents a connectionist syntactic parser which uses Structure Unification Grammar as its grammatical framework. The parser is im- plemented in a connectionist architecture which stores and dynamically manipulates symbolic rep- resentations, but which can't represent arbitrary disjunction and has bounded memory. These problems can be overcome with Structure Unifica- tion Grammar's extensive use of partial descrip- tions. INTRODUCTION The similarity between connectionist models of computation and neuron computation suggests that a study of syntactic parsing in a connection- ist computational architecture could lead to sig- nificant insights into ways natural language can be parsed efficiently. Unfortunately, previous in- vestigations into connectionist parsing (Cottrell, 1989, Fanty, 1985, Selman and Hirst, 1987) have not been very successful. They cannot parse arbi- trarily long sentences and have inadequate gram- mar representations. However, the difficulties with connectionist parsing can be overcome by adopt- ing a different connectionist model of computa- tion, namely that proposed by Shastri and Ajjana- gadde (1990). This connectionist computational architecture differs from others in that it directly manifests the symbolic interpretation of the infor- mation it stores and manipulates. It also shares the massive parallelism, evidential reasoning abil- ity, and neurological plausibility of other connec- tionist architectures. Since virtually all charac- terizations of natural language syntax have relied heavily on symbolic representations, this architec- ture is ideally suited for the investigation of syn- tactic parsing. *This research was supported by ARO grant DAAL 03-89-C-0031, DARPA grant N00014-90-J- 1863, NSF grant IRI 90-16592, and Ben Franklin grant 91S.3078C-1. The computational architecture proposed by Shastri and Ajjanagadde (1990) provides a rather general purpose computing framework, but it does have significant limitations. A computing mod- ule can represent entities, store predications over those entities, and use pattern-action rules to ma- nipulate this stored information. This form of rep- resentation is very expressive, and pattern-action rules are a general purpose way to do compu- tation. However, this architecture has two lim- itations which pose difficult problems for pars- ing natural language. First, only a conjunction of predications can be stored. The architecture cannot represent arbitrary disjunction. This lim- itation implies that the parser's representation of syntactic structure must be able to leave unspec- ified the information which the input has not yet determined, rather than having a disjunction of more completely specified possibilities for com- pleting the sentence. Second, the memory ca- pacity of any module is bounded. The number of entities which can be stored is bounded by a small constant, and the number of predications per predicate is also bounded. These bounds pose problems for parsing because the syntactic struc- tures which need to be recovered can be arbitrarily large. This problem can be solved by allowing the parser to output the syntactic structure incremen- tally, thus allowing the parser to forget the infor- mation which it has already output and which it no longer needs to complete the parse. 
This tech- nique requires that the representation of syntactic structure be able to leave unspecified the informa- tion which has already been determined but which is no longer needed for the completion of the parse. Thus the limitations of the architecture mean that the parser's representation of syntactic structure must be able to leave unspecified both the infor- mation which the input has not yet determined and the information which is no longer needed. In order to comply with these requirements, the parser uses Structure Unification Grammar (Henderson, 1990) as its grammatical framework. SUG is a formalization of accumulating informa- 144 tion about the phrase structure of a sentence un- til a complete description of the sentence's phrase structure tree is constructed. Its extensive use of partial descriptions makes it ideally suited for dealing with the limitations of the architecture. This paper focuses on the parser's represen- tation of phrase structure information and on the way the parser accumulates this information dur- ing a parse. Brief descriptions of the grammar formalism and the implementation in the connec- tionist architecture are also given. Except where otherwise noted, a simulation of the implementa- tion has been written, and its grammar supports a small set of examples. A more extensive gram- mar is under development. SUG is clearly an ade- quate grammatical framework, due to its ability to straightforwardly simulate Feature Structure Based Tree Adjoining Grammar (Vijay-Shanker, 1987), as well as other formalisms (Henderson, 1990). Initial investigations suggest that the con- straints imposed by the parser do not interfere with this linguistic adequacy, and more extensive empirical verification of this claim is in progress. The remainder of this paper will first give an overview of Structure Unification Grammar, then present the parser design, and finally a sketch of its implementation. STRUCTURE UNIFICATION GRAMMAR Structure Unification Grammar is a formaliza- tion of accumulating information about the phrase structure of a sentence until this structure is com- pletely described. This information is specified in partial descriptions of phrase structure trees. An SUG grammar is simply a set of these descriptions. The descriptions cannot use disjunction or nega- tion, but their partiality makes them both flexi- ble enough and powerful enough to state what is known and only what is known where it is known. There is also a simple abstraction operation for SUG descriptions which allows unneeded informa- tion to be forgotten, as will be discussed in the section on the parser design. In an SUG deriva- tion, descriptions are combined by equating nodes. This way of combining descriptions is extremely flexible, thus allowing the parser to take full ad- vantage of the flexibility of SUG descriptions, and also providing for efficient parsing strategies. The final description produced by a derivation must completely describe some phrase structure tree. This tree is the result of the derivation. The de- sign of SUG incorporates ideas from Tree Adjoin- ing Grammar, Description Theory (Marcus et al., 1983), Combinatory Categorial Grammar, Lexi- cal Functional Grammar, and Head-driven Phrase Structure Grammar. An SUG grammar is a set of partial descrip- tions of phrase structure trees. Each SUG gram- mar entry simply specifies an allowable grouping of information, thus expressing the information in- terdependencies. 
The language which SUG pro- vides for specifying these descriptions allows par- tiality both in the information about individual nodes, and (crucially) in the information about the structural relations between nodes. As in many formalisms, nodes are described with fea- ture structures. The use of feature structures al- lows unknown characteristics of a node to be left unspecified. Nodes are divided into nonterminals, which are arbitrary feature structures, and termi- nals, which are atomic instances of strings. Unlike most formalisms, SUG allows the specification of the structural relations to be equally partial. For example, if a description specifies children for a node, this does not preclude that node from ac- quiring other children, such as modifiers. This partiality also allows grammar entries to under- specify ordering constraints between nodes, thus allowing for variations in word order. This partial- ity in structural information is imperative to allow incremental parsing without disjunction (Marcus et al., 1983). In addition to the immediate domi- nance relation for specifying parent-child relation- ships and linear precedence for specifying ordering constraints, SUG allows chains of immediate dom- inance relationships to be partially specified using the dominance relation. A dominance constraint between two nodes specifies that there must be a chain of zero or more immediate dominance con- straints between the two nodes, but it does not say anything about the chain. This relation is necessary to express long distance dependencies in a single grammar entry. Some examples of SUG phrase structure descriptions are given in figure 1, and will be discussed below. A complete description of a phrase structure tree is constructed from the partial descriptions in an SUG grammar by conjoining a set of grammar entries and specifying how these descriptions share nodes. More formally, an SUG derivation starts with descriptions from the grammar, and in each step conjoins a set of one or more descriptions and adds zero or more statements of equality between nonterminal nodes. The description which results from a derivation step must be satisfiable, so the feature structures of any two equated nodes must unify and the resulting structural constraints must be consistent with some phrase structure tree. The final description produced by a derivation must be a complete description of some phrase struc- ture tree. This tree is the result of the derivation. The sentences generated by a derivation are all those terminal strings which are consistent with the ordering constraints on the resulting tree. Fig- 145 h AP-'~[] t N ~tet plzzat key: ~ x immediately "~h y is the head dominates y xj. feature value Y Y of x X ', x dominates y x t x is a terminal [] empty feature x--~y x precedes y structure Figure 1: Example grammar entries. They can be combined to form a structure for the sentence "Who ate white pizza?". ure 2 shows an example derivation with one step in which all grammar entries are combined and all equations are done. This definition of deriva- tions provides a very flexible framework for investi- gating various parsing strategies. Any ordering of combining grammar entries and doing equations is a valid derivation. The only constraints on deriva- tions come from the meanings of the description primitives and from the need to have a unique re- sulting tree. This flexibility is crucial to allow the parser to compensate for the connectionist archi- tecture's limitations and to parse efficiently. 
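Since the description language is central to what follows, here is a rough Python sketch of an SUG partial description and of a single derivation step (conjoining two descriptions and then equating two nonterminal nodes). It is a reconstruction for illustration only, not code from the paper: the class name, the flat feature dictionaries, and the naive unification check are all simplifying assumptions.

from dataclasses import dataclass, field

# A partial description: feature structures on nodes plus partial structural
# relations (immediate dominance, dominance, and linear precedence).
@dataclass
class Description:
    nodes: set = field(default_factory=set)
    features: dict = field(default_factory=dict)     # node -> {feature: value}
    imm_dominates: set = field(default_factory=set)  # (parent, child) pairs
    dominates: set = field(default_factory=set)      # underspecified chains
    precedes: set = field(default_factory=set)       # ordering constraints

def conjoin(d1, d2):
    # First half of a derivation step: conjoin two descriptions.
    d = Description()
    for attr in ("nodes", "imm_dominates", "dominates", "precedes"):
        setattr(d, attr, getattr(d1, attr) | getattr(d2, attr))
    d.features = {**d1.features, **d2.features}
    return d

def equate(d, x, y):
    # Second half of a derivation step: equate two nonterminal nodes.
    # Feature structures are merged naively here; a real implementation would
    # unify them and check that the structural constraints remain consistent
    # with some phrase structure tree.
    for feat, val in d.features.pop(y, {}).items():
        if d.features.setdefault(x, {}).setdefault(feat, val) != val:
            raise ValueError("feature structures do not unify")
    rename = lambda n: x if n == y else n
    d.nodes = {rename(n) for n in d.nodes}
    for attr in ("imm_dominates", "dominates", "precedes"):
        setattr(d, attr, {(rename(a), rename(b)) for a, b in getattr(d, attr)})
    return d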
Because the resulting description of an SUG derivation must be both a consistent description and a complete description of some tree, an SUG grammar entry can state both what is true about the phrase structure tree and what needs to be true. For a description to be complete it must specify a single immediate dominance tree and all terminals mentioned in the description must have some (possibly empty) string specified for them. Otherwise there would be no way to determine the exact tree structure or the word for each terminal in the resulting tree. A grammar entry can express grammatical requirements by not satisfying these completion requirements locally. For example, in figure 1 the structure for "ate" has a subject node with category NP and with a terminal as the val- ues of its head feature. Because this terminal does not have its word specified, this NP must equate with another NP node which does have a word for the value of its head feature. The unification of the two NP's feature structures will cause the equation of the two head terminals. In this way the struc- ture for "ate" expresses the fact that it obligatorily subcategorizes for a subject NP. The structure for "ate" also expresses its subcategorization for an object NP, but this object is not obligatory since it does not have an underspecified terminal head. Like the subject of "ate", the root of the structure for "white" in figure 1 has an underspecified ter- minal head. This expresses the fact that "white" obligatorily modifies N's. The need to construct a single immediate dominance tree is used in the structure for "who" to express the need for the subcategorized S to have an NP gap. Because the dominated NP node does not have an immediate parent, it must equate with some node which has an immediate parent. The site of this equation is the gap associated with "who". THE PARSER The parser presented in this paper accumulates phrase structure information in the same way as does Structure Unification Grammar. It calcu- lates SUG derivation steps using a small set of operations, and incrementally outputs the deriva- tion as it parses. The parser is implemented in the connectionist architecture proposed by Shastri and Ajjanagadde (1990) as a special purpose mod- ule for syntactic constituent structure parsing. An SUG description is stored in the module's mem- ory by representing nonterminal nodes as entities and all other needed information as predications over these nodes. If the parser starts to run out of memory space, then it can remove some nodes from the memory, thus forgetting all information about those nodes. The parser operations are im- plemented in pattern-action rules. As each word is input to the parser, one of these rules combines one of the word's grammar entries with the current description. When the parse is finished the parser checks to make sure it has produced a complete description of some phrase structure tree. THE GRAMMARS The grammars which are supported by the parser are a subset of those for Structure Unification Grammar. These grammars are for the most part lexicalized. Each lexicalized grammar entry is a rooted tree fragment with exactly one phoneti- cally realized terminal, which is the word of the entry. Such grammar entries specify what infor- mation is known about the phrase structure of the sentence given the presence of the word, and can be used (Henderson, 1990) to simulate Lexi- calized Tree Adjoining Grammar (Schabes, 1990). 
Nonlexical grammar entries are rooted tree frag- ments with no words. They can be used to ex- press constructions like reduced relative clauses, for which no lexical information is necessary. The 146 Ill' , - t I I I i ; 1 who t did t Barbie seet a il h /:; p!ct~, \!fS4h~[]~yest!r!ay, y,A, ., 12I14 . ; i / i "7 : .4 h , ,, ..Nl;!!l!\l..,") who did lhrbie see p cture o J yest i 1 ~ayt xis u., l with y I Figure 2: A derivation for the sentence 'TVho did Barbie see a picture of yesterday".. current mechanism the parser uses to find possible long distance dependencies requires some informa- tion about possible extractions to be specified in grammar entries, despite the fact that this infor- mation currently only has meaning at the level of the parser. The primary limitations on the parser's abil- ity to parse the sentences derivable with a gram- max are due to the architecture's lack of disjunc- tion and limited memory capacity. Technically, constraints on long distance dependencies are en- forced by the parser's limited ability to calcu- late dominance relationships, but the definition of an SUG derivation could be changed to man- ifest these constraints. This new definition would be necessary to maintain the traditional split be- tween competence and performance phenomena. The remaining constraints imposed at the level of the parser are traditionally treated as performance constraints. For example, the parser's bounded memory prevents it from being able to parse arbi- trarily center embedded sentences or from allow- ing arbitrarily many phrases on the right frontier of a sentence to be modified. These are well es- tablished performance constraints on natural lan- guage (Chomsky, 1959, and many others). The lack of a disjunction operator limits the parser's ability to represent local ambiguities. This re- sults in some locally ambiguous grammatical sen- tences being unparsable. The existence of such sentences for the human parser, called garden path sentences, is also well documented (Bever, 1970, among others). The representations currently used for handling local ambiguities appear to be adequate for building the constituent structure of any non-garden path sentences. The full verifi- cation of this claim awaits a study of how effec- tively probabilistic constraints can be used to re- solve ambiguities. The work presented in this pa- per does not directly address the question of how ambiguities between possible predicate-argument structures are resolved. Also, the current parser is not intended to be a model of performance phe- nomena, although since the parser is intended to be computationally adequate, all limitations im- posed by the parser must fall within the set of performance constraints on natural language. THE PARSER DESIGN The parser follows SUG derivations, incrementally combining a grammar entry for each word with the description built from the previous words of the sentence. Like in SUG the intermediate descrip- tions can specify multiple rooted tree fragments, but the parser represents such a set as a list in or- der to represent the ordering between terminals in the fragments. The parser begins with a descrip- tion containing only an S node which needs a head. This description expresses the parser's expectation for a sentence. As each word is read, a gram- mar entry for that word is chosen and combined 147 current grammar description: entry: ~ ~ attaching =~. 
at x current grammar description: entry: ,A r\ leftward attaching aty current grammar description: entry: Q current Z: or dominance instantiating at z current description: current grammar description: entry: //~ J/~ equa~onle~s ~ , ~ combining internal equation key: .~ x is the y host of y xis xi YO equatable with y Figure 3: The operations of the parser. with the current description using one of four com- bination operations. Nonlexical grammar entries can be combined with the current description at any time using the same operations. There is also an internal operation which equates two nodes al- ready in the current description without using a grammar entry. The parser outputs each opera- tion it does as it does them, thus providing incre- mental output to other language modules. After each operation the parser's representation of the current description is updated so that it fully re- flects the new information added by the operation. The five operations used by the parser axe shown in figure 3. The first combination opera- tion, called attaching, adds the grammar entry to the current description and equates the root of the grammar entry with some node already in the cur- rent description. The second, called dominance in- stantiating, equates a node without a parent in the current description with a node in the grammar entry, and equates the host of the unparented node with the root of the grammar entry. The host func- tion is used in the parser's mechanism for enforc- ing dominance constraints, and represents the fact that the unparented node is potentially dominated by its current host. In the case of long distance dependencies, a node's host is changed to nodes further and further down in the tree in a man- ner similar to slash passing in Generalized Phrase Structure Grammar, but the resulting domain of possible extractions is more similar to that of Tree Adjoining Grammar. The equationless combining operation simply adds a grammar entry to the end of the tree fragment list. This operation is some- times necessary in order to delay attachment de- cisions long enough to make the right choice. The leftward attaching operation equates the root of the tree fragment on the end of the list with some node in the grammar entry, as long as this root is not the initializing matrix S 1. The one parser op- eration which does not involve a grammar entry is called internal equating. When the parser's rep- resentation of the current description is updated so that it fully reflects newly added information, some potential equations are calculated for nodes which do not yet have immediate parents. The internal equating operation executes one of these potential equations. There are two cases when this can occur, equating fillers with gaps and equating a root of a tree fragment with a node in the next earlier tree fragment on the list. The later is how tree fragments are removed from the list. The bound on the number of entities which can be stored in the parser's memory requires that the parser be able to forget entities. The imple- mentation of the parser only represents nontermi- nal nodes as entities. The number of nontermi- nals in the memory is kept low simply by forget- ting nodes when the memory starts getting full, thereby also forgetting the predications over the nodes. This forgetting operation abstracts away from the existence of the forgotten node in the phrase structure. 
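Before turning to the constraints on forgetting, the control flow of these five operations can be summarized in a small runnable sketch. Descriptions are reduced here to bags of node names plus a log of equations, which is only enough to show how the tree-fragment list is manipulated; the class names, method signatures, and the memory bound of seven entities are editorial assumptions rather than details of the implemented simulation.

MEMORY_BOUND = 7   # only a small, fixed number of nonterminal entities fits

class Desc:
    def __init__(self, root, nodes):
        self.root, self.nodes, self.equations = root, set(nodes), []
    def merge(self, other):                  # conjoin another description
        self.nodes |= other.nodes
        self.equations += other.equations
    def equate(self, x, y):
        self.equations.append((x, y))
        self.nodes.discard(y)                # x and y are now one entity

class Parser:
    def __init__(self):
        self.frags = [Desc("S", {"S"})]      # initial expectation of a sentence

    def attaching(self, entry, node):
        cur = self.frags[-1]
        cur.merge(entry)
        cur.equate(node, entry.root)         # entry root joins an existing node

    def dominance_instantiating(self, entry, unparented, host, entry_node):
        cur = self.frags[-1]
        cur.merge(entry)
        cur.equate(entry_node, unparented)   # fill the unparented node
        cur.equate(entry.root, host)         # its host meets the entry's root

    def equationless_combining(self, entry):
        self.frags.append(entry)             # delay the attachment decision

    def leftward_attaching(self, entry, entry_node):
        last = self.frags.pop()
        entry.merge(last)
        entry.equate(entry_node, last.root)  # last fragment's root joins entry
        self.frags.append(entry)

    def internal_equating(self, x, y):
        self.frags[-1].equate(x, y)          # e.g. a filler with its gap

    def forget(self, forgettable):
        cur = self.frags[-1]                 # never forget nodes still needed
        while len(cur.nodes) > MEMORY_BOUND and forgettable:
            cur.nodes.discard(forgettable.pop())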
Once a node is forgotten it can no longer be equated with, so nodes which must be equated with in order for the total descrip- tion to be complete can not be forgotten. Forget- ting nodes may eliminate some otherwise possible parses, but it will never allow parses which violate 1As of this writing the implementation of the tree fragment list and these later two combination opera- tions has been designed, but not coded in the simula- tion of the parser's implementation. 148 parser state : S h attaching S h Barbie t dominance instantiating h~rLie t~~ r~ forgetting hS equationless combining tit AP.,,h fashiolllab~]yt Ba sest fashionablyt internal equating grammar entries : Barbiet Barbiet h - :VP dres~est \ fashionablyt h ~esses t Figure 4: An example parse of "Barbie dresses fashionably". the forgotten constraints. Any forgetting strategy can be used as long as the only eliminated parses are for readings which people do not get. Several such strategies have been proposed in the litera- ture. As a simple example parse consider the parse of "Barbie dresses fashionably" sketched in fig- ure 4. The parser begins with an S which needs a head, and receives the word "Barbie". The un- derlined grammar entry is chosen because it can attach to the S in the current description using the attaching operation. The next word input is "dresses", and its verb grammar entry is chosen and combined with the current description using the dominance instantiating operation. In the re- sulting description the subject NP is no longer on the right frontier, so it will not be involved in any future equations and thus can be forgotten. Re- member that the output of the parser is incremen- tal, so forgetting the subject will not interfere with semantic interpretation. The next word input is "fashionably", which is a VP modifier. The parser could simply attach "fashionably", but for the pur- poses of exposition assume the parser is not sure where to attach this modifier, so it simply adds this grammar entry to the end of the tree frag- ment list using equationless combining. The up- dating rules of the parser then calculate that the VP root of this tree fragment could equate with the VP for "dresses", and it records this fact. The internal equating operation can then apply to do this equation, thereby choosing this attachment site for "fashionably". This technique can be used to delay resolving any attachment ambiguity. At this point the end of the sentence has been reached and the current description is complete, so a suc- cessful parse is signaled. Another example which illustrates the parser's ability to use underspecification to delay disam- biguation decisions is given in figure 5. The feature decomposition ~:A,:EV is used for the major cate- gories (N, V, A, and P) in order to allow the object of "know" to be underspecified as to whether it is of category i ([-A,-V]) or V ([-A,TV]). When 149 parser state : grammar entry: Barbiet knows t ~ at mant leftt Figure 5: Delaying the resolution of the ambigu- ity between "Barbie knows a man." and "Barbie knows a man left." "a man" is input the parser is not sure if it is the object of "know" or the subject of this object, so the structure for "a man" is simply added to the parser state using equationless combining. This underspecification can be maintained for as long as necessary, provided there are resources available to maintain it. 
If no verb is subsequently input then the NP can be equated with the -A node using internal equation, thus making "a man" the object of "know". If, as shown, a verb is input then leftward attaching can be used to attach "a man" as the subject of the verb, and then the verb's S node can be equated with the -A node to make it the object of "know". Since this parser is only concerned with constituent structure and not with predicate-argument structure, the fact that the -A node plays two different semantic roles in the two cases is not a problem. THE CONNECTIONIST IMPLEMENTATION The above parser is implemented using the con- nectionist computational architecture proposed by Shastri and Ajjanagadde (1990). This architecture solves the variable binding problem 2 by using units which pulse periodically, and representing differ- ent entities in different phases. Units which are storing predications about the same entity pulse synchronously, and units which are storing pred- ications about different entities pulse in different phases. The number of distinct entities which can be stored in a module's memory at one time is determined by the width of a pulse spike and the time between periodic firings (the period). Neuro- logically plausible estimates of these values put the maximum number of entities in the general vicin- ity of 7-4-2. The architecture does computation with sets of units which implement pattern-action rules. When such a set of units finds its pattern in the predications in the memory, it modifies the memory contents in accordance with its action and 2The variable binding problem is keeping track of what predications are for what variables when more than one variable is being used. Figure 6: The architecture of the parser. the entity(s) which matched. This connectionist computational architecture is used to implement a special purpose module for syntactic constituent structure parsing. A di~ agram of the parser's architecture is shown in fig- ure 6. This parsing module uses its memory to store information about the phrase structure de- scription being built. Nonterminals are the enti- ties in the memory, and predications over nonter- minals are used to represent all the information the parser needs about the current description. Pattern-action rules are used to make changes to this information. Most of these rules implement the grammar. For each grammar entry there is a rule for each way of using that grammar en- try in a combination operation. The patterns for these rules look for nodes in the current descrip- tion where their grammar entry can be combined in their way. The actions for these rules add in- formation to the memory so as to represent the changes to the current description which result from their combination. If the grammar entry is lexical then its rules are only activated when its word is the next word in the sentence. A general purpose connectionist arbitrator is used to choose between multiple rule pattern matches, as with other disambiguation decisions 3. This arbitrator 3Because a rule's pattern matches must be commu- nicated to the rule's action through an arbitrator, the existence and quality of a match must be specified in a single node's phase. For rules which involve more than one node, information about one of the nodes must be represented in the phase of the other node for the purposes of testing patterns. This is the purpose 150 weighs the preferences for the possible choices and makes a decision. 
This mechanism for doing dis- ambiguation allows higher level components of the language system to influence disambiguation by adding to the preferences of the arbitrator 4. It also allows probabilistic constraints such as lexi- cal preferences and structural biases to be used, although these aspects of the parser design have not yet been adequately investigated. Because the parser's grammar is implemented in rules which all compute in parallel, the speed of the parser is independent of the size of the grammar. The internal equating operation is implemented with a rule that looks for pairs of nodes which have been specified as possible equations, and equates them, provided that that equation is chosen by the arbitrator. Equation is done by translating all predications for one node to the phase of the other node, then forgetting the first node. The for- getting operation is implemented with links which suppress all predications stored for the node to be forgotten. The only other rules update the parser state to fully reflects any new information added by a grammar rule. These rules act whenever they apply, and include the calculation of equatability and host relationships. CONCLUSION This paper has given an overview of a connection- ist syntactic constituent structure parser which uses Structure Unification Grammar as its gram- matical framework. The connectionist computa- tional architecture which is used stores and dy- namically manipulates symbolic representations, thus making it ideally suited for syntactic parsing. However, the architecture's inability to represent arbitrary disjunction and its bounded memory ca- pacity pose problems for parsing. These difficul- ties can be overcome by using Structure Unifica- tion Grammar as the grammatical framework, due to SUG's extensive use of partial descriptions. This investigation has indeed led to insights into efficient natural language parsing. This parser's speed is independent of the size of its grammar. It only uses a bounded amount of mem- ory. Its output is incremental, monotonic, and does not include disjunction. Its disambiguation of the signal generation box in figure 6. For all such rules, the identity of one of the nodes can be deter- mined uniquely given the other node and the parser state. For example in the dominance instantiating op- eration, given the unparented node, the host of that node can be found because host is a function. This constraint on parser operations seems to have signifi- cant linguistic import, but more investigation of this possibility is necessary. 4In the current simulation of the parser implemen- tation the arbitrators are controlled by the user. mechanism provides a parallel interface for the in- fluence of higher level language modules. Assum- ing neurologically plausible timing characteristics for the computing units of the connectionist archi- tecture, the parser's speed is roughly compatible with the speed of human speech. In the future the ability of this architecture to do evidential reason- ing should allow the use of statistical information in the parser, thus making use of both grammat- ical and statistical approaches to language in a single framework. REFERENCES Bever, Thomas G (1970). The cognitive basis for linguistic structures. In J. R. Hayes, editor, Cognition and the Development of Language. John Wiley, New York, NY. Chomsky, Noam (1959). On certain formal prop- erties of grammars. Information and Control, 2: 137-167. Cottrell, Garrison Weeks (1989). 
A Connectionist Approach to Word Sense Disambiguation. Mor- gan Kaufmann Publishers, Los Altos, CA. Fanty, Mark (1985). Context-free parsing in con- nectionist networks. Technical Report TR174, University of Rochester, Rochester, NY. Henderson, James (1990). Structure unifica- tion grammar: A unifying framework for in- vestigating natural language. Technical Re- port MS-CIS-90-94, University of Pennsylvania, Philadelphia, PA. Marcus, Mitchell; Hindle, Donald; and Fleck, Margaret (1983). D-theory: Talking about talk- ing about trees. In Proceedings of the 21st An- nual Meeting of the ACL, Cambridge, MA. Schabes, Yves (1990). Mathematical and Compu- tational Aspects of Lexicalized Grammars. PhD thesis, University of Pennsylvania, Philadelphia, PA. Selman, Bart and Hirst, Graeme (1987). Pars- ing as an energy minimization problem. In Lawrence Davis, editor, Genetic Algorithms and Simulated Annealing, chapter 11, pages 141- 154. Morgan Kaufmann Publishers, Los Altos, CA. Shastri, Lokendra and Ajjanagadde, Venkat (1990). From simple associations to system- atic reasoning: A connectionist representation of rules, variables and dynamic bindings. Tech- nical Report MS-CIS-90-05, University of Penn- sylvania, Philadelphia, PA. Revised Jan 1992. Vijay-Shanker, K. (1987). A Study of Tree Ad- joining Grammars. PhD thesis, University of Pennsylvania, Philadelphia, PA. 151
1992
19
AN ALGORITHM FOR VP ELLIPSIS Daniel Hardt Department of Computer and Information Science University of Pennsylvania Philadelphia, PA 19104 Internet: hardt @linc.cis.upenn.edu ABSTRACT An algorithm is proposed to determine an- tecedents for VP ellipsis. The algorithm elim- inates impossible antecedents, and then im- poses a preference ordering on possible an- tecedents. The algorithm performs with 94% accuracy on a set of 304 examples of VP el- lipsis collected from the Brown Corpus. The problem of determining antecedents for VP el- lipsis has received little attention in the litera- ture, and it is shown that the current proposal is a significant improvement over alternative approaches. INTRODUCTION To understand an elliptical expression it is nec- essary to recover the missing material from sur- rounding context. This can be divided into two subproblems: first, it is necessary to determine the antecedent expression. Second, a method of recon- structing the antecedent expression at the ellipsis site is required. Most of the literature on ellipsis has concerned itself with the second problem. In this paper, I propose a solution for the first prob- lem, that of determining the antecedent. I focus on the case of VP ellipsis. VP ellipsis is defined by the presence of an auxiliary verb, but no VP, as in the following example 1: (1) a. It might have rained, any time; b. only - it did not. To interpret the elliptical VP "did not", the antecedent must be determined: in this case, "rained" is the only possibility. The input to the algorithm is an elliptical VP and a list of VP's occurring in proximity to the el- liptical VP. The algorithm eliminates certain VP's IAll examples are taken from the Brown Corpus unless otherwise noted. 9 that are impossible antecedents. Then it assigns preference levels to the remaining VP's, based on syntactic configurations as well as other factors. Any VP's with the same preference level are or- dered in terms of proximity to the elliptical VP. The antecedent is the VP with the highest prefer- ence level. In what follows, I begin with the overall struc- ture of the algorithm. Next the subparts of the algorithm are described, consisting of the elimina- tion of impossible antecedents, and the determina- tion of a preference ordering based on clausal rela- tionships and subject coreference. I then present the results of testing the algorithm on 304 exam- ples of VP ellipsis collected from the Brown Cor- pus. Finally, I examine other approaches to this problem in the literature. THE ALGORITHM The input to the algorithm is an elliptical VP(VPE), and VPlist, a list of VP's occurring in the current sentence, and those occurring in the two immediately preceding sentences. In addition, it is assumed that the parse trees of these sentences are available as global variables, and that NP's in these parse trees have been assigned indices to in- dicate coreference and quantifier binding. The antecedent selection function is: A-Select(VPlist,VPE) VPlist := remove-impossible(VPlist,VPE) VPlist := assign-levels(VPlist,VPE) antecedent. := select-highest(VPlist,VPE) First, impossible antecedents are removed from the VPlist. Then, the remaining items in VPlist are assigned preference levels, and the item with the highest preference level is selected as the antecedent. If there is more than one item with the same preference level, the item closest to the VPE, scanning left from the VPE, is selected. 
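A hedged Python transcription of the selection function makes this control flow explicit. Candidate VPs are represented as plain dictionaries and the three tests are passed in as predicate functions, to be filled in by the conditions described in the following sections; all of the names below are illustrative assumptions rather than the original implementation.

def a_select(vp_list, vpe, impossible, related_clause, coref_subj):
    # Remove impossible antecedents, then assign preference levels.
    candidates = [vp for vp in vp_list if not impossible(vp, vpe)]
    for vp in candidates:
        vp["level"] = 0
        if related_clause(vp, vpe):
            vp["level"] += 1
        if coref_subj(vp, vpe):
            vp["level"] += 1
    # Highest preference level wins; ties go to the VP nearest the elliptical
    # VP, scanning left (approximated here by the larger word position).
    return max(candidates,
               key=lambda vp: (vp["level"], vp["position"]),
               default=None)

# Toy run, loosely modelled on an example discussed later in the paper:
# subject coreference lets the more distant "help ..." outrank the nearer
# "looked down ...".
vps = [{"verb": "help", "position": 5}, {"verb": "looked", "position": 12}]
winner = a_select(vps, {"position": 20},
                  impossible=lambda vp, vpe: False,
                  related_clause=lambda vp, vpe: False,
                  coref_subj=lambda vp, vpe: vp["verb"] == "help")
assert winner["verb"] == "help"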
The definition of the function remove- impossible is as follows: remove-impossible(VPlist,VPE) For all v in VPlist if ACD(v,VPE) or BE-DO-conflict(v,VPE) then remove(v, VPlist) There are two types of impossible antecedents: the first involves certain antecedent-containment structures, and the second involves cases in which the antecedent contains a BE-form and the target contains a DO-form. These are described in detail below. Next, preference levels are assigned to remain- ing items in VPlist by the assign-levels function. (All items on VPlist are initialized with a level of 0.) assign-levels (VPlist, VPE) For all v in VPlist if related-clause(v,VPE) then v.level := v.level + 1 if coref-subj (v,VPE) then v.level := v.level + i An antecedent is preferred if there is a clausal relationship between its clause and the VPE clause, or if the antecedent and the VPE have coreferential subjects. The determination of these preferences is described in detail below. Finally, the select-highest function merely selects the item on VPlist with the highest prefer- ence level. If there is more than one item with the highest preference level, the item nearest to the VPE (scanning left) is selected. IMPOSSIBLE ANTECEDENTS This section concerns the removal of impossible antecedents from VPlist. There are two cases in which a given VP is not a possible antecedent. The first deals with antecedent-containment, the second, with conflicts between BE-forms and DO- forms. ANTECEDENT CONTAINMENT There are cases of VP ellipsis in which the VPE is contained within the antecedent VP: IV [... VPE ...]]vP Such cases are tradition- ally termed antecedent-contained deletion (ACD). They are highly constrained, although the proper 10 formulation of the relevant constraint remains con- troversial. It was claimed by May (1985) and oth- ers that ACD is only possible if a quantifier is present. May argues that this explains the fol- lowing contrast: (2) a. Dulles suspected everyone who Angelton did. b. * Dulles suspected Philby, who Angelton did. However, it has been subsequently noted (cf. Fiengo and May 1991) that such structures do not require the presence of a quantifier, as shown by the following examples: (3) a. Dulles suspected Philby, who Angelton did too. b. Dulles suspected Philby, who Angelton didn't. Thus the algorithm will allow cases of ACD in which the target is dominated by an NP which is an argument of the antecedent verb. It will not allow cases in which the target is dominated by a sentential complement of the antecedent verb, such as the following: (4) That still leaves you a lot of latitude. And I suppose it did. Here, "suppose" is not a possible antecedent for the elliptical VP. In general, configurations of the following form are ruled out: IV [... VPE .--]s-..]vP BE/DO CONFLICTS The auxiliary verb contributes various teatures to the complete verb phrase, including tense, aspect, and polarity. There is no requirement that these features match in antecedent and elliptical VP. However, certain conflicts do not appear to be pos- sible. In general, it is not possible to have a DO- form as the elliptical VP, with an overt BE-form in the antecedent. Consider the following example: (5) Nor can anyone be certain that Prokofief would have done better, or even as well, under different circumstances. His fellow- countryman, Igor Stravinsky, certainly did not. In this example, there are two elements on the VP list: "be certain...", and "do better". 
The tar- get "did not" rules out "be certain" as a possible antecedent, allowing only the reading "Stravinsky did not do better". If the elliptical VP is changed from "did not" to "was not", the situation is re- versed; the only possible reading is then "Stravin- sky was not certain that Prokofief would have done better...". A related conflict to be ruled out is that of ac- tive/passive conflicts. A passive antecedent is not possible if the VPE is a DO-form. For example: (6) Jubal did not hear of Digby's disappear- ance when it was announced, and, when he did, while he had a fleeting suspicion, he dismissed it; In this example, "was announced" is not a possible antecedent for the VPE "did". One possible exception to this rule involves progressive antecedents, which, although they con- tain a BE-form, may be consistent with a DO- form target. The following (constructed) example seems marginally acceptable: (7) Tom was cleaning his room today. Harry did yesterday. Thus a BE-form together with a progressive does not conflict with a DO-form. PREFERENCE LEVELS If there are several possible antecedents for a given VPE, preferences among those antecedents are de- termined by looking for other relations between the VPE clause and the clauses containing the pos- sible antecedents. CLAUSAL RELATIONSHIPS An antecedent for a given VPE is preferred if there is a configurational relationship between the an- tecedent clause and the VPE clause. These include comparative structures and adverbial clauses. Elliptical VP's (VPE) in comparative con- structions are of the form [VP Comparative [NP VPE]] where Comparatives are expressions such as "as well as", "better than", etc. In constructions of this form there is a strong preference that VP is the antecedent for VPE. For example: (8) Now, if Morton's newest product, a corn chip known as Chip-o's, turns out to sell as well as its stock did... Here, the antecedent of the VPE "did" is the VP "sell". The next configuration involves VPE's within adverbial clauses. For example, (9) But if you keep a calendar of events, as we do, you noticed a conflict. Here the antecedent for the VPE "do" is "keep a calendar of events". In general, in configurations of the form: 11 [VP ADV [NP VPE]] VP is preferred over other possible an- tecedents. It is important to note that this is a preference rule, rather than an obligatory constraint. Al- though no examples of this kind were found in the Brown Corpus, violations of this constraint may well be possible. For example: (10) John can walk faster than Harry can run. Bill can walk faster than Barry can. If a reading is possible in which the VPE is "Barry can run", this violates the clausal relation- ship preference rule. SUBJECT COREFERENCE Another way in which two clauses are related is subject coreference. An antecedent is preferred if its subject corefers with that of the elliptical VP. An example: (11) He wondered if the audience would let him finish. They did. The preferred reading has "they" coreferential with "the audience" and the antecedent for "did" the VP "let him finish". Subject "coreference" is determined manually, and it is meant to reflect quantifier binding as well as ordinary coreference - that is, standard instances involving coindexing of NP's. Again, it must be emphasized that the subject coreference rule is a preference rule rather than an obligatory constraint. While no violations were found in the Brown corpus, it is possible to con- struct such examples. 
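The tests described so far are simple enough to state directly; the sketch below gives hedged versions of the BE/DO conflict test and of the configurational test behind the clausal-relationship preference, over candidate VPs represented as small feature dictionaries. The feature names (form, progressive, passive, connective, and so on) are assumptions introduced for illustration, not part of the original algorithm.

COMPARATIVES = {"as well as", "better than"}   # "etc." in the text

def be_do_conflict(antecedent, vpe):
    # Impossible antecedent: a DO-form elliptical VP cannot take an overt
    # BE-form or passive antecedent, except (marginally) a progressive one.
    if vpe.get("form") != "do":
        return False
    if antecedent.get("progressive"):
        return False       # "Tom was cleaning his room today. Harry did ..."
    return antecedent.get("form") == "be" or antecedent.get("passive", False)

def related_clause(antecedent, vpe):
    # Preference: the configurations [VP Comparative [NP VPE]] and
    # [VP ADV [NP VPE]], where VP is the candidate antecedent.
    if vpe.get("embedded_under") is not antecedent:
        return False
    return vpe.get("connective") in COMPARATIVES or vpe.get("adverbial", False)

# "be certain ..." is ruled out as the antecedent of "did not" in example (5).
assert be_do_conflict({"form": "be"}, {"form": "do"})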
INTERACTION OF PREFERENCE RULES There are cases where more than one preference rule applies. The antecedent selected is the item with the highest preference level. If more than one item has the same preference level, the item nearest to the VPE is selected, where nearness is determined by number of words encountered scan- ning left from the VPE. In the following example, two preference rules apply: (12) usually, this is most exasperating to men, who expect every woman to verify their preconceived notions concerning her sex, and when she does not, immediately con- demn her as eccentric and unwomanly. The VPE clause is an adverbial clause modi- fying the following clause. Thus the VP "condemn her as eccentric and unwomanly" receives a pref- erence level of 1. The subject "she" of the VPE is coindexed with "every woman". This causes the VP "verify their preconceived notions concerning her sex" to also receive a preference level of 1. Since both of these elements have the same pref- erence level, proximity is determined by scanning left from the VPE. This selects "verify their pre- conceived notions concerning her sex" as the an- tecedent. TESTING THE ALGORITHM The algorithm has been tested on a set of 304 ex- amples of VP ellipsis collected from the Brown Corpus. These examples were collected using the UNIX grep pattern-matching utility. The version of the Brown Corpus used has each word tagged by part of speech. I defined search patterns for aux- iliary verbs that did not have verbs nearby. These patterns did not succeed in locating all the in- stances of VP ellipsis in the Brown Corpus. How- ever, the 304 examples do cover the full range of types of material in the Brown Corpus, includ- ing both "Informative" (e.g., journalistic, scien- tific, and government texts) and "Imaginative" (e.g., novels, short stories, and humor). I have di- vided these examples into three categories, based on whether the antecedent is in the same sentence as the VPE, the adjacent (preceding) sentence, or earlier ("Long-Distance"). The definition of sen- tence is taken from the sentence divisions present in the Brown Corpus. RESULTS The algorithm selected the correct antecedent in 285, or 94% of the cases. For comparison pur- poses, I present results of an alternative strategy; namely, a simple linear scan of preceding text. In this strategy, the first verb that is encountered is taken to be the head of the antecedent VP. The results of the algorithm and the "Linear Scan" approach are displayed in the following ta- ble. Category Same-sent Adj-sent Long-Dist Total 196 93 15 304 Algorithm No. Correct 193(96%) 85(92%) 7(47%) 285(94%) Linear Scan No. Correct 172(88%) 72(77%) 2(13%) 247(81%) The algorithm performs considerably better than Linear Scan. Much of the improvement is due to "impossible antecedents" which are selected by the Linear Scan approach because they are closest to the VPE. A frequent case of this is contain- ing antecedents that are ruled out by the algo- rithm. Another case distinguishing the algorithm from Linear Scan involves coreferential subjects. There were several cases in which the coreferen- tial subject preference rule caused an antecedent to be selected that was not the nearest to the VPE. One example is: (13) a. But, darn it all, why should we help a cou- ple of spoiled snobs who had looked down their noses at us? b. But, in the end, we did. Here, the correct antecedent is the more dis- tant "help a couple of...", rather than "looked down their noses...". 
There were no cases in which Linear Scan succeeded where the algorithm failed. (14) a. SOURCES OF ERROR I will now look at sources of errors for the algo- rithm. The performance was worst in the Long Distance category, in which at least one sentence intervenes between antecedent and VPE. In sev- eral problem cases in the Long Distance category, it appears that intervening text contains some mechanism that causes the antecedent to remain salient. For example: "...in Underwater Western Eye I'd have a chance to act. I could show what I can do" . b. As far as I was concerned, she had already and had dandily shown what she could do. In this case, the elliptical VP "had already" means "had already had a chance to act". The algorithm incorrectly selects "show what I can do" as the antecedent. The intervening sentence causes the previous antecedent to remain salient, since it is understood as "(If I had a chance to act then) I could show what I can do." Further- more, the choice made by the algorithm might per- haps be eliminated on pragmatic grounds, given the oddness of "she had already shown what she could do and had dandily shown what she could do ." Another way in which the algorithm could be generalized is illustrated by the follow example: (15) a. "I didn't ask you to fight for the ball club", Phil said slowly. b. "Nobody else did, either". Here the algorithm incorrectly selects '~fight for the ball club" as the antecedent, instead of "ask you to fight for the ball club". The subject coref- erence rule does not apply, since "Nobody else" 12 is not coreferential with the subject of any of the possible antecedents. However, its interpretation is dependent on the subject 'T' of "ask you to fight for the ball club". Thus, if one generalized the subject coreference rule to include such forms of dependence, the algorithm would succeed on such examples. Many of the remaining errors involve an an- tecedent that takes a VP or S as complement, of- ten leading to subtle ambiguities. One example of this is the following: (16) a. Usually she marked the few who did thank you, you didn't get that kind much in a place like this: and she played a little game with herself, seeing how downright rude she could act to the others, before they'd take offense, threaten to call the manager. b. Funny how seldom they did: used to it, probably. Here the algorithm selects "call the manager" as antecedent, instead of "threaten to call the manager", which I determined to be the correct antecedent. It may be that many of these cases involve a genuine ambiguity. OTHER APPROACHES The problem addressed here, of determining the antecedent for an elliptical VP, has received little attention in the literature. Most treatments of VP ellipsis (cf. Sag 1976, Williams 1977, Webber 1978, Fiengo and May 1990, Dalrymple, Shieber and Pereira 1991) have focused on the question of determining what readings are possible, given an elliptical VP and a particular antecedent. For a computational system, a method is required to determine the antecedent, after which the possible readings can be determined. Lappin and McCord (1990) present an al- gorithm for VP ellipsis which contains a partial treatment of this problem. However, while they define three possible ellipsis-antecedent configura- tions, they have nothing to say about selecting among alternatives, if there is more than one VP in an allowed configuration. The three configu- rations given by Lappin and McCord for a VPE- antecedent pair < V,A> are: 1. 
V is contained in the clausal complement of a subordinate conjunction SC, where the SC- phrase is either (i) an adjunct of A, or (ii) an adjunct of a noun N and N heads an NP argu- ment of A, or N heads the NP argument of an adjunct of A. 2. V is contained in a relative clause that modifies a head noun N, with N contained in A, and, if 13 a verb A t is contained in A and N is contained in A t, then A p is an infinitival complement of A or a verb contained in A. 3. V is contained in the right conjunct of a senten- tial conjunction S, and A is contained in the left conjunct of S. An examination of the Brown Corpus exam- ples reveals that these configurations are incom- plete in important ways. First, there is no con- figuration that allows a sentence intervening be- tween antecedent and VPE. Thus, none of the Long-Distance examples (about 5% of the sam- ple) would be covered. Configuration (3) deals with antecedent-VPE pairs in adjacent S's. There are many such cases in which there is no sentential conjunction. For example: (17) a. All the generals who held important com- mands in World War 2, did not write books. b. It only seems as if they did. Perhaps configuration (3) could be interpreted as covering any adjacent S's, whether or not an explicit conjunction is present. 9. Furthermore, there are cases in which the ad- jacent categories are something other than S; in the following two examples, the antecedent and VPE are in adjacent VP's. (18) (19) The experts are thus forced to hypothe- size sequences of events that have never occurred, probably never will - but possi- bly might. The innocent malfeasant, filled with that supreme sense of honor found in bars, in- sisted upon replacing the destroyed mona- cle - and did, over the protests of the for- mer owner - with a square monacle. In the following example, the adjacent cate- gory is S'. (20) I remember him pointing out of the win- dow and saying that he wished he could live to see another spring but that he wouldn't. Configurations (1) and (2) deal with antecedent-VPE pairs within the same sentence. In Configuration (1), the VPE is in a subordinate clause, and In (2), the VPE is in a relative clause. In each case, the VPE is c-commanded by the antecedent A. While the configurations cover two 2However, a distinction must be maintained be- tween VPE and related phenomena such as gapping and "pseudo-gapping", in which an explicit conjunc- tion is required. quite common cases, there are other same-sentence configurations in which the antecedent does not c- command the VPE. (21) (22) In the first place, a good many writers who are said to use folklore, do not, unless one counts an occasional superstition or tale. In reply to a question of whether they now tax boats, airplanes and other movable property excluding automobiles, nineteen said that they did and twenty that they did not. In sum, the configurations defined by Lappin and McCord would miss a significant number of cases in the Brown Corpus, and, even where they do apply, there is no method for deciding among alternative possibilities. 3 CONCLUSIONS To interpret an elliptical expression it is neces- sary to determine the antecedent expression, after which a method of reconstructing the antecedent expression at the ellipsis site is required. While the literature on VP ellipsis contains a vast ar- ray of proposals concerning the proper method of reconstructing a given antecedent for an elliptical VP, there has been little attention to the question of determining the antecedent. 
In this paper, I have proposed a solution to this problem; I have described an algorithm that determines the antecedent for elliptical VP's. It was shown that the algorithm achieves 94% ac- curacy on 304 examples of VP ellipsis collected from the Brown Corpus. Many of the failure cases appear to be due to the interaction of VPE with other anaphoric phenomena, and others may be cases of genuine ambiguity. ACKNOWLEDGEMENTS Thanks to Aravind Joshi and Bonnie Webber. This work was supported by the following grants: ARO DAAL 03-89-C-0031, DARPA N00014-90- J-1863, NSF IRI 90-16592, and Ben Franklin 91S.3078C-1. REFERENCES Susan E. Brennan, Marilyn Walker Friedman, and Carl J. Pollard. A Centering Approach to Pro- 3While the problem of antecedent determination for VP ellipsis has been largely neglected, the anal- ogous problem for pronoun resolution has been ad- dressed (cf. Hobbs 1978, Grosz, Joshi, and Weinstein 1983 and 1986, and Brennan, Friedman and Pollard 1987), and two leading proposals have been subjected to empirical testing (Walker 1989). ]4 nouns, Proceedings of the 25th Annual Meeting of the ACL, 1987. Mary Dalrymple, Stuart Shieber and Fer- nando Pereira. Ellipsis and Higher-Order Unifi- cation. Linguistics and Philosophy. Vol. 14, no. 4, August 1991. Robert Fiengo and Robert May. Ellipsis and Anaphora. Paper presented at GLOW 1990, Cam- bridge University, Cambridge, England. Robert Fiengo and Robert May. ndices and Identity. ms. 1991. Barbara Grosz, Aravind Joshi, and Scott We- instein. Providing a Unified Account of Definite Noun Phrases in Discourse. In Proceedings, 2Ist Annual Meeting of the ACL, pp. 44-50, Cam- bridge, MA, 1983. Barbara Grosz, Aravind Joshi, and Scott We- instein. Towards a Computational Theory of Dis- course Interpretation. ms. 1986. Isabelle Haik. Bound VP's That Need To Be. Linguistics and Philosophy 11: 503-530. 1987. Jerry Hobbs. Resolving Pronoun References, Lingua 44, pp. 311-338. 1978. Shalom Lappin and Michael McCord. Anaphora Resolution in Slot Grammar, in Com- putational Linguistics, vol 16, no 4. 1990. Robert May. Logical Form: Its Structure and Derivation, MIT Press, Cambridge Mass. 1985. Ivan A. Sag. Deletion and Logical Form. Ph.D. thesis, MIT. 1976. Marilyn Walker. Evaluating discourse pro- cessing algorithms. In Proceedings, 27th Annual Meeting of the ACL, Vancouver, Canada. 1989. Bonnie Lynn Webber. A Formal Approach to Discourse Anaphora. Ph.D. thesis, Harvard Uni- versity. 1978. Edwin Williams. Discourse and Logical Form. Linguistic Inquiry, 8(1):101-139. 1977.
1992
2
WOULD I LIE TO YOU? MODELLING MISREPRESENTATION AND CONTEXT IN DIALOGUE Carl Gutwin Alberta Research Council 1 6815 8th Street N. E. Calgary, Alberta T2E 7H7, Canada Internet: gutwin@ skyler.arc.ab.ca Gordon McCalla ARIES Laboratory, University of Saskatchewan 2 Saskatoon, Saskatchewan S7N 0W0, Canada ABSTRACT In this paper we discuss a mechanism for modifying context in a tutorial dialogue. The context mechanism imposes a pedagogically motivated misrepresentation (PMM) on a dialogue to achieve instructional goals. In the paper, we outline several types of PMMs and detail a particular PMM in a sample dialogue situation. While the notion of PMMs are specifically oriented towards tutorial dialogue, misrepresentation has interesting implications for context in dialogue situations generally, and also suggests that Grice's maxim of quality needs to be modified. 1. INTRODUCTION Most of the time, truth is a wonderful thing. However, this research studies situations where not saying what you believe to be the truth can be the best course of action. Intentional misrepresentation of a speaker's knowledge appears to be a common and highly pragmatic process used in many different kinds of dialogue, especially tutorial dialogue. We use imperfect or incomplete representations in response to constraints and demands imposed by the situation: for example, many models of the real world are extremely complex, and misrepresentations are often used as useful, comprehensible approximations of complicated systems. People use idealized Newtonian mechanics, the wave (or particle) theory of light, and rules of default reasoning stating that birds fly, penguins are birds, and penguins don't fly. Some systems which cannot be simplified are purposefully ignored: for example, higher order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 This research was completed while C. Gutwin was a graduate student at the University of Saskatchewan. All correspondence should be sent to the first author. 2 Visiting scientist, Learning Research & Development Centre, University of Pittsburgh, 1991-92 differential equations are left out of engineering classes because of their complexity. Simplified and imperfect representations are often found in tutoring discourse. Misrepresentation as a pedagogic strategy holds promise for extending the capabilities of intelligent tutoring systems (ITSs), but the concept also affects computational dialogue research: it builds on the idea of discourse focus and context, extends work on adapting to the user with multiple representations of knowledge, and challenges Grice's maxims of conversation. 2. MOTIVATION AND BACKGROUND Misrepresentations are alterations to a perceived reality. When they have sincere pedagogic purposes, we name them Pedagogically Motivated Misrepresentations, or PMMs. PMMs can reduce the complexity of the dialogue and of the concepts to be learned, provide focus in a busy environment, or facilitate the communication of essential knowledge. PMMs share themes with research into computational dialogue and ITS. PMMs are intimately connected to ideas of instructional and dialogue focus, the latter of which was explored by Grosz [1977], who stated that task-oriented dialogue could be organized into focus spaces, each containing a subset of the dialogue's purposes and entities. 
The collection of focus spaces created by the changing dynamics of a dialogue could be gathered together into a focusing structure which assisted in interpreting new utterances. Adaptation to the hearer is also a concern in dialogue research: beliefs about the hearer or about the situation can be used to vary the structure, complexity, and language of discourse to optimally suit the hearer. Several projects (e.g. [McKeown et al 1985], [Moore & Swartout 1989], [Paris 1989]) have 152 looked at adapting the level or tenor of explanations to a user's needs. Paris's [1989] TAILOR system varies its output (descriptions of complex devices) depending upon the hearer's expertise. Another concern in both dialogue research and ITS research is multiple representations of domain knowledge. TAILOR, for example, uses two different models of each device to construct its explanations. Tutoring systems like SMITHTOWN [Shute and Bonar 1986] and MHO [Lesgold et al. 1987] organize different representations around distinct pedagogic goals; in the domain of electrical circuits, QUEST [Frederiksen & White 1988] provides progressively more sophisticated representations, from a simple qualitative model to quantitative circuit theory. Lastly, any discussion of misrepresentation in dialogue is bound to reflect on Grice's first maxim of quality, "do not say that which you believe to be false." The conversational maxims of H. Paul Grice [1977] are a well-known set of observations about human discourse frequently used in computational dialogue research (for example [Joshi et al 1984], [Moore and Paris 1989], [Reichman 1985]). However, people sometimes accept the truth of Grice's maxims too easily. A close examination reveals difficulties with a literal interpretation of the first maxim of quality. While this maxim seems a reasonable rule to use in dialogue, examination of human discourse shows many instances where uttering falsehoods is legitimate behaviour. For example, in some first year computer science courses, students are told that a semicolon is the terminator of a Pascal statement. This utterance misrepresents reality (a semicolon actually separates statements), but the underlying purpose is sincere: the misrepresentation allows students to begin programming without forcing them to learn about syntax charts, parsing algorithms, or recursive definitions. Grice's maxims have avoided major criticism by the computational dialogue community, and the maxims have been successfully used in limited domains to help dialogue systems interact with their users. Realizing that misrepresentations often occur in tutorial discourse, however, provides us with a context for investigating limits to the Gricean approach. 3. OVERVIEW OF PEDAGOGICALLY MOTIVATED MISREPRESENTATIONS We have identified and characterized several types of PMM that can occur in tutorial discourse. We define each type as a computational structure that, when invoked, alters the dialogue system's own reality and hence the student's perception of reality, for sincere pedagogic purposes. There are five essential computational characteristics governing the use of PMMs: preconditions, applicability conditions, removal conditions, revelation conditions, and effects. 
These conditions are predicates matched against information in the dialogue system's essential data structures: a domain knowledge representation (in this system, a granularity hierarchy after [Greer and McCalla 1989], as shown in Figure 1); a model of the student; and an instructional plan (in this system, a simplified version of Brecht's (1990) content planner, from which a sample partial plan is shown in Figure 2). Each step in the instructional plan provides a teaching operator (such as prepare-to-teach) and a concept from the knowledge base which becomes the focus of the instructional interaction.

[Figure 1. A fragment of the domain representation (root node: "Major Programming Concept")]

In this implementation, PMMs act by manipulating the dialogue system's blackboard-based internal communication. An active PMM intercepts relevant messages before the knowledge base can receive them, then returns misrepresented information instead of the "true" information to the blackboard.

[Figure 2. A partial content plan from Brecht's [1990] planner (plan steps over the concept "conditional expressions")]

The first step in using a misrepresentation involves the PMM's preconditions and applicability conditions. Preconditions are definitional constraints characterizing situations in which a particular PMM is conceivable. Applicability conditions actually determine the suitability of a PMM to a situation. Each applicability condition examines one element of the current instructional context, from the student model, the domain representation, or the instructional plan. The individual conditions are combined to determine a final "score" for the PMM, using a calculus akin to MYCIN's certainty factors ([Shortliffe 1976]). For example, one applicability condition states that less student knowledge about a domain concept can provide evidence for the PMM's greater applicability, and more knowledge implies less applicability.

A PMM's removal conditions provide a facility for determining when the misrepresentation is no longer useful and may be removed. However, a dialogue system also needs to know when a PMM is not working well; after all, there are certain dangers associated with the use of misrepresentations. For example, a student may realize the discrepancy between the altered environment and reality. These situations are monitored by a PMM's revelation conditions, guiding the system in cases where it must be ready to abandon and reveal the misrepresentation.

If preconditions and applicability conditions are satisfied, a PMM's procedural effects can be applied to the domain representation, implementing the 'alternative reality' presented to the student through the dialogue. The way in which the student's perceived environment is altered and restored plays a crucial part in a misrepresentation's success. The dialogue actions which accomplish these changes compose two unique subdialogues. An alteration subdialogue must make a smooth transition to the altered environment; a restoration subdialogue has the opposite effect: it must restore the "real" environment, knot all the loose ends created by the misrepresentation, and help the student transfer knowledge from the misrepresented environment to the real environment.
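The combination of applicability-condition scores can be made concrete with a small sketch. This is only an illustration of the MYCIN-style calculus the paper mentions: the particular condition functions, the 0-to-1 score range, and the shape of the instructional context are assumptions, not details of the implemented system.

```python
# Minimal sketch of combining PMM applicability conditions with a
# MYCIN-like calculus (hypothetical condition functions and context).

def combine(scores):
    """Combine evidence scores in [0, 1] the way MYCIN combines
    positive certainty factors: cf = cf1 + cf2 * (1 - cf1)."""
    total = 0.0
    for s in scores:
        total = total + s * (1.0 - total)
    return total

# Each applicability condition inspects one element of the instructional
# context and returns a score; these conditions are invented for illustration.
def knows_little_about_concept(ctx):
    # Less student knowledge -> more applicable (the paper's example condition).
    return 1.0 - ctx["student_model"]["knowledge"][ctx["current_concept"]]

def concept_is_difficult(ctx):
    return ctx["domain"]["difficulty"][ctx["current_concept"]]

applicability_conditions = [knows_little_about_concept, concept_is_difficult]

context = {
    "current_concept": "base case",
    "student_model": {"knowledge": {"base case": 0.2}},
    "domain": {"difficulty": {"base case": 0.6}},
}

score = combine(c(context) for c in applicability_conditions)
print(f"applicability score: {score:.2f}")   # 0.8 + 0.6 * (1 - 0.8) = 0.92
```

A score computed this way could then be compared across the candidate PMMs to rank their suitability for the current situation.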
Restoration subdialogues must guard against another potential danger of misrepresentation: that students may retain incorrect information even after the misrepresentation has been retracted at the close of the learning episode.

4. DETAILS OF THE PMM MODEL
We have identified several types of pedagogic misrepresentations, and have implemented and evaluated them in a partial tutorial dialogue system. The implemented system concentrates on the function of the misrepresentation expert, and therefore the dialogue system is not fully functional: for example, it does not process or generate surface natural language. We have implemented the misrepresentation expert and the PMM structures, the blackboard communication architecture, the student model, and the domain knowledge (see Figure 1). The content planner and other system components are implemented as shells able to provide necessary information when needed. Input to the system is a teaching situation including information from the content planner, the student model, and the domain. The system's output is a log of system actions detailing the simulation of the teaching situation.

Figure 3 shows the organization of the implemented PMMs, some of which inherit shared conditions and effects. The implemented PMMs have a variety of uses: Ignore-Specializations PMM simplifies concepts by reducing the number of kinds that a concept has; Compress-Redirect PMM collapses a part of the granularity hierarchy to allow specific instantiations of general concepts. There are also extended versions of these two PMMs which have more wide-reaching effects. The remaining PMMs are Entrapment PMM, which uses a misconception to corner a student and add weight to the illustration of a better conception, and Simplify-Explanation PMM, which reduces the complexity of a concept's functional explanation. The remaining restriction PMM, Restrict-Peripheral PMM, is detailed in the following section to illustrate the concept of misrepresentation and the elements of the PMM model, and to show the PMM's use in an actual dialogue.

[Figure 3. The PMM hierarchy — Local and Extended variants of the Ignore-Specializations and Compress-Redirect PMMs, plus the other implemented PMMs]

The purpose of the "Restrict Peripheral Concepts" PMM is to simplify concepts related to the current teaching concept. For example, during an initial discussion of base cases (while learning programming in Lisp), a student might benefit from a misrepresentation which restricts recursive cases to a single type, the variety of recursive case used with cdr recursion. The restriction allows both participants in the dialogue to discuss and refer to a single common object, and allows the student to concentrate on base cases without needing to know the complexities of recursive cases.

This PMM's preconditions check that there are peripheral concepts in the current instructional context. Applicability conditions determine whether those concepts should be simplified, by considering the domain's pedagogic complexity and the student's capabilities. For example, the PMM considers the difficulty ratings of the current concept and the peripheral concept, the student's knowledge of these concepts and any existing difficulties with them as shown in the student model. In addition, the PMM considers other factors such as the student's anxiety level and their ability with structural relationships.
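As a rough picture of the "alternative reality" such a restriction imposes, the sketch below filters knowledge-base queries so that a peripheral concept appears to have only one specialization. The toy hierarchy, the class names, and the wrapper interface are assumptions made for illustration; the paper's system achieves the same effect by intercepting blackboard messages rather than wrapping a Python object.

```python
# Sketch: a Restrict-Peripheral-style filter over a toy granularity hierarchy.
# Concept names and the query interface are assumptions for illustration.

HIERARCHY = {
    "recursion": ["base case", "recursive case"],
    "recursive case": ["cdr recursive case", "cdr-cons recursive case",
                       "numeric recursive case"],
    "base case": ["empty-list base case", "single-element base case"],
}

class KnowledgeBase:
    def specializations(self, concept):
        return HIERARCHY.get(concept, [])

class RestrictPeripheralPMM:
    """While active, hides all but one chosen specialization of a
    peripheral concept; every other query passes through unchanged."""
    def __init__(self, kb, peripheral, visible):
        self.kb, self.peripheral, self.visible = kb, peripheral, visible

    def specializations(self, concept):
        answer = self.kb.specializations(concept)
        if concept == self.peripheral:
            return [s for s in answer if s == self.visible]
        return answer

kb = KnowledgeBase()
altered = RestrictPeripheralPMM(kb, "recursive case", "cdr recursive case")
print(altered.specializations("recursive case"))  # ['cdr recursive case']
print(altered.specializations("base case"))       # unchanged
```

Removing the misrepresentation then amounts to routing queries back to the unwrapped knowledge base, which is when the restoration subdialogue becomes necessary.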
Removal conditions for this PMM consider factors such as whether or not instruction about the current concept has been completed, or whether the instructional context has changed so markedly that the PMM can no longer be useful. Revelation conditions cover two other cases for a PMM's removal: when the student challenges the misrepresentation, and when the student or another part of the dialogue system requires a hidden part of the domain. If applied, the effect of this PMM is to restrict peripheral concepts related to the current concept such that all but one of their specializations are hidden. The PMM carries out the restriction, but does not choose the specializations that will remain visible: that decision is left to the pedagogic expert, using the instructional plan and the student model. 5. EXAMPLE DIALOGUE PMM "Restrict Peripheral Concepts" is illustrated below in an example dialogue. The dialogue is based on an actual trial of the implemented system, which determined when to invoke the PMM, when to revoke it, and all the interactions between the knowledge base and the dialogue system. However, the surface utterances are fabricated to illustrate how the misrepresentation system would function in a completed tutorial discourse system. The teaching domain in the dialogue is recursion in Lisp (as shown in Figure 1), and the system believes the student to be a novice Lisp programmer. T: ... the next thing I'd like to show you is the part of recursion that stops the reduction. The system's current instructional context contains a teaching operator, "prepare to teach x," and a current concept, "base case." The current situation satisfies the preconditions of PMM "Restrict Peripheral Concepts," and its applicability score ranks it as most applicable to the situation. The PMM thus determines that the peripheral concept "recursive case" will be restricted to one specialization, and the pedagogic expert chooses 'cdr recursive case' as the most appropriate specialization for novice students. The system asks the instructional planner to replan given the altered view of the domain, and enters into an alteration subdialogue with the student. Although these subdialogues are only represented as stubs in the system's internal notation, the discourse could proceed as follows: T: Do you remember the last example you saw? S: Yes. 155 T: OK. Remember that I pointed out the parts of the recursive function, the base case and the recursive case? S: Yup. T: Great. Now, I'll just put that example back on for a second. You'll notice that the recursive case looks like "(t (allnums (cdr liszt)))" Got that? S: Yup. T: Ok. For when we look at the base case, I want you to assume that this recursive case is the only kind of recursive case that there is. Then when we write some programs, you won't have to worry about the recursive case part. Does that sound ok? [At this point the system has already imposed its alteration on the knowledge base, and when the system asks for the specializations of 'recursive case,' it will receive only 'cdr recursive case' as an answer.] S: Sure. T: Great. So the thing to remember is, whenever you need a recursive case, use a recursive case like you have in the example. So. Let's move on to looking at the way the base case works; let's start with that example we had up. First, you identify the base case... Later in the dialogue, the student is constructing a solution to another problem: S: I'm not sure about the base case for this one ... I think I'll do the recursive case first. 
What does the recursive case do again?
T: A recursive case reduces the problem by calling the function again with reduced input. The recursive case is the default case of the "cond" statement, and it calls the function again with the cdr of the list input.
[Here the PMM again alters perceived reality, restricting 'recursive case' to 'cdr recursive case']
S: Right. So the recursive case is (t (findb (cdr liszt)))?
T: Yep.
[The PMM is again used to verify the student's query.]
S: OK. Now the base case ...

This exchange shows that the misrepresentation is useful in focusing the dialogue on the current concept of base case, by making the recursive case easy to synthesize. The system continues investigating and teaching base case until the student can analyse and synthesize simple base cases. The instructional plan then raises its next step, "complete base case." Arrival at this plan step satisfies one of the removal conditions for the PMM, so the system engages in a restoration subdialogue with the student, which might go as follows, preparing the student for the next context:

T: Ok. The next thing we'll do is look a little closer at recursive case. Although I told you that there was only one kind of recursive case, there are actually more. The reason we only used one kind of recursive case is because I wanted to make sure you learned the way a base case works without needing all the details of recursive cases. Recursive cases still do the same thing (that is, reducing the input) but the specific parts might do different things than the recursive case we used. Does that sound ok to you?
S: ok.
T: So let's look at recursive cases. We'll only deal with the kinds used with cdr recursion ....

6. RESULTS AND DISCUSSION
Evaluative trials for the PMM system have been aimed specifically at both the individual PMMs and the PMM model. Twenty-six different types of situations have been designed to test the PMMs' relevance, consistence, and coherence. Through these trials the individual PMMs demonstrated their integrity, and the PMM model itself was shown to be capable of working within a dialogue system architecture. Full details of evaluation methodology and results can be found in [Gutwin 1991].

This research project has shown that PMMs can be represented for use in a tutorial dialogue system, and supports their value as a pedagogic tool. However, the foremost contribution of the PMM system to computational dialogue may be how it extends the notion of focus currently used in dialogue research. Grosz and Sidner [1986] see dialogue as a collection of focus spaces which shift in reaction to changes in the discourse's purposes and salient entities. This research suggests that within any of these focus spaces, there can exist a further structure: a context that provides a specific interpretation of the knowledge represented in the system. The same knowledge is "in focus" throughout the focus space, but different contexts can color or interpret that knowledge in different ways. A pedagogically motivated misrepresentation is thus a context mechanism that alters the domain knowledge for an educational purpose. It is possible that we always use some kind of alternate interpretation or misrepresentation to mediate between our knowledge and other dialogue participants. Focusing structure has traditionally been used in interpretation: in several projects ([Grosz 1977], [Sidner 1983]), context structures are shown to be useful in tasks like pronoun resolution or anaphora resolution.
Pragmatic contexts, such as those created by a PMM, can direct generation of discourse as well. They are active reflections of the larger situation, rather than local representations of dialogue structure, and they are able to alter the discourse in order to further some goal. Responding to patterns in the world outside the dialogue allows pragmatic context mechanisms such as PMMs to consider fitness and suitability of a dialogue situation in addition to a focus space's subset of goals and salient entities. Another issue of importance to this research is that of tailoring. While some existing dialogue systems tailor an explanation to the user's level of expertise (e.g. [Paris 1989], [McKeown et al 1985]), the PMM system instead tailors the domain to the learner. The PMM system does not make basic decisions about either content or delivery in a dialogue, but attempts to shape the content's representation into a form which will be best suited to the learning situation. The PMM model also touches on research into multiple representation, in that it provides a mechanism for encapsulating several different interpretations of a knowledge base. The mechanism might be able to model and administer alternate representations of other kinds as well, such as analogy. The usefulness and ubiquity of PMMs also suggests that a literal interpretation of Grice's maxims, particularly the maxim of quality, is inappropriate. Clearly, we often say things we know to be false! However, the maxim of quality can be rescued by indicating the relationship between truth and dialogue purposes: from the original, "do not say that which you believe to be false," we create a new maxim, "do not say that which you believe to be false to your purposes." The new maxim shifts emphasis from an absolute standard of truth in dialogue to the more pragmatic idea of truth relative to a dialogue's goals, and better reflects the way humans actually use discourse. Much remains to be accomplished in this research. There are undoubtedly other as yet undiscovered PMMs. The notion of intentional misrepresentation itself may just be an instance of a more general context mechanism that underlies all dialogue, an idea that should be explored by considering other kinds of dialogue from the perspective of PMMs, and by a closer examination of existing theories of discourse context. Finally, all of the oracles used in the PMM System should be replaced by functioning components so that a dialogue system with complete capabilities can stand alone as proof of the PMM concept. Nevertheless, this research points the way towards the possibility of a new and widely applicable mechanism for modelling dialogue. ACKNOWLEDGMENTS The authors wish to thank the Natural Science and Engineering Research Council of Canada for financial assistance during this research. REFERENCES [Brecht 1990] Brecht (Wasson), B. Determining the Focus of Instruction: Content Planning for Intelligent Tutoring Systems, Ph.D. thesis, University of Saskatchewan, 1990. [Frederiksen & White 1988] Frederiksen, J.R., and White, B. Intelligent Learning Environments for Science Education. in Proceedings of the International Conference on Intelligent Tutoring Systems, Montreal 1988, pp. 250-257. [Greer and McCalla 1989] Greer, J., and McCalla, G. "A computational framework for granularity and its application to educational diagnosis" in Proceedings of the 11th International Joint Conference on Artificial Intelligence, Detroit MI, 1989, pp. 477- 482. [Grice 1977] Grice, H.P. 
"Logic and Conversation" in Syntax and Semantics, Vol. 3, New York: Academic Press, 1975, pp.41-58. [Grosz 1977] Grosz, B. "The Representation and Use of Focus in a System for Understanding Dialogs" in Proceedings of the 11th International Joint Conference on Artificial Intelligence, Cambridge, Massachusetts, 1977, pp. 67-76. [Grosz & Sidner 1986] Grosz, B.J., and Sidner, C. "Attention, Intentions, and the Structure of Discourse" in Computational Linguistics 12, 1986, pp. 175-204. [Gutwin 1991] Gutwin, C. How to Get Ahead by Lying: Using Pedagogically Motivated Misrepresentation in Tutorial Dialogue. M.Sc. Thesis, University of Saskatchewan, 1991. [Joshi et al 1984] Joshi, A., Webber, B., and Weischedel, R. "Preventing False Inferences" in 157 Proceedings of the lOth International Conference on Computational Linguistics, 1984, pp.134-138. [Lesgold et al 1987] Lesgold, A., Bonar, J., Ivil, J, and Bowen, A. An intelligent tutoring system for electronics troubleshooting: DC-circuit understanding, in Knowing and Learning : lssues for the Cognitive Psychology of Instruction, L. Resnick ed., Hillsdale NJ: Lawrence Erlbaum Associates. [McKeown et al 1985] McKeown, K., Wish, M., Matthews, K. "Tailoring Explanations for the User" in Proceedings on the 5th International Joint Conference on Artificial Intelligence, Los Angeles, August 1985, pp.794-798. [Moore andParis 1989] Moore, J., and Paris, C. "Planning Text for Advisory Dialogues" in Proceeding of the 27th Conference of the Association for Computational Linguistics, 1989, pp. 203-211. [Moore and Swartout 1989] Moore, J., and Swartout, W.R. "A reactive approach to explanation," in Proceedings of the 11th International Joint Conference on Artificial Intelligence, Detroit, 1989 pp. [Paris 1989] Paris, Cecile. "The use of explicit user models in a generation system for tailoring answers to the user's level of expertise" in User Models in Dialog Systems, A. Kobsa and W. Wahlster, eds. Berlin: Springer-Verlag, 1989, pp. 200-232. [Reichman 1985] Reichman, R. Getting Computers to Talk Like You and Me. Cambridge, MA: the MIT Press, 1985. [Shortliffe 1976] Shortliffe, E.H. Computer-Based Medical Consultation: MYCIN. New York: Elsevier. [Shute & Bonar 1986] Shute, V., and Bonar, J.G. " An intelligent tutoring system for scientific inquiry skills." in Proceedings of the Eighth Cognitive Science Society Conference, Amherst MA, pp.353- 370. [Sidner 1983] Sidner, C. "Focusing in the Comprehension of Definite Anaphora" in Computational Models of Discourse, M. Brady and R. Berwick, eds. Cambridge, Mass: MIT Press, 1983, pp. 267-330. ~58
1992
20
LATTICE-BASED WORD IDENTIFICATION IN CLARE David M. Carter SRI International Cambridge Computer Science Research Centre 23 Millers Yard Cambridge CB2 1RQ, U.K. dmc@cam, sri. com ABSTRACT I argue that because of spelling and typing errors and other properties of typed text, the identification of words and word boundaries in general requires syntactic and semantic knowledge. A lattice representation is there- fore appropriate for lexical analysis. I show how the use of such a representation in the CLARE system allows different kinds of hy- pothesis about word identity to be integrated in a uniform framework. I then describe a quantitative evaluation of CLARE's perfor- mance on a set of sentences into which ty- pographic errors have been introduced. The results show that syntax and semantics can be applied as powerful sources of constraint on the possible corrections for misspelled words. 1 INTRODUCTION In many language processing systems, uncer- tainty in the boundaries of linguistic units, at various levels, means that data are repre- sented not as a well-defined sequence of units but as a lattice of possibilities. It is common for speech recognizers to maintain a lattice of overlapping word hypotheses from which one or more plausible complete paths are subse- quently selected. Syntactic parsing, of either spoken or written language, frequently makes use of a chart or well-formed substring ta- ble because the correct bracketing of a sen- tence cannot (easily) be calculated determin- istically. And lattices are also often used in the task of converting Japanese text typed in kana (syllabic symbols) to kanji; the lack of in- terword spacing in written Japanese and the complex morphology of the language mean that lexical items and their boundaries cannot be reliably identified without applying syntac- tic and semantic knowledge (Abe et al, 1986). In contrast, however, it is often assumed that, for languages written with interword spaces, it is sufficient to group an input char- acter stream deterministically into a sequence of words, punctuation symbols and perhaps other items, and to hand this sequence to the parser, possibly after word-by-word mor- phological analysis. Such an approach is sometimes adopted even when typographi- cally complex inputs are handled; see, for ex- ample, Futrelle et al, 1991. In this paper I observe that, for typed in- put, spaces do not necessarily correspond to boundaries between lexical items, both for lin- guistic reasons and because of the possibil- ity of typographic errors. This means that a lattice representation, not a simple sequence, should be used throughout front end (pre- parsing) analysis. The CLARE system under development at SRI Cambridge uses such a representation, allowing it to deal straightfor- wardly with combinations or multiple occur- rences of phenomena that would be difficult or impossible to process correctly under a se- quence representation. As evidence for the performance of the approach taken, I describe 159 an evaluation of CLARE's ability to deal with typing and spelling errors. Such errors are es- pecially common in interactive use, for which CLARE is designed, and the correction of as many of them as possible can make an appre- ciable difference to the usability of a system. The word identity and word boundary am- biguities encountered in the interpretation of errorful input often require the application of syntactic and semantic knowledge on a phrasal or even sentential scale. 
Such knowl- edge may be applied as soon as the problem is encountered; however, this brings major problems with it, such as the need for ad- equate lookahead, and the difficulties of en- gineering large systems where the processing levels are tightly coupled. To avoid these diffi- culties, CLARE adopts a staged architecture, in which indeterminacy is preserved until the knowledge needed to resolve it is ready to be applied. An appropriate representation is of course the key to doing this efficiently. 2 SPACES AND WORD BOUNDARIES In general, typing errors are not just a matter of one intended input token being miskeyed as another one. Spaces between tokens may be deleted (so that two or more intended words appear as one) or inserted (so that one word appears as two or more). Multiple errors, involving both spaces and other characters, may be combined in the same intended or ac- tual token. A reliable spelling corrector must allow for all these possibilities, which must, in addition, be distinguished from the use of correctly-typed words that happen to fall out- side the system's lexicon. However, even in the absence of "noise" of this kind, spaces do not always correspond to lexical item boundaries, at least if lexical items are defined in a way that is most con- venient for grammatical purposes. For exam- ple, "special" forms such as telephone num- bers or e-mail addresses, which are common in many domains, may contain spaces. In CLARE, these are analysed using regular ex- pressions (cf Grosz et al, 1987), which may include space characters. When such an ex- pression is realised, an analysis of it, connect- ing non-adjacent vertices if it contains spaces, is added to the lattice. The complexities of punctuation are an- other source of uncertainty: many punctu- ation symbols have several uses, not all of which necessarily lead to the same way of seg- menting the input. For example, periods may indicate either the end of a sentence or an ab- breviation, and slashes may be simple word- internal characters (e.g. X11/Ne WS) or func- tion lexically as disjunctions, as in [1] I'm looking for suggestions for vendors to deal with/avoid. 1 Here, the character string "with/avoid", al- though it contains no spaces, represents three lexical items that do not even form a syntactic constituent. CLARE's architecture and formalism allow for all these possibilities, and, as an exten- sion, also permit multiple-token phrases, such as idioms, to be defined as equivalent to other tokens or token sequences. This facility is especially useful when CLARE is being tai- lored for use in a particular domain, since it allows people not expert in linguistics or the CLARE grammar to extend grammati- cal coverage in simple and approximate, but often practically important, ways. For ex- ample, if an application developer finds that inputs such as "What number of employees have cars?" are common, but that the con- struction "what number of ..." is not han- dled by the grammar, he can define the se- quence "what number of" as equivalent to "how many". This will provide an extension of coverage without the developer needing to know how any of the phrases involved are treated in the grammar. Extending the gram- mar is, of course, a more thorough solution if the expertise is available; the phrasal equiv- alence suggested here will not, for example, aThese two examples are taken from the Sun-spots computer bulletin board. 160 cope correctly with the query "What number of the employees have cars?". 
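The phrasal-equivalence facility described above might be pictured roughly as follows: a table maps a multi-token phrase onto an equivalent token sequence, and a matching span in the lattice receives an extra edge connecting non-adjacent vertices. The edge representation and the matching code below are assumptions for illustration only, not CLARE's actual data structures.

```python
# Sketch of phrasal equivalence over a token lattice: tokens label edges
# between integer vertices, and a matched phrase adds an edge that spans
# the whole phrase.  The representation is assumed for illustration.

EQUIVALENCES = {("what", "number", "of"): ("how", "many")}

def add_equivalence_edges(tokens):
    # Base path: token i spans vertices (i, i + 1).
    edges = [(i, i + 1, (tok,)) for i, tok in enumerate(tokens)]
    for phrase, replacement in EQUIVALENCES.items():
        n = len(phrase)
        for i in range(len(tokens) - n + 1):
            if tuple(tokens[i:i + n]) == phrase:
                # New edge skips the intermediate vertices.
                edges.append((i, i + n, replacement))
    return edges

tokens = "what number of employees have cars ?".split()
for start, end, label in add_equivalence_edges(tokens):
    print(start, end, " ".join(label))
```

Because the extra edge is just another path through the lattice, the later syntactic and semantic stages can choose between the literal tokens and the defined equivalent without any special-case machinery.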
3 CLARE'S PROCESSING STAGES The CLARE system is intended to provide language processing capabilities (both anal- ysis and generation) and some reasoning fa- cilities for a range of possible applications. English sentences are mapped, via a num- ber of stages, into logical representations of their literal meanings, from which reasoning can proceed. Stages are linked by well-defined representations. The key intermediate repre- sentation is that of quasi logical .form (QLF; Alshawi, 1990, 1992), a version of slightly extended first order logic augmented with constructs for phenomena such as anaphora and quantification that can only be resolved by reference to context. The unification of declarative linguistic data is the basic process- ing operation. The specific task considered in this paper is the process of mapping single sentences from character strings to QLF. Two kinds of issue are therefore not discussed here. These are the problem of segmenting a text into sen- tences and dealing with any markup instruc- tions (cf Futrelle et al, 1991), which is logically prior to producing character strings; and pos- sible context-dependence of the lexical phe- nomena discussed, which would need to be dealt with after the creation of QLFs. In the analysis direction, CLARE's front end processing stages are as follows. 1. A sentence is divided into a sequence of clusters separated by white space. 2. Each cluster is divided into one or more tokens: words (possibly inflected), punc- tuation characters, and other items. To- kenization is nondeterministic, and so a lattice is used at this and subsequent stages. 3. Each token is analysed as a sequence of one or more segments. For normal lexi- cal items, these segments are morphemes. . . The lexicon proper is first accessed at this stage. A variety of strategies for error recovery (including but not limited to spelling/ typing correction) are attempted on to- kens for which no segmentation could be found. Edges without segmentations are then deleted; if no complete path re- mains, sentence processing is abandoned. Further edges, possibly spanning non- adjacent vertices, are added to the lat- tice by the phrasal equivalence mecha- nism discussed earlier. Morphological, syntactic and semantic stages then apply to produce one or more QLFs. These are checked for adherence to sortal (se- lectional) restrictions, and, possibly with the help of user intervention, one is selected for further processing. Because tokenization is nondeterministic and does not involve lexical access, it will produce many possible tokens that cannot be further analysed. If sentence [1] above were processed, with/avoid would be one such to- ken. It is important that analyses are found for as many tokens and token sequences as possible, but that error recovery, especially if it involves user interaction, is not attempted unless really necessary. More generally, the system must decide which techniques to apply to which problem tokens, and how the results of doing so should be combined. CLARE's token segmentation phase there- fore attempts to find analyses for all the sin- gle tokens in the lattice, and for any special forms, which may include spaces and therefore span multiple tokens. Next, a series of recov- ery methods, which may be augmented or re- ordered by the application developer, are ap- plied. Globalmethods apply to the lattice as a whole, and are intended to modify its contents or create required lexicon entries on a scale larger than the individual token. 
Local meth- ods apply only to single still-unanalysed to- kens, and may either supply analyses for them 161 or alter them to other tokens. The default methods, all of which may be switched on or off using system commands, supply facilities for inferring entries through access to an ex- ternal machine-readable dictionary; for defin- ing sequences of capitalized tokens as proper names; for spelling correction (described in detail in the next section); and for interacting with the user who may suggest a replacement word or phrase or enter the VEX lexical ac- quisition subsystem (Carter, 1989) to create the required entries. After a method has been applied, the lat- tice is, if possible, pruned: edges labelled by unanalysed tokens are provisionally removed, as are other edges and vertices that then do not lie on a complete path. If pruning suc- ceeds (i.e. if at least one problem-free path remains) then token analysis is deemed to have succeeded, and unanalysed tokens (such as with/avoid) are forgotten; any remaining global methods are invoked, because they may provide analyses for token sequences, but re- maining local ones are not. If full pruning does not succeed, any subpath in the lattice containing more unrecognized tokens than an alternative subpath is eliminated. Subpaths containing tokens with with non-alphabetic characters are penalized more heavily; this ensures that if the cluster "boooks," is in- put, the token sequence "boooks ," (in which "boooks" is an unrecognized token and "," is a comma) is preferred to the single token "boooks," (where the comma is part of the putative lexical item). The next method is then applied. 2 4 SEGMENTATION AND SPELLING CORRECTION A fairly simple affix-stripping approach to to- ken segmentation is adopted in CLARE be- Sin fact, for completeness, CLARE allows the ap- plication of two or more methods in tandem and will combines the results without any intermediate prun- ing. This option would be useful if, in a given appli- cation, two sources of knowledge were deemed to be about equally reliable in their predictions. cause inflectional morphological changes in English tend not to be complex enough to warrant more powerful, and potentially less efficient, treatments such as two-level mor- phology (Koskenniemi, 1983). Derivational morphological relationships typically involve semantic peculiarities as well, necessitating the definition of derived words in the lexicon in their own right. The rules for dividing clusters into :tokens have the same form as those for segmenting tokens into morphemes, and are processed by the same mechanism. Thus ",", like, say, "ed", is defined as a suffix, but one that is treated by the grammar as a separate word rather than a bound morpheme. Rules for punctuation characters are very simple be- cause no spelling changes are ever involved. However, the possessive ending "' s" is treated as a separate word in the CLARE grammar to allow the correct analysis of phrases such as "the man in the corner's wife", and spelling changes can be involved here. Like segmenta- tion, tokenization can yield multiple results, mainly because there is no reason for a com- plex cluster like Mr. or King's not also to be defined as a lexical item. One major advantage of the simplicity of the affix-stripping mechanism is that spelling correction can be interleaved directly with it. Root forms in the lexicon are represented in a discrimination net for efficient access (cf Emirkanian and Bouchard, 1988). 
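The pruning step described above can be sketched as follows: provisionally drop edges whose tokens received no analysis, keep them only if no complete path would otherwise remain, and in that case prefer subpaths with fewer (and less heavily penalized) unrecognized tokens. The graph representation, the exhaustive path search, and the exact penalty are assumptions for illustration; CLARE's own bookkeeping is certainly richer.

```python
# Sketch of pruning a token lattice after an error-recovery method has run.
# Edges are (start, end, token, analysed); vertices are integers.

def complete_paths(edges, start, goal):
    """All edge sequences from start to goal (fine for small, acyclic lattices)."""
    if start == goal:
        return [[]]
    paths = []
    for e in edges:
        if e[0] == start:
            for rest in complete_paths(edges, e[1], goal):
                paths.append([e] + rest)
    return paths

def penalty(path):
    # Unrecognized tokens count 1; 2 if they contain non-alphabetic characters.
    return sum((2 if not e[2].isalpha() else 1) for e in path if not e[3])

def prune(edges, start, goal):
    analysed_only = [e for e in edges if e[3]]
    if complete_paths(analysed_only, start, goal):
        return analysed_only                      # full pruning succeeded
    paths = complete_paths(edges, start, goal)
    if not paths:
        return edges
    best = min(penalty(p) for p in paths)
    keep = {e for p in paths if penalty(p) == best for e in p}
    return [e for e in edges if e in keep]

# The cluster "boooks,": the two-token reading "boooks" + "," is preferred
# to the single unrecognized token "boooks," (heavier penalty).
edges = [(0, 1, "boooks", False), (0, 2, "boooks,", False), (1, 2, ",", True)]
print(prune(edges, 0, 2))
```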
When the spelling corrector is called to suggest possible corrections for a word, the number of simple errors (of deletion, insertion, substitution and transposition; e.g. Pollock and Zamora, 1984) to assume is given. Normal segmentation is just the special case of this with the number of errors set to zero. The mechanism nonde- terministically removes affixes from each end of the word, postulating errors if appropriate, and then looks up the resulting string in the discrimination net, again considering the pos- sibility of error. 3 3This is the reverse of Veronis' (1988) algorithm, where roots are matched before affixes. However, it 162 Interleaving correction with segmentation like this promotes efficiency in the following way. As in most other correctors, only up to two simple errors are considered along a given search path. Therefore, either the affix- stripping phase or the lookup phase is fairly quick and produces a fairly small number of results, and so the two do not combine to slow processing down. Another beneficial con- sequence of the interleaving is that no spe- cial treatment is required for the otherwise awkward case where errors overlap morpheme boundaries; thus desigend is corrected to de- signed as easily as deisgned or designde are. If one or more possible corrections to a to- ken are found, they may either be presented to the user for selection or approval, or, if the number of them does not exceed a pre- set threshold, all be preserved as alternatives for disambiguation at the later syntactic or semantic stages. The lattice representation allows multiple-word corrections to be pre- served along with single-word ones. It is generally recognized that spelling er- rors in typed input are of two kinds: compe- tence errors, where the user does not know, or has forgotten, how to spell a wordi and per- formance errors, where the wrong sequence of keys is hit. CLARE's correction mechanism is oriented towards the latter. Other work (e.g. Veronis, 1988, Emirkanian and Bouchard, 1988, van Berkel and De Smedt, 1988) em- phasizes the former, often on the grounds that competence errors are both harder for the user to correct and tend to make a worse impres- sion on a human reader. However, Emirka- nian and Bouchard identify the many-to-one nature of French spelling-sound correspon- dence as responsible for the predominance of such errors in that language, which they say does not hold in English; and material typed to CLARE tends to be processed further (for seems easier and more efficient to match affixes first, because then the hypothesized root can be looked up without having to allow for any spelling changes; and if both prefixes and suffixes are to be handled, as they are in CLARE, there is no obvious single starting point for searching for the root first. database access, translation, etc) rather than reproduced for potentially embarrassing hu- man consumption. A performance-error ap- proach also has the practical advantage of not depending on extensive linguistic knowl- edge; and many competence errors can be de- tected by a performance approach, especially if some straightforward adjustments (e.g. to prefer doubling to other kinds of letter inser- tion) are made to the algorithm. As well as coping quite easily with mor- pheme boundaries, CLARE's algorithm can also handle the insertion or deletion of word boundary spaces. For the token witha, CLARE postulates both with and with a as corrections, and (depending on the current switch settings) both may go into the lat- tice. 
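The candidate-generation side of such a corrector can be sketched as below: enumerate the strings reachable from the typed token by at most k simple errors (deletion, insertion, substitution, transposition) and keep those found in a root lexicon. This brute-force version deliberately ignores the affix stripping and the discrimination net that make CLARE's interleaved search efficient, and the lexicon and alphabet are toy assumptions.

```python
# Sketch: words within k "simple errors" (deletion, insertion, substitution,
# transposition) of a typed token, filtered against a toy lexicon.

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def one_error_variants(word):
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    variants = set()
    for left, right in splits:
        if right:
            variants.add(left + right[1:])                        # deletion
            for c in ALPHABET:
                variants.add(left + c + right[1:])                # substitution
        if len(right) > 1:
            variants.add(left + right[1] + right[0] + right[2:])  # transposition
        for c in ALPHABET:
            variants.add(left + c + right)                        # insertion
    return variants

def corrections(word, lexicon, k=2):
    candidates = {word}
    for _ in range(k):
        candidates |= {v for w in candidates for v in one_error_variants(w)}
    return sorted(candidates & lexicon)

LEXICON = {"designed", "resigned", "design", "deigned"}
print(corrections("desigend", LEXICON))
# -> ['deigned', 'design', 'designed', 'resigned']
```

The example output shows why later syntactic and semantic filtering matters: even a tiny lexicon yields several plausible candidates for a single mistyped token.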
The choice will only finally be made when a QLF is selected on sortal and other grounds after parsing and semantic analy- sis. For the token pair hey er, CLARE pos- tulates the single correction never, because this involves assuming only one simple er- ror (the insertion of a space) rather than two or more to "correct" each token individ- ually. Multiple overlapping possibilities can also be handled; the input Th m n worked causes CLARE to transform the initial lattice th m n worked into a corrected lattice containing analyses of the words shown here: th.e/to / man/men . a/an/in/ th:m /no/on/,/1/I - worked The edges labelled "them" and "man/men" are constructed first by the "global" spelling correction method, which looks for possible corrections across token boundaries. The edge for the token "m" is then removed because, given that it connects only to errorful tokens on both sides, it cannot form part of any potentially optimal path through the lattice. Corrections are, however, sought for "th" and ]63 "n" as single tokens when the local spelling correction method is invoked. The corrected lattice then undergoes syntactic and semantic processing, and QLFs for the sequences "the man worked" and "the men worked", but not for any sequence starting with "them" or "to", are produced. 5 AN EVALUATION To assess the usefulness of syntactico- semantic constraints in CLARE's spelling cor- rection, the following experiment, intended to simulate performance (typographic) er- rors, was carried out. Five hundred sen- tences, of up to ten words in length, falling within CLARE's current core lexical (1600 root forms) and grammatical coverage were taken at random from the LOB corpus. These sentences were passed, character by charac- ter, through a channel which transmitted a character without alteration with probability 0.99, and with probability 0.01 introduced a simple error. The relative probabilities of the four different kinds of error were deduced from Table X of Pollock and Zamora, 1984; where a new character had to be inserted or sub- stituted, it was selected at random from the original sentence set. This process produced a total of 102 sentences that differed from their originals. The average length was 6.46 words, and there were 123 corrupted tokens in all, some containing more than one simple error. Because longer sentences were more likely to be changed, the average length of a changed sentence was some 15% more than that of an original one. The corrupted sentence set was then pro- cessed by CLARE with only the spelling cor- rection recovery method in force and with no user intervention. Up to two simple er- rors were considered per token. No domain- specific or context-dependent knowledge was used. Of the 123 corrupted tokens, ten were cor- rupted into other known words, and so no correction was attempted. Parsing failed in nine of these cases; in the tenth, the cor- rupted word made as much sense as the orig- inal out of discourse context. In three further cases, the original token was not suggested as a correction; one was a special form, and for the other two, alternative corrections involved fewer simple errors. The corrections for two other tokens were not used because a corrup- tion into a known word elsewhere in the same sentence caused parsing to fail. Only one correction (the right one) was sug- gested for 59 of the remaining 108 tokens. Multiple-token correction, involving the ma- nipulation of space characters, took place in 24 of these cases. 
This left 49 tokens for which more than one correction was suggested, requiring syntactic and semantic processing for further disam- biguation. The average number of corrections suggested for these 49 was 4.57. However, only an average of 1.69 candidates (including, because of the way the corpus was selected, all the right ones) appeared in QLFs satis- fying selectional restrictions; thus only 19% of the wrong candidates found their way into any QLF. If, in the absence of frequency in- formation, we take all candidates as equally likely, then syntactic and semantic processing reduced the average entropy from 1.92 to 0.54, removing 72% of the uncertainty (see Carter, 1987, for a discussion of why entropy is the best measure to use in contexts like this). When many QLFs are produced for a sen- tence, CLARE orders them according to a set of scoring functions encoding syntactic and semantic preferences. For the 49 multiple- candidate tokens, removing all but the best- scoring QLF(s) eliminated 7 (21%) of the 34 wrong candidates surviving to the QLF stage; however, it also eliminated 5 (10~) of the right candidates. It is expected that future development of the scoring functions will fur- ther improve these figures, which are summa- rized in Table 1. The times taken to parse lattices containing multiple spelling candidates reflect the char- acteristics of CLARE's parser, which uses a ]64 Stage Right Wrong Average cand's cand's number Suggested 175 4.57 49 49 In any QLF 34 1.69 In best-scoring 44 27 1.45 QLF(s) Table 1: Correction candidates for the 49 multiple-candidate tokens backtracking, left-corner algorithm and stores well-formed constituents so as to avoid repeat- ing work where possible. In general, when a problem token appears late in the sentence and/or when several candidate corrections axe syntactically plausible, the lattice approach is several times faster than processing the al- ternative strings separately (which tends to be very time-consuming). When the problem token occurs early and has only one plausi- ble correction, the two methods are about the same speed. For example, in one case, a corrupted to- ken with 13 candidate corrections occurred in sixth position in an eight-word sentence. Parsing the resulting lattice was three times faster than parsing each alternative full string separately. The lattice representation avoided repetition of work on the first six words, tIow- ever, in another case, where the corrupted token occurred second in an eight-word sen- tence, and had six candidates, only one of which was syntactically plausible, the lattice representation was no faster, as the incorrect candidates in five of the strings led to the parse being abandoned early. An analogous experiment was carried out with 500 sentences from the same corpus which CLARE could not parse. 131 of the sentences, with average length 7.39 words, suf- fered the introduction of errors. Of these, only seven (5%) received a parse. Four of the seven received no sortally valid QLFs, leaving only three (2%) "false positives". This low figure is consistent with the results from the origi- naJly parseable sentence set; nine out of the ten corruptions into known words in that ex- periment led to parse failure, and only 19% of wrong suggested candidates led to a sor- tallyvalid QLF. 
If, as those figures suggest, the replacement of one word by another only rarely maps one sentence inside coverage to another, then a corresponding replacement on a sentence outside coverage should yield some- thing within coverage even more rarely, and this does appear to be the case. 6 CONCLUSIONS These experimental results suggest that gen- eral syntactic and semantic information is an effective source of constraint for correcting typing errors, and that a conceptually fairly simple staged architecture, where word iden- tity and word boundary ambiguities are only resolved when the relevant knowledge is ready to be applied, can be acceptably efficient. The lattice representation also allows the system to deal cleanly with word boundary uncer- tainty not caused by noise in the input. A fairly small vocabulary was used in the experiment. However, these words were originally selected on the basis of frequency of occurrence, so that expanding the lexi- con would involve introducing proportionately fewer short words than longer ones. Mistyped short words tend to be the ones with many correction candidates, so the complexity of the problem should grow less fast than might be expected with vocabulary size. Further- more, more use could be made of statistical information: relative frequency of occurrence could be used as a criterion for pruning rela- tively unlikely correction candidates, as could more sophisticated statistics in the sugges- tion algorithm, along the lines of Kernighan et al (1990). Phonological knowledge, to al- low competence errors to be tackled more di- rectly, would provide another useful source of constraint. 165 ACKNOWLEDGMENTS CLARE is being developed as part of a collab- orative project involving SRI International, British Aerospace, BP Research, British Tele- com, Cambridge University, the UK Defence Research Agency, and the UK Department of Trade and Industry. REFERENCES Abe, M., Y. Oshima, K. Yuura and N. Take- ichi (1986) "A Kana-Kanji Translation System for Non-Segmented Input Sen- tences Based on Syntactic and Seman- tic Analysis", Proceedings of the Eleventh International Conference on Computa- tional Linguistics, pp 280-285. Alshawi, H. (1990) "Resolving Quasi Logical Forms", Computational Linguistics 16:3, pp. 133-144. Alshawi, H. (ed.) (1992) The Core Language Engine, M.I.T. Press. van Berkel, B., and K. De Smedt (1988) "Tri- phone Analysis: A Combined Method for the Correction of Orthographical and Ty- pographical Errors", Proceedings of the Second Conference on Applied Natural Language Processing, pp. 77-83. Carter, D.M. (1987) "An Information- theoretic Analysis of Phonetic Dictionary Access", Computer Speech and Language, 2:1-11. Carter, D.M. (1989) "Lexical Acquisition in the Core Language Engine", Proceedings of the Fourth Conference of the European Chapter of the Association for Computa- tional Linguistics, pp 137-144. Emirkanian, L., and L.H. Bouchard (1988) "Knowledge Integration in a Robust and Efficient Morpho-syntactic Analyser for French", Proceedings of the Twelfth International Conference on Computa- tional Linguistics, pp 166-171. Futrelle, R.P., C.E. Dunn, D.S. Ellis and M.J. Pescitelli, Jr. (1991) "Preprocessing and Lemcon Design for Parsing Technical Text", Proceedings of the Second [nter- national Workshop on Parsing Technolo- gies, pp. 31-40. Grosz, B. J., D. E. Appelt, P. Martin, and F. Pereira (1987). "TEAM: An Exper- iment in the Design of Transportable Natural-Language Interfaces". Artificial Intelligence 32: 173-243. 
Kernighan, M.D., K.W. Church, and W.A. Gale (1990). "A Spelling Correction Program Based on a Noisy Channel Model", Proceedings of the Thirteenth International Conference on Computational Linguistics, pp 205-210.
Koskenniemi, K. (1983) Two-level morphology: a general computational model for word-form recognition and production. University of Helsinki, Department of General Linguistics, Publications, No. 11.
Pollock, J.J., and A. Zamora (1984) "Automatic Spelling Correction in Scientific and Scholarly Text", Communications of the ACM, 27:4, pp 358-368.
Veronis, J. (1988) "Morphosyntactic Correction in Natural Language Interfaces", Proceedings of the Twelfth International Conference on Computational Linguistics, pp 708-713.
1992
21
An Alternative Conception of Tree-Adjoining Derivation* Yves Schabes Department of Computer and Information Science University of Pennsylvania Philadelphia, PA 19104 Stuart M. Shieber Aiken Computation Laboratory Division of Applied Sciences Harvard University Cambridge, MA 02138 Abstract The precise formulation of derivation for tree- adjoining grammars has important ramifications for a wide variety of uses of the formalism, from syntactic analysis to semantic interpretation and statistical language modeling. We argue that the definition of tree-adjoining derivation must be re- formulated in order to manifest the proper linguis- tic dependencies in derivations. The particular proposal is both precisely characterizable, through a compilation to linear indexed grammars, and computationally operational, by virtue of an ef- ficient algorithm for recognition and parsing. 1 Introduction In a context-free grammar, the derivation of a string in the rewriting sense can be captured in a single canonical tree structure that abstracts all possible derivation orders. As it turns out, this derivation tree also corresponds exactly to the hi- erarchical structure that the derivation imposes on the str!ng, the derived tree structure of the string. The formalism of tree-adjoining grammars (TAG), on the other hand, decouples these two notions of derivation tree and derived tree. Intuitively, the derivation tree is a more finely grained structure *The authors are listed in alphabetical order. The first author was supported in part by DARPA Grant N0014- 90-31863, ARO Grant DAAL03-S9-C-0031 and NSF Grant IRI90-16592. The second author was supported in part by Presidential Young Investigator award IRI-91-57996 from the National Science Foundation. The authors wish to thank Aravind Joshi for his support of the research, and Aravind Joshi, Anthony Kroeh, Fernando Pereira, and K. Vijay-Shanker for their helpful discussions of the issues involved. We are indebted to David Yarowsky for aid in the design of the experiment mentioned in footnote 5 and for its execution. 167 than the derived tree, and as such can serve as a substrate on which to pursue further analysis of the string. This intuitive possibility is made man- ifest in several ways. Fine-grained syntactic anal- ysis can be pursued by imposing on the deriva- tion tree further combinatoriM constraints, for instance, selective adjoining constraints or equa- tional constraints over feature structures. Statis- tical analysis can be explored through the speci- fication of derivational probabilities as formalized in stochastic tree-adjoining grammars. Semantic analysis can be overlaid through the synchronous derivations of two TAGs. All of these methods rely on the derivation tree as the source of the important primitive relation- ships among trees. The decoupling of derivation trees from derived trees thus makes possible a more flexible ability to pursue these types of anal- yses. At the same time, the exact definition of derivation becomes of paramount importance. In this paper, we argue that previous definitions of tree-adjoining derivation have not taken full ad- vantage of this decoupling, and are not as appro- priate as they might be for the kind of further analysis that tree-adjoining analyses could make possible. In particular, the standard definition of derivation, due to Vijay-Shanker (1987), requires that elementary trees be adjoined at distinct nodes in elementary trees. 
However, in certain cases, especially cases characterized as linguistic modi- fication, it is more appropriate to allow multiple adjunctions at a single node. In this paper, we propose a redefinition of TAG derivation along these lines, whereby multiple aux- iliary trees of modification can be adjoined at a single node, whereas only a single auxiliary tree of predication can. The redefinition constitutes a new definition of derivation for TAG that we will refer to as extended derivation. In order for such a redefinition to be serviceable, however, it is nec- essary that it be both precise and operational. In service of the former, we provide a rigorous speci- fication of our proposal in terms of a compilation of TAGs into corresponding linear indexed gram- mars (LIG) that makes the derivation structure explicit. With respect to the latter, we show how the generated LIG can drive a parsing algorithm that recovers, either implicitly or explicitly, the extended derivations of the string. The paper is organized as follows. First, we re- view Vijay-Shanker's standard definition of TAG derivation, and introduce the motivation for ex- tended derivations. Then, we present the extended notion of derivation informally, and formalize it through the compilation of TAGs to LIGs. The original compilation provided by Vijay-Shanker and Weir and our variant for extended derivations are both decribed. Finally, we briefly mention a parsing algorithm for TAG that recovers extended derivations either implicitly or explicitly, and dis- cuss some issues surrounding it. Space limitations preclude us from presenting the algorithm itself, but a full description is given elsewhere (Schabes and Shieber, 1992). 2 The Standard Definition of Derivat ion To exemplify the distinction between standard and extended derivations, we exhibit the TAG of Fig- ure 1. This grammar derives some simple noun phrases such as "roasted red pepper" and "baked red potato". The former, for instance, is associ- ated with the derived tree in Figure 2(a). The tree can be viewed as being derived in two ways 1 Dependent: The auxiliary tree fifo is adjoined at the root node (address e) of fire. The re- sultant tree is adjoined at the root node (ad- dress e) of initial tree ap~. This derivation is depicted as the derivation tree in Figure 3(a). Independent: The auxiliary trees fir° and fire are adjoined at the root node of the initial tree ape. This derivation is depicted as the derivation tree in Figure 3(b). In the independent derivation, two trees are sepa- rately adjoined at one and the same node in the initial tree. In the dependent derivation, on the other hand, one auxiliary tree is adjoined to the 1 As is standard in the TAG literature we disallow ad- junction at the foot nodes of auxiliary trees. 168 NP NP I I N N 1 I potato pepper N Adj N* I roasted (%) (%) (g.,) N N Adj N* Adj N* 1 ( "red baked Figure 1: A sample tree-adjoining grammar NP NP I I N N Adj N Adj N roasted Adj N red Adj N i I I I red pepper roasted pepper (a) (b) Figure 2: Two trees derived by the grammar of Figure 1 g, % (a) (b) Figure 3: Derivation trees for the derived tree of Figure 2(a) according to the grammar of Figure 1 other, the latter only being adjoined to the initial tree. We will use this informal terminology uni- formly in the sequel to distinguish the two general topologies of derivation trees. 
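The two derivation topologies just described can be made concrete with a small data structure: a derivation-tree node records which elementary tree is used and, for each adjunction, the address at which a child derivation attaches. The representation below is an informal illustration only (tree names follow the running "roasted red pepper" example, with address "e" standing for the root), not the formalization via linear indexed grammars developed later in the paper.

```python
# Sketch: TAG derivation trees for "roasted red pepper" in the two
# topologies discussed above.  Node layout is assumed for illustration.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Derivation:
    tree: str                                    # elementary tree name
    children: List[Tuple["Derivation", str]] = field(default_factory=list)
    # each child is (sub-derivation, address of the adjunction site)

    def show(self, indent: int = 0) -> None:
        print(" " * indent + self.tree)
        for child, address in self.children:
            print(" " * (indent + 2) + f"adjoined at address {address}:")
            child.show(indent + 4)

# Dependent: beta_roasted adjoins at the root of beta_red, and the result
# adjoins at the root of the initial tree alpha_pepper.
dependent = Derivation("alpha_pepper", [
    (Derivation("beta_red", [(Derivation("beta_roasted"), "e")]), "e"),
])

# Independent: both auxiliary trees adjoin at the root of alpha_pepper
# (disallowed under the standard definition, central to the extended one).
independent = Derivation("alpha_pepper", [
    (Derivation("beta_red"), "e"),
    (Derivation("beta_roasted"), "e"),
])

print("Dependent derivation:")
dependent.show()
print("Independent derivation:")
independent.show()
```

Note that the independent structure by itself does not fix the relative order of the two adjunctions at the shared node, which is exactly the ambiguity discussed in the text.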
The standard definition of derivation, as codified by Vijay-Shanker, restricts derivations so that two adjunctions cannot occur at the same node in the same elementary tree. The dependent notion of derivation is therefore the only sanctioned deriva- tion for the desired tree in Figure 2(a); the inde- pendent derivation is disallowed. Vijay-Shanker's definition is appropriate because for any indepen- dent derivation, there is a dependent derivation of the same derived tree. This can be easily seen in that any adjunetion of/32 at a node at which an adjunction of/31 occurs could instead be replaced by an adjunction of/32 at the root of/31. The advantage of this standard definition of derivation is that a derivation tree in this normal form unambiguously specifies a derived tree. The independent derivation tree on the other hand is ambiguous as to the derived tree it specifies in that a notion of precedence of the adjunctions at the same node is unspecified, but crucial to the derived tree specified. This follows from the fact that the independent derivation tree is symmetric with respect to the roles of the two auxiliary trees (by inspection), whereas the derived tree is not. By symmetry, therefore, it must be the case that the same independent derivation tree specifies the alternative derived tree in Figure 2(b). 3 Motivation for Extended Derivations In the absence of some further interpretation of the derivation tree nothing hinges on the choice of derivation definition, so that the standard def- inition is as reasonable as any other. However, tree-adjoining grammars are almost universally extended with augmentations that make the issue apposite. We discuss three such variations here, all of which argue for the use of independent deriva- tions under certain circumstances. 3.1 Adding Adjoining Constraints Already in very early work on tree-adjoining gram- mars (Joshi et al., 1975) constraints were allowed to be specified as to whether a particular auxiliary tree may or may not be adjoined at a particular node in a particular tree. The idea is formulated in its modern variant as selective-adjoining con- straints (Vijay-Shanker and Joshi, 1985). As an application of this capability, we consider the re- mark by Quirk et al. (1985, page 517) that "di- rection adjuncts of both goal and source can nor- mally be used only with verbs of motion", which accounts for the distinction between the following sentences: (1)a. Brockway escorted his sister to the annual cotillion. b. #Brockway resembled his sister to the an- nual cotillion. This could be modeled by disallowing through se- lective adjoining constraints the adjunction of the elementary tree corresponding to a to adverbial at the VP node of the elementary tree corresponding to the verb resembles. 2 However, the restriction applies even with intervening (and otherwise ac- ceptable) adverbials. (2)a. Brockway escorted his sister last year. b. Brockway escorted his sister last year to the annual cotillion. (3)a. Brockway resembled his sister last year. b. #Brockway resembled his sister last year to the annual cotillion. Under the standard definition of derivation, there is no direct adjunction in the latter sentence of the to tree into the resembles tree. Rather, it is dependently adjoined at the root of the elemen- tary tree that heads the adverbial last year, the latter directly adjoining into the main verb tree. 
To restrict both of the ill-formed sentences, then, a restriction must be placed not only on adjoining 2Whether the adjunction occurs at the VP node or the S node is immaterial to the argtnnent. 169 (4)a. b. (5)a. b. (6)a. * b. * the goal adverbial in a resembles context, but also in the last year adverbial context. But this con- straint is too strong, as it disallows sentence (2b) above as well. The problem is that the standard derivation does not correctly reflect the syntactic relation be- tween adverbial modifier and the phrase it modi- fies when there are multiple modifications in a sin- gle clause. In such a case, each of the adverbials independently modifies the verb, and this should be reflected in their independent adjunction at the same point. But this is specifically disallowed in a standard derivation. It is important to note that the argument ap- plies specifically to auxiliary trees that correspond to a modification relationship. Auxiliary trees are used in TAG typically for predication relations as well, 3 as in the case of raising and sentential com- plement constructions. 4 Consider the following sentences. (The brackets mark the leaves of the pertinent trees to be combined by adjunction in the assumed analysis.) Brockway conjectured that Harrison wanted to escort his sister. [Brockway conjectured that] [Harrison wanted] [to escort his sister] Brockway wanted to try to escort his sis- ter. [Srockway wanted] [to try] [to escort his sister] Harrison wanted Brockway tried to escort his sister. [Harrison wanted] [Brockway tried] [to es- cort his sister] Assume (following, for instance, the analysis of Kroch and Joshi (1985)) that the trees associ- ated with the various forms of the verbs "try", "want", and "conjecture" all take sentential com- plements, certain of which are tensed with overt subjects and others untensed with empty subjects. The auxiliary trees for these verbs specify by ad- 3We use the term 'predication' in its logical sense, that is, for auxiliary trees that serve as logical predicates over the trees into which they adjoin, in contrast to the term's linguistic sub-sense in which the argument of the predicate is a linguistic subject. 4 The distinction between predicative and modifier trees has been proposed previously for purely linguistic reasons by Kroch (1989), who refers to them as thematic and ath- ematic trees, respectively. The arguments presented here can be seen as providing further evidence for differentiating the two kinds of auxiliary trees. 170 junction constraints which type of sentential com- plement they take: "conjecture" requires tensed complements, "want" and "try" untensed. Under this analysis the auxiliary trees must not be al- lowed to independently adjoin at the same node. For instance, if trees corresponding to "Harrison wanted" and "Brockway tried" (which both re- quire untensed complements) were both adjoined at the root of the tree for "to escort his sister", the selective adjunction constraints would be satisfied, yet the generated sentence (6a) is ungrammatical. Thus, the case of predicative trees is entirely unlike that of modifier trees. Here, the standard notion of derivation is exactly what is needed as far as in- terpretation of adjoining constraints is concerned. In summary, the interpretation of adjoining con- straints in TAG is sensitive to the particular no- tion of derivation that is used. Therefore, it can be used as a litmus test for an appropriate definition of derivation. 
As such, it argues for a nonstandard, independent, notion of derivation for modifier aux- iliary trees and a standard, dependent, notion for predicative trees. 3.2 Adding Statistical Parameters In a similar vein, the statistical parameters of a stochastic lexicalized TAG (SLTAG) (Resnik, 1992; Schabes, 1992) specify the probability of ad- junction of a given auxiliary tree at a specific node in another tree. This specification may again be interpreted with regard to differing derivations, obviously with differing impact on the resulting probabilities assigned to derivation trees. (In the extreme case, a constraint prohibiting adjoining corresponds to a zero probability in an SLTAG. The relation to the argument in the previous sec- tion follows thereby.) Consider a case in which linguistic modification of noun phrases by adjec- tives is modeled by adjunction of a modifying tree. Under the standard definition of derivation, mul- tiple modifications of a single NP would lead to dependent adjunctions in which a first modifier adjoins at the root of a second. As an example, we consider again the grammar given in Figure 1, that admits of derivations for the strings "baked red potato" and "baked red pepper". Specifying adjunction probabilities on standard derivations, the distinction between the overall probabilities for these two strings depends solely on the ad- junction probabilities of fire (the tree for red) into apo and ape (those for potato and pepper, respec- tively), as the tree fib for the word baked is adjoined in both cases at the root of fl~ in both standard derivations. In the extended derivations, on the other hand, both modifying trees are adjoined in- dependently into the noun trees. Thus, the overall probabilities are determined as well by the prob- abilities of adjunction of the trees for baked into the nominal trees. It seems intuitively plausible that the most important relationships to charac- terize statistically are those between modifier and modified, rather than between two modifiers. 5 In the case at hand, the fact that potatoes are more frequently baked, whereas peppers are roasted, would be more determining of the expected overall probabilities. Note again that the distinction between modi- fier and predicative trees is important. The stan- dard definition of derivation is entirely appropriate for adjunction probabilities for predicative trees, but not for modifier trees. 3.3 Adding Semantics Finally, the formation of synchronous TAGs has been proposed to allow use of TAGs in semantic interpretation, natural language generation, and machine translation. In previous work (Shieber and Schabes, 1990), the definition of synchronous TAG derivation is given in a manner that requires multiple adjunctions at a single node. The need for such derivations follows from the fact that syn- chronous derivations are intended to model seman- tic relationships. In cases of multiple adjunction of modifier trees at a single node, the appropri- ate semantic relationships comprise separate mod- ifications rather than cascaded ones, and this is reflected in the definition of synchronous TAG derivation. 6 Because of this, a parser for syn- chronous TAGs must recover, at least implicitly, the extended derivations of TAG derived trees. 5Intuition is an appropriate guide in the design of the SLTAG framework, as the idea is to set up a linguisti- cally plausible infrastructure on top of which a lexically- based statistical model can be built. 
In addition, sugges- tive (though certainly not conclusive) evidence along these lines can be gleaned from corpora analyses. For instance, in a simple experiment in which medium frequency triples of exactly the discussed form "(adjective) (adjective) (noun)" were examined, the mean mutual information between the first adjective and the noun was found to be larger than that between the two adjectives. The statistical assump- tions behind the experiment do not allow very robust con- clusions to be drawn, and more work is needed along these lines. 6The importance of the distinction between predicative and modifier trees with respect to how derivations are de- fined was not appreciated in the earlier work; derivations were taken to be of the independent variety in all cases. In future work, we plan to remedy this flaw. 171 Note that the independence of the adjunction of modifiers in the syntax does not imply that seman- tically there is no precedence or scoping relation between them. As exemplified in Figure 4, the de- rived tree generated by multiple independent ad- junctions at a single node still manifests nesting relationships among the adjoined trees. This fact may be used to advantage in the semantic half of a synchronous tree-adjoining grammar to specify the semantic distinction between, for example, the following two sentences: 7 (7)a. Brockway paid for the tickets twice inten- tionally. b. Brockway paid for the tickets intention- ally twice. We hope to address this issue in greater detail in future work on synchronous tree-adjoining gram- mars. 4 Informal Specification of Extended Derivations We have presented several arguments that the standard notion of derivation does not allow for an appropriate specification of dependencies to be captured. An extended notion of derivation is needed that . Differentiates predicative and modifier auxil- iary trees; 2. Requires dependent derivations for predica- tive trees; 3. Requires independent derivations for modifier trees; and 4. Unambiguously specifies a derived tree. Recall that a derivation tree is a tree with un- ordered arcs where each node is labeled by an el- ementary tree of a TAG and each arc is labeled by a tree address specifying a node in the parent tree. In a standard derivation tree no two sibling arcs can be labeled with the same address. In an extended derivation tree, however, the condition is relaxed: No two sibling arcs to predicative trees can be labeled with the same address. Thus, for any given address there can be at most one pred- icative tree and several modifier trees adjoined at rWe are indebted to an anonymous reviewer for raising this issue crisply through examples similar to those given here. T (a) Co) ~N--N*~ A Figure 4: Schematic extended derivation tree and associated derived tree that node. So as to fully specify the output derived tree, we specify a partial ordering on sibling arcs by mandating that arcs corresponding to modifier trees adjoined at the same address are treated as ordered left-to-right. However, all other arcs, in- cluding those for predicative adjunctions are left unordered. A derivation tree specifies a derived tree through a bottom-up traversal (as is standard since the work of Vijay-Shanker (1987)). The choice of a particular traversal order plays the same role as choosing a particular rewriting derivation order in a context-free grammar -- leftmost or right- most, say -- in eliminating spurious ambiguity due to inconsequential reordering of operations. 
An extended derivation tree specifies a derived tree in exactly the same manner, except that there must be a specification of the derived tree spec- ified when several trees are adjoined at the same node. Assume that in a given tree T at a particular address t, the predicative tree P and the k mod- ifier trees M1,..., Mk (in that order) are directly adjoined. Schematically, the extended derivation tree would appear as in Figure 4(a). Associated with the subtrees rooted at the k + 1 elementary auxiliary trees in this derivation are k + 1 derived auxiIiary trees (Ap and A1,..., Ak, respectively). (The derived auxiliary trees are specified induc- tively; it is this sense in which the definition cor- responds to a bottom-up traversal.) There are many possible trees that might be en- tertained as the derived tree associated with the derivation rooted at T, one for each permutation 172 of the k + 1 auxiliary trees. Since the ordering of the modifiers in the derivation tree is essentially arbitrary, we can fix on a single ordering of these in the output tree. We will choose the ordering in which the top to bottom order in the derived tree follows the partial order on the nodes in the deriva- tion tree. Thus A1 appears higher in the tree than A2, A2 higher than A3 and so forth. This much is arbitrary. The choice of where the predicative tree goes, however, is consequential. There are k + 1 possible positions, of which only two can be seriously main- tained: outermost, at the top of the tree; or inner- most, at the bottom. We complete the (informal) definition of extended derivation by specifying the derived tree corresponding to such a derivation to manifest outermost predication as depicted in Fig- ure 4(b). Both linguistic and technical consequences ar- gue for outermost, rather than innermost, predi- cation. Linguistically, the outermost method spec- ifies that if both a predicative tree and a modifier tree are adjoined at a single node, then the pred- icative tree attaches "higher" than the modifier tree; in terms of the derived tree, it is as if the predicative tree were adjoined at the root of the modifier tree. This accords with the semantic in- tuition that in such a case, the modifier is modify- ing the original tree, not the predicative one. (The alternate "reading", in which the modifier modi- fies the predicative tree, is still obtainable under an outermost-predication standard by having the modifier auxiliary tree adjoin at the root node of the predicative tree.) In contrast, the innermost- predication method specifies that the modifier tree attaches higher, as if the modifier tree adjoined at the root of the predicative tree and was therefore modifying the predicative tree, contra semantic in- tuitions. From a technical standpoint, the outermost- predication method requires no changes to the parsing rules to be presented later, but only a sin- gle addition. The innermost-predication method induces some subtle interactions between the orig- inal parsing rules and the additional one, necessi- tating a much more complicated set of modifica- tions to the original algorithm. (In fact, the com- plexities in generating such an algorithm consti- tuted the precipitating factor that led us to revise our original, innermost-predication, attempt at re- defining tree-adjoining derivation.) 
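The outermost-predication convention can be stated operationally. The sketch below is our own illustration (the wrapper encoding of derived auxiliary trees is an expository assumption, not the paper's machinery): given the original subtree, the modifier trees A1 ... Ak in derivation-tree order, and an optional predicative tree Ap, it assembles the derived tree in the shape of Figure 4(b).

```python
# Derived auxiliary trees are modeled as wrapper functions over whatever
# subtree ends up at their foot node; this is enough to show where the
# predicative tree lands relative to the modifiers.

def aux_tree(name):
    """Model a derived auxiliary tree as a wrapper around its foot subtree."""
    return lambda foot: [name, foot]

def compose_at_node(original, modifiers, predicative=None):
    """Modifiers A1..Ak nest top-to-bottom in derivation-tree order; the
    predicative tree Ap, if present, wraps everything (outermost predication)."""
    result = original
    for wrap in reversed(modifiers):      # Ak ends up lowest, A1 highest
        result = wrap(result)
    if predicative is not None:
        result = predicative(result)      # predication attaches above A1
    return result

A1, A2, Ap = aux_tree('A1'), aux_tree('A2'), aux_tree('Ap')
print(compose_at_node(['T'], [A1, A2], Ap))
# -> ['Ap', ['A1', ['A2', ['T']]]] : Ap outermost, A1 above A2, T at the bottom
```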
5 Formal Specification of Extended Derivations

In all three application areas of TAGs, the need is evidenced for a modified notion of derivation that retains the dependent notion of derivation for predicative trees but mandates independent adjunction for modifier trees. A formal definition of extended derivation can be given by means of a compilation of tree-adjoining grammars into linear indexed grammars. We discuss such a compilation in this section. This compilation is especially useful as it can be used as the basis for a parsing algorithm that recovers the extended derivations for strings. The design of the algorithm is the topic of Section 6.

Linear indexed grammars (LIG) constitute a grammatical framework based, like context-free, context-sensitive, and unrestricted rewriting systems, on rewriting strings of nonterminal and terminal symbols. Unlike these systems, linear indexed grammars, like the indexed grammars from which they are restricted, allow stacks of marker symbols, called indices, to be associated with the nonterminal symbols being rewritten. The linear version of the formalism allows the full index information from the parent to be used to specify the index information for only one of the child constituents. Thus, a linear indexed production can be given schematically as:

    N_0[..β_0] → N_1[β_1] ... N_{s-1}[β_{s-1}] N_s[..β_s] N_{s+1}[β_{s+1}] ... N_k[β_k]

The N_i are nonterminals, the β_i strings of indices. The ".." notation stands for the remainder of the stack below the given string of indices. Note that only one element on the right-hand side, N_s, inherits the remainder of the stack from the parent. (This schematic rule is intended to be indicative, not definitive. We ignore issues such as the optionality of the inherited stack, how terminal symbols fit in, and so forth. Vijay-Shanker and Weir (1990) present a complete discussion.)

Vijay-Shanker and Weir (1990) present a way of specifying any TAG as a linear indexed grammar. The LIG version makes explicit the standard notion of derivation being presumed. Also, the LIG version of a TAG grammar can be used for recognition and parsing. Because the LIG formalism is based on augmented rewriting, the parsing algorithms can be much simpler to understand and easier to modify, and no loss of generality is incurred. For these reasons, we use the technique in this work.

The compilation process that manifests the standard definition of derivation can be most easily understood by viewing nodes in a TAG elementary tree as having both a top and bottom component, identically marked for nonterminal category, that dominate (but may not immediately dominate) each other. (See Figure 5.) The rewrite rules of the corresponding linear indexed grammar capture the immediate domination between a bottom node and its child top nodes directly, and capture the domination between top and bottom parts of the same node by optionally allowing rewriting from the top of a node to an appropriate auxiliary tree, and from the foot of the auxiliary tree back to the bottom of the node. The index stack keeps track of the nodes that adjunction has occurred on so that the recognition to the left and the right of the foot node will occur under identical assumption of derivation structure. In summary, the following LIG rules are generated:

1. Immediate domination dominating foot: For each auxiliary tree node η that dominates the foot node, with children η_1, ..., η_f, ..., η_n, where η_f is the child that also dominates the foot node, include a production

    b[..η] → t[η_1] ... t[η_{f-1}] t[..η_f] t[η_{f+1}] ... t[η_n]

2. Immediate domination not including foot: For each elementary tree node η that does not dominate a foot node, with children η_1, ..., η_n, include a production

    b[η] → t[η_1] ... t[η_n]

3. No adjunction: For each elementary tree node η that is not marked for substitution or obligatory adjunction, include a production

    t[..η] → b[..η]

4. Start root of adjunction: For each elementary tree node η on which the auxiliary tree β with root node β_r can be adjoined, include the following production:

    t[..η] → t[..η β_r]

5. Start foot of adjunction: For each elementary tree node η on which the auxiliary tree β with foot node β_f can be adjoined, include the following production:

    b[..η β_f] → b[..η]

6. Start substitution: For each elementary tree node η marked for substitution on which the initial tree α with root node α_r can be substituted, include the production

    t[η] → t[α_r]

[Figure 5: Schematic structure of adjunction with top and bottom of each node separated; the arcs between components are labeled by the types of productions (Types 1/2, 3, 4, and 5) that relate them.]

We will refer to productions generated by Rule i above as Type i productions. For example, Type 3 productions are of the form t[..η] → b[..η]. For further information concerning the compilation see the work of Vijay-Shanker and Weir (1990) and Schabes (1991). For present purposes, it is sufficient to note that the method directly embeds the standard notion of derivation in the rewriting process. To perform an adjunction, we move (by Rule 4) from the node adjoined at to the top of the root of the auxiliary tree. At the root, additional adjunctions might be performed. When returning from the foot of the auxiliary tree back to the node where adjunction occurred, rewriting continues at the bottom of the node (see Rule 5), not the top, so that no more adjunctions can be started at that node. Thus, the dependent nature of predicative adjunction is enforced because only a single adjunction can occur at any given node.

In order to permit extended derivations, we must allow for multiple modifier tree adjunctions at a single node. There are two natural ways this might be accomplished, as depicted in Figure 6.

[Figure 6: Schematic structure of possible predicative and modifier adjunctions with top and bottom of each node separated; panel (a) shows adjunction under the original rules, panels (b) and (c) the two modifications below.]

1. Modified start foot of adjunction rule: Allow moving from the bottom of the foot of a modifier auxiliary tree to the top (rather than the bottom) of the node at which it adjoined (Figure 6b).

2. Modified start root of adjunction rule: Allow moving from the bottom (rather than the top) of a node to the top of the root of a modifier auxiliary tree (Figure 6c).

As can be seen from the figures, both of these methods allow recursion at a node, unlike the original method depicted in Figure 6a. Thus multiple modifier trees are allowed to adjoin at a single node. Note that since predicative trees fall under the original rules, at most a single predicative tree can be adjoined at a node. The two methods correspond exactly to the innermost- and outermost-predication methods discussed in Section 4. For the reasons described there, the latter is preferred. In summary, independent derivation structures can be allowed for modifier auxiliary trees by starting the adjunction process from the bottom, rather than the top, of a node for those trees.
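For concreteness, the sketch below lists the productions that Rules 1-5 above generate for a two-tree fragment of the grammar of Figure 1. It is our own toy rendering: the node names such as 'a.N', the choice of adjunction sites, and the string format of the productions are invented for illustration, and Type 6 is omitted because the fragment has no substitution nodes. The split of Rule 4 into predicative and modifier variants, formalized next, would modify only the Type 4 entries.

```python
TERMINALS = {'pepper', 'red'}
FOOT = 'b.Nf'                                    # foot node of beta_red
CHILDREN = {                                     # immediate-domination structure
    'a.NP': ['a.N'],         'a.N':   ['pepper'],   # alpha_pepper
    'b.N':  ['b.Adj', FOOT], 'b.Adj': ['red'],      # beta_red
}
ADJOINABLE = ['a.N', 'b.N']                      # N nodes where beta_red may adjoin
ALL_NODES = list(CHILDREN) + [FOOT]

rules = []
for node, kids in CHILDREN.items():              # Types 1 and 2
    lhs = f'b[..{node}]' if FOOT in kids else f'b[{node}]'
    rhs = ' '.join(k if k in TERMINALS
                   else (f't[..{k}]' if k == FOOT else f't[{k}]')
                   for k in kids)
    rules.append(f'{lhs} -> {rhs}')
for node in ALL_NODES:                           # Type 3: no adjunction
    rules.append(f't[..{node}] -> b[..{node}]')
for node in ADJOINABLE:                          # Types 4 and 5 for beta_red
    rules.append(f't[..{node}] -> t[..{node} b.N]')
    rules.append(f'b[..{node} {FOOT}] -> b[..{node}]')

print('\n'.join(rules))
```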
Thus, we split Type 4 LIG productions into two subtypes for predicative and modifier trees, respectively.

4a. Start root of predicative adjunction: For each elementary tree node η on which the predicative auxiliary tree β with root node β_r can be adjoined, include the following production:

    t[..η] → t[..η β_r]

4b. Start root of modifier adjunction: For each elementary tree node η on which the modifier auxiliary tree β with root node β_r can be adjoined, include the following production:

    b[..η] → t[..η β_r]

Once this augmentation has been made, we no longer need to allow for adjunctions at the root nodes of modifier auxiliary trees, as repeated adjunction is now allowed for by the new rule 4b. Consequently, Rules 4a and 4b must treat all modifier auxiliary tree root nodes as if they have adjoining constraints that forbid modifier tree adjunctions that do not correspond to modification of the tree itself. This simple modification to the compilation process from TAG to LIG fully specifies the modified notion of derivation. The recognition algorithms for TAG based on this compilation, however, must be adjusted to allow for the new rule types.

6 Recognition and Parsing

Following Schabes (1991), the LIG generated by compiling a TAG can be used as the basis for Earley recognition. Schabes's original method must be modified to respect the differences in compilation engendered by extended derivations. Such parsing rules, along with an extension that allows building of explicit derivation trees on-line as a basis for incremental interpretation, have been developed, and are presented in an extended version of this paper (Schabes and Shieber, 1992). In summary, the algorithm operates as a variant of Earley parsing on the corresponding LIG. The set of extended derivations can subsequently be recovered from the set of Earley items generated by the algorithm. The resultant algorithm can be further modified so as to build an explicit derivation tree incrementally as parsing proceeds; this modification, which is a novel result in its own right, allows the parsing algorithm to be used by systems that require incremental processing with respect to tree-adjoining grammars.

As a proof of concept, the parsing algorithm just described was implemented in Prolog on top of a simple, general-purpose, agenda-based inference engine. Encodings of explicit inference rules are essentially interpreted by the inference engine. The Prolog database is used as the chart; items not already subsumed by a previously generated item are asserted to the database as the parser runs. An agenda is maintained of potential new items. Items are added to the agenda as inference rules are triggered by items added to the chart. Because the inference rules are stated explicitly, the relation between the abstract inference rules described in this paper and the implementation is extremely transparent. Because the prototype was implemented as a meta-interpreter it is not particularly efficient. (In particular, the implementation does not achieve the theoretical O(n^6) bound on complexity, because of a lack of appropriate indexing.) Code for the prototype implementation is available for distribution electronically from the authors.
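The agenda-based architecture just described can be sketched generically. The skeleton below is in Python rather than the authors' Prolog, and the inference rule it runs (a toy transitive-closure example) merely stands in for the actual Earley-style rules over LIG items, which are given only in the extended version of the paper.

```python
from collections import deque

def deduce(axioms, inference_rules, is_goal):
    """Generic agenda-driven chart deduction: items come off the agenda, are
    added to the chart unless already present (a crude stand-in for the
    subsumption check), and each inference rule may propose new items."""
    chart = set()
    agenda = deque(axioms)
    while agenda:
        item = agenda.popleft()
        if item in chart:
            continue
        chart.add(item)
        for rule in inference_rules:
            for consequence in rule(item, chart):
                if consequence not in chart:
                    agenda.append(consequence)
    return [item for item in chart if is_goal(item)]

# Toy instantiation: transitive closure over a reachability relation, standing
# in for completion steps over parser items.
edges = {('a', 'b'), ('b', 'c'), ('c', 'd')}
compose = lambda item, chart: (
    [(item[0], z) for (y, z) in chart if y == item[1]] +
    [(x, item[1]) for (x, y) in chart if y == item[0]])
print(deduce(edges, [compose], lambda item: item == ('a', 'd')))   # [('a', 'd')]
```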
We have argued that the definition of tree-adjoining derivation must be reformulated in order to take greatest ad- vantage of the decoupling of derivation tree and derived tree by manifesting the proper linguistic dependencies in derivations. The particular pro- posal is both precisely characterizable, through a compilation to linear indexed grammars, and com- putationally operational, by virtue of an efficient algorithm for recognition and parsing. References Aravind K. Joshi, L. S. Levy, and M. Takahashi. 1975. Tree adjunct grammars. Journal of Com- puter and System Sciences, 10(1). Anthony Kroch and Aravind K. Joshi. 1985. Lin- guistic relevance of tree adjoining grammars. Technical Report MS-CIS-85-18, Department of Computer and Information Science, University of Pennsylvania, April. Anthony Kroch. 1989. Asymmetries in long dis- tance extraction in a tag grammar. In M. Baltin and A. Kroch, editors, Alternative Conceptions of Phrase Structure, pages 66-98. University of Chicago Press. Randolph Quirk, Sidney Greenbaum, Geoffrey Leech, and Jan Svartvik. 1985. A Comprehen- sive Grammar of the English Language. Long- man. Philip Resnik. 1992. Lexicalized tree-adjoining grammar for distributional analysis. To appear in Proceedings of the 14 th International Confer- ence on Computational Linguistics. Yves Schabes and Stuart M. Shieber. 1992. An alternative conception of tree-adjoining deriva- tion. Technical Report 08-92, Harvard Univer- sity. Yves Schabes. 1991. Computational and mathematical studies of lexicalized grammars. Manuscript in preparation based on the author's PhD dissertation (University of Pennsylvania, August 1990). Yves Schabes. 1992. Stochastic lexicalized tree- adjoining grammars. To appear in Proceedings of the 14 th International Conference on Com- putational Linguistics. Stuart M. Shieber and Yves Schabes. 1990. Syn- chronous tree-adjoining grammars. In Pro- ceedings of the 13 th International Conference 176 on Computational Linguistics (COLING'90), Helsinki. K. Vijay-Shanker and Aravind K. Joshi. 1985. Some computational properties of Tree Adjoin- ing Grammars. In 23 ~d Meeting of the Associ- ation for Computational Linguistics, pages 82- 93, Chicago, Illinois, July. K. Vijay-Shanker and David J. Weir. 1990. Poly- nomial parsing of extensions of context-free grammars. In Masaru Tomita, editor, Current Issues in Parsing Technologies, pages 191-206. Kluwer Accademic Publishers. K. Vijay-Shanker. 1987. A Study of Tree Ad- joining Grammars. Ph.D. thesis, Department of Computer and Information Science, Univer- sity of Pennsylvania.
1992
22
GPSM: A GENERALIZED PROBABILISTIC SEMANTIC MODEL FOR AMBIGUITY RESOLUTION tJing-Shin Chang, *Yih-Fen Luo and tKeh-Yih Su tDepartment of Electrical Engineering National Tsing Hua University Hsinchu, TAIWAN 30043, R.O.C. tEmail: [email protected], [email protected] *Behavior Design Corporation No. 28, 2F, R&D Road II, Science-Based Industrial Park Hsinchu, TAIWAN 30077, R.O.C. ABSTRACT In natural language processing, ambiguity res- olution is a central issue, and can be regarded as a preference assignment problem. In this paper, a Generalized Probabilistic Semantic Model (GPSM) is proposed for preference computation. An effective semantic tagging procedure is proposed for tagging semantic features. A semantic score function is de- rived based on a score function, which inte- grates lexical, syntactic and semantic prefer- ence under a uniform formulation. The se- mantic score measure shows substantial im- provement in structural disambiguation over a syntax-based approach. 1. Introduction In a large natural language processing system, such as a machine translation system (MTS), am- biguity resolution is a critical problem. Various rule-based and probabilistic approaches had been proposed to resolve various kinds of ambiguity problems on a case-by-case basis. In rule-based systems, a large number of rules are used to specify linguistic constraints for re- solving ambiguity. Any parse that violates the se- mantic constraints is regarded as ungrammatical and rejected. Unfortunately, because every "rule" tends to have exception and uncertainty, and ill- formedness has significant contribution to the er- ror rate of a large practical system, such "hard rejection" approaches fail to deal with these situa- tions. A better way is to find all possible interpre- tations and place emphases on preference, rather than weU-formedness (e.g., [Wilks 83].) However, most of the known approaches for giving prefer- ence depend heavily on heuristics such as counting the number of constraint satisfactions. Therefore, most such preference measures can not be objec- tively justified. Moreover, it is hard and cosily to acquire, verify and maintain the consistency of the large fine-grained rule base by hand. Probabilistic approaches greatly relieve the knowledge acquisition problem because they are usually trainable, consistent and easy to meet cer- tain optimum criteria. They can also provide more objective preference measures for "soft re- jection." Hence, they are attractive for a large sys- tem. The current probabilistic approaches have a wide coverage including lexical analysis [DeRose 88, Church 88], syntactic analysis [Garside 87, Fujisaki 89, Su 88, 89, 91b], restricted semantic analysis [Church 89, Liu 89, 90], and experimental translation systems [Brown 90]. However, there is still no integrated approach for modeling the joint effects of lexical, syntactic and semantic in- formation on preference evaluation. A generalized probabilistic semantic model (GPSM) will be proposed in this paper to over- come the above problems. In particular, an in- tegrated formulation for lexical, syntactic and se- mantic knowledge will be used to derive the se- mantic score for semantic preference evaluation. Application of the model to structural disam- 177 biguation is investigated. Preliminary experiments show about 10%-14% improvement of the seman- tic score measure over a model that uses syntactic information only. 2. 
Preference Assignment Using Score Function

In general, a particular semantic interpretation of a sentence can be characterized by a set of lexical categories (or parts of speech), a syntactic structure, and the semantic annotations associated with it. Among the various interpretations of a sentence, the best choice should be the most probable semantic interpretation for the given input words. In other words, the interpretation that maximizes the following score function [Su 88, 89, 91b] or analysis score [Chen 91] is preferred:

    Score(Sem_i, Syn_j, Lex_k, Words)
      = P(Sem_i, Syn_j, Lex_k | Words)
      = P(Sem_i | Syn_j, Lex_k, Words)     (semantic score)
        × P(Syn_j | Lex_k, Words)          (syntactic score)
        × P(Lex_k | Words)                 (lexical score)       (1)

where (Lex_k, Syn_j, Sem_i) refers to the kth set of lexical categories, the jth syntactic structure and the ith set of semantic annotations for the input Words. The three component functions are referred to as semantic score (S_sem), syntactic score (S_syn) and lexical score (S_lex), respectively. The global preference measure will be referred to as compositional score or simply as score. In particular, the semantic score accounts for the semantic preference on a given set of lexical categories and a particular syntactic structure for the sentence. Various formulations for the lexical score and syntactic score have been studied extensively in our previous works [Su 88, 89, 91b, Chiang 92] and in other literature. Hence, we will concentrate on the formulation for semantic score.

3. Semantic Tagging

Canonical Form of Semantic Representation

Given the formulation in Eqn. (1), first we will show how to extract the abstract objects (Sem_i, Syn_j, Lex_k) from a semantic representation. In general, a particular interpretation of a sentence can be represented by an annotated syntax tree (AST), which is a syntax tree annotated with feature structures in the tree nodes. Figure 1 shows an example of an AST. The annotated version of a node A is denoted as Ā = A[f_A] in the figure, where f_A is the feature structure associated with node A. Because an AST preserves both syntactic and semantic information, it can be converted to other deep structure representations easily. Therefore, without loss of generality, the AST representation will be used as the canonical form of semantic representation for preference evaluation. The techniques used here, of course, can be applied to other deep structure representations as well.

[Figure 1. Annotated Syntax Tree (AST) and Phrase Levels (PL): an AST for a four-word sentence w1 ... w4 with lexical categories c1 ... c4 and internal nodes A through G, together with its phrase levels L1 = {c1, c2, c3, c4}, L2 = {D, c2, c3, c4}, L3 = {D, E, c3, c4}, L4 = {B, c3, c4}, L5 = {B, F, c4}, L6 = {B, F, G}, L7 = {B, C}, L8 = {A}.]

The hierarchical AST can be represented by a set of phrase levels, such as L1 through L8 in Figure 1. Formally, a phrase level (PL) is a set of symbols corresponding to a sentential form of the sentence. The phrase levels in Figure 1 are derived from a sequence of rightmost derivations, which is commonly used in an LR parsing mechanism. For example, L5 and L4 correspond to the rightmost derivation B F c4 ⇒_rm B c3 c4. Note that the first phrase level L1 consists of all lexical categories c1 ... cn of the terminal words (w1 ... wn). A phrase level with each symbol annotated with its feature structure is called an annotated phrase level (APL). The i-th APL is denoted as F_i.
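Under our own encoding (a plain-Python sketch; the dict-valued feature structures and the particular feature names are invented), the phrase levels of Figure 1 and their annotated counterparts can be written down directly:

```python
# Phrase levels L1..L8 of Figure 1, from the lexical categories up to the root.
phrase_levels = [
    ['c1', 'c2', 'c3', 'c4'],     # L1
    ['D', 'c2', 'c3', 'c4'],      # L2
    ['D', 'E', 'c3', 'c4'],       # L3
    ['B', 'c3', 'c4'],            # L4
    ['B', 'F', 'c4'],             # L5
    ['B', 'F', 'G'],              # L6
    ['B', 'C'],                   # L7
    ['A'],                        # L8
]

def annotate_level(level, features):
    """Build the annotated phrase level for a phrase level: pair every symbol
    with its feature structure (an empty dict if none is known)."""
    return [(symbol, features.get(symbol, {})) for symbol in level]

# Toy feature structures; in the model these come from the lexicon and from
# feature composition as constituents are built.
features = {'B': {'head': 'f_B'}, 'F': {'head': 'f_F'}, 'c4': {'head': 'f_c4'}}
print(annotate_level(phrase_levels[4], features))   # the APL for L5
# [('B', {'head': 'f_B'}), ('F', {'head': 'f_F'}), ('c4', {'head': 'f_c4'})]
```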
For example, L5 in Figure 1 has an annotated phrase level F5 = {B [fB], F [fF], c4 [fc,]} as its 178 counterpart, where fc, is the atomic feature of the lexical category c4, which comes from the lexical item of the 4th word w4. With the above nota- tions, the score function can be re-formulated as follows: Score (Semi, Synj , Lexk, Words) - P (FT, L 7, c~ I,o7) n n = P (r~ n IL~ n , c 1 , wl ) x P(LT'Ic , wD x P (c, [w 1 ) (2) (semantic score) (syntactic score) (lexical score) where c]" (a short form for {cl ... c,,}) is the kth set of lexical categories (Lexk), /-,1" ({L] ... Lr,,}) is the jth syntactic structure (Synj), and rl m ({F1 ... Fro}) is the ith set of semantic annotations (Semi) for the input words wl" ({wl ... wn}). A good encoding scheme for the Fi's will allow us to take semantic information into account with- out using redundant information. Hence, we will show how to annotate a syntax tree so that various interpretations can be characterized differently. Semantic Tagging A popular linguistic approach to annotate a tree is to use a unification-based mechanism. How- ever, many information irrelevant to disambigua- tion might be included. An effective encod- ing scheme should be simple yet can preserve most discrimination information for disambigua- tion. Such an encoding scheme can be ac- complished by associating each phrase struc- ture rule A --+ X1X2... XM with a head list (Xi,,Xi,...XiM). The head list is formed by arranging the children nodes (X1,X2,...,XM) in descending order of importance to the compo- sitional semantics of their mother node A. For this reason, Xi~, Xi~ and Xi, are called the primary, secondary and the j-th heads of A, respectively. The compositional semantic features of the mother node A can be represented as an ordered list of the feature structures of its children, where the order is the same as in the head list. For example, for S ~ NP VP, we have a head list (VP, NP), be- cause VP is the (primary) head of the sentence. When composing the compositional semantics of S, the features of VP and NP will be placed in the first and second slots of the feature structure of S, respectively. Because not all children and all features in a feature structure am equally significant for dis- ambiguation, it is not really necessary to annotate a node with the feature structures of all its chil- dren. Instead, only the most important N chil- dren of a node is needed in characterizing the node, and only the most discriminative feature of a child is needed to be passed to its mother node. In other words, an N-dimensional feature vector, called a semantic N-tuple, could be used to char- acterize a node without losing much information for disambiguation. The first feature in the se- mantic N-tuple comes from the primary head, and is thus called the head feature of the semantic N- tuple. The other features come from the other children in the order of the head list. (Compare these notions with the linguistic sense of head and head feature.) An annotated node can thus be approximated as A ,~ A(fl,f2,... ,fN), where fj = HeadFeature X~7~,~) is the (primary) head feature of its j-th head (i.e., Xij) in the head list. Non-head features of a child node Xij will not be percolated up to its mother node. The head fea- ture of ~ itself, in this case, is fx. For a terminal node, the head feature will be the semantic tag of the corresponding lexical item; other features in the N-tuple will be tagged as ~b (NULL). 
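As a sketch of the percolation mechanism (our own encoding; the mini-grammar, its head lists, and the feature names are assumptions made for illustration, with N = 2 as in the annotations of Figure 2), head features can be propagated bottom-up as follows:

```python
N = 2   # dimension of the semantic N-tuple

def annotate(node, head_lists):
    """node is (category, children) for a phrase or (category, semantic_tag)
    for a word; returns (category, n_tuple, annotated_children)."""
    cat, rest = node
    if isinstance(rest, str):                  # lexical node: (tag, NULL, ...)
        return (cat, (rest,) + (None,) * (N - 1), [])
    kids = [annotate(child, head_lists) for child in rest]
    order = head_lists[(cat, tuple(k[0] for k in kids))]   # head list for this rule
    tup = tuple(kids[i][1][0] for i in order)[:N]          # primary head features only
    return (cat, tup + (None,) * (N - len(tup)), kids)

# Head lists for two invented rules: the verb heads VP, the head noun heads NP.
head_lists = {
    ('VP', ('V', 'NP')): (0, 1),     # head list (V, NP)
    ('NP', ('Det', 'N')): (1, 0),    # head list (N, Det)
}
vp = ('VP', [('V', 'sta'), ('NP', [('Det', 'def'), ('N', 'anim')])])
print(annotate(vp, head_lists)[1])   # ('sta', 'anim'), cf. the VP(sta,anim) annotation below
```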
Figure 2 shows two possible annotated syn- tax trees for the sentence "... saw the boy in the park." For instance, the "loc(ation)" feature of "park" is percolated to its mother NP node as the head feature; it then serves as the sec- ondary head feature of its grandmother node PP, because the NP node is the secondary head of PP. Similarly, the VP node in the left tree is an- notated as VP(sta,anim) according to its primary head saw(sta,q~) and secondary head NP(anim,in). The VP(sta,in) node in the fight tree is tagged dif- ferently, which reflects different attachment pref- erence of the prepositional phrase. By this simple mechanism, the major charac- teristics of the children, namely the head features, can be percolated to higher syntactic levels, and 179 sta: stative verb S S def." definite article . ~ . ~ ~ ~ . loc: location anim: animate °t(a-hlLz-h2) ~~(~-hl,~-h2) ot(¢X-hl~t~sta,in~.._ ~(~--h~,~-h~) N P ~ f ~ - ~)saw(:ta:(~d)e~:,~)in(in~,def) the(def'~,¢) boy(-y~(~m,¢)in(in,~) ~ the(def#)/#)~p~par~Nk(loc,¢) the(def,t~) park(loc#) Figure 2. Ambiguous PP attachment patterns annotated with semantic 2-tuples. their correlation and dependency can be taken into account in preference evaluation even if they are far apart. In this way, different interpretations will be tagged differently. The preference on a partic- ular interpretation can thus be evaluated from the distribution of the annotated syntax trees. Based on the above semantic tagging scheme, a seman- tic score will be proposed to evaluate the seman- tic preference on various interpretations for a sen- tence. Its performance improvement over syntac- tic score [Su 88, 89, 91b] will be investigated. Consequently, a brief review of the syntactic score evaluation method is given before going into de- tails of the semantic score model. (See the cited references for details.) 4. Syntactic Score According to Eqn. (2), the syntactic score can be formulated as follows [Su 88, 89, 91b]: S,y,, =_ P(SynilLeZk,W'~) = P(L'~lc'~,w~) (3) fti = HP(LtlL~-',c~,w~) 1=2 1-I P (L, IL',-') ~" II P(L'IL'-') = HP({o~t, A,, /3,} I{o,,, ~',}) 180 where at, fit are the left context and right context under which the derivation At =~ X1X2... XM occurs. (Assume that Lt = {at, At,fit} and LI-1 = {at,X1,"" ,XM,fil}.) If L left context symbols in al and R right context symbols in fit are consulted to evaluate the syntactic score, it is said to operate in LLRR mode of operation. When the context is ignored, such an LoRo mode of oper- ation reduces to a stochastic context-free grammar. To avoid the normalization problem [Su 91b] arisen from different number of transition prob- abilities for different syntax trees, an alternative formulation of the syntactic score is to evaluate the transition probabilities between configuration changes of the parser. For instance, the config- uration of an LR parser is defined by its stack contents and input buffer. For the AST in Figure 1, the parser configurations after the read of cl, c2, c3, c4 and $ (end-of-sentence) are equivalent to L1, L2, L4, 1-.5 and Ls, respectively. Therefore, the syntactic score can be approximated as [Su 89, 91b]: S, vn ~ P(Ls, LT'" L2IL,) (4) P(LslL~) x P(LsIL4) x P(L41L2) x P(L21L1) In this way, the number of transition probabilities in the syntactic scores of all AST's will be kept the same as the sentence length. 5. Semantic Score Semantic score evaluation is similar to syntactic score evaluation. From Eqn. 
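To preview how the pieces fit together before the component scores are defined, the toy ranking below contrasts the two analyses of Figure 2. Every probability in it is invented; the real component scores are the corpus-estimated quantities defined in the sections that follow. The two analyses share a lexical score, differ slightly in syntactic score, and are separated mainly by the semantic preference attached to VP(sta,anim) versus VP(sta,in).

```python
import math

def log_score(p_lex, p_syn, p_sem):
    """Log of the compositional score: lexical x syntactic x semantic."""
    return math.log(p_lex) + math.log(p_syn) + math.log(p_sem)

candidates = {
    # attachment      (p_lex, p_syn, p_sem)  -- all numbers invented
    'NP attachment': (0.80, 0.30, 0.040),    # VP annotated (sta, anim)
    'VP attachment': (0.80, 0.35, 0.008),    # VP annotated (sta, in)
}
best = max(candidates, key=lambda name: log_score(*candidates[name]))
print(best)   # 'NP attachment': the semantic preference outweighs the small
              # syntactic preference for VP attachment in this toy setting
```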
(2), we have the following semantic model for semantic score: S, em (Semi, Synj , Lex~:, Words) = p (p~n ILT, c~, w~) m I--1 ra n n = I"[P(F, IF1 ,L1 ,Cl,Wl) (5) 1=2 1"I P(r, lr,_l) = where 3~j am the semantic tags from the chil- dren of A1. For example, we have terms like e(VP(sta, anim) [ a, VP ~- v NP,fl) and P(VP(sta, in) la, Ve~v NP PP,fl),respec- fively, for the left and right trees in Figure 2. The annotations of the context am ignored in evalu- ating Eqn. (6) due to the assumption of seman- tics compositionality. The operation mode will be called LLRR+Alv, where N is the dimension of the N-tuple, and the subscript L (or R) refers to the size of the context window. With an appropriate N, the score will provide sufficient discrimination power for general disambiguation problem with- out resorting to full-blown semantic analysis. where At = At (ft,l,fln,...,fuv) is the anno- tated version of At, whose semantic N-tuple is (fl,1, fl,2,-", ft,N), and 57, fit are the annotated context symbols. Only Ft.1 is assumed to be sig- nificant for the transition to Ft in the last equa- tion, because all required information is assumed to have been percolated to Ft-j through semantics composition. Each term in Eqn. (5) can be interpreted as the probability thatAt is annotated with the partic- ular set of head features (fs,1, ft,2,..., fI,N), given that X1 ... XM are reduced to At in the context of a7 and fit. So it can be interpreted informally as P(At (fl,1, ft,2, . . . , fz ~v) I Ai ~ X1. . . XM , in the context of ~-7, fit ). It corresponds to the se- mantic preference assigned to the annotated node A t" Since (11,1, fl,~,"" ft,N) are the head features from various heads of the substructures of A, each term reflects the feature co-occurrence preference among these heads. Furthermore, the heads could be very far apart. This is different from most simple Markov models, which can deal with local constraints only. Hence, such a formulation well characterizes long distance dependency among the heads, and provides a simple mechanism to incor- porate the feature co-occurrence preference among them. For the semantic N-tuple model, the seman- tic score can thus be expressed as follows: S~.~ (6) m "~ I-[ P ( A* (ft,,, f,,2 " " " ft,N) la,,A, ,-- Xl " " gM,/~l) l=2 181 6. Major Categories and Semantic Features As mentioned before, not all constituents are equally important for disambiguation. For in- stance, head words are usually more important than modifiers in determining the compositional semantic features of their mother node. There is also lots of redundancy in a sentence. For in- stance, "saw boy in park" is equally recogniz- able as "saw the boy in the park." Therefore, only a few categories, including verbs, nouns, ad- jectives, prepositions and adverbs and their pro- jections (NP, VP, AP, PP, ADVP), are used to carry semantic features for disambiguation. These categories are roughly equivalent to the major cat- egories in linguistic theory [Sells 85] with the in- clusion of adverbs as the only difference. The semantic feature of each major category is encoded with a set of semantic tags that well describes each category. A few rules of thumb are used to select the semantic tags. In particular, semantic features that can discriminate different linguistic behavior from different possible seman- tic N-tuples are preferred as the semantic tags. With these heuristics in mind, the verbs, nouns, adjectives, adverbs and prepositions are divided into 22, 30, 14, 10 and 28 classes, respectively. 
For example, the nouns are divided into "human," "plant," "time," "space," and so on. These seman- tic classes come from a number of sources and the semantic attribute hierarchy of the ArchTran MTS [Su 90, Chen 91]. Table 1. Close Test of Semantic Score 7. Test and Analysis The semantic N-tuple model is used to test the improvement of the semantic score over syntactic score in structure disambiguation. Eqn. (3) is adopted to evaluate the syntactic score in L2RI mode of operation. The semantic score is derived from Eqn. (6) in L2R~ +AN mode, for N = 1, 2, 3, 4, where N is the dimension of the semantic S-tuple. A total of 1000 sentences (including 3 un- ambiguous ones) are randomly selected from 14 computer manuals for training or testing. They are divided into 10 parts; each part contains 100 sentences. In close tests, 9 parts are used both as the training set and the testing set. In open tests, the rotation estimation approach [Devijver 82] is adopted to estimate the open test perfor- mance. This means to iteratively test one part of the sentences while using the remaining parts as the training set. The overall performance is then estimated as the average performance of the 10 iterations. The performance is evaluated in terms of Top- N recognition rate (TNRR), which is defined as the fraction of the test sentences whose preferred interpretation is successfully ranked in the first N candidates. Table 1 shows the simulation re- suits of close tests. Table 2 shows partial results for open tests (up to rank 5.) The recognition rates achieved by considering syntactic score only and semantic score only are shown in the tables. (L2RI+A3 and L2RI+A4 performance are the same as L2R~+A2 in the present test environment. So they are not shown in the tables.) Since each sen- tence has about 70-75 ambiguous constructs on the average, the task perplexity of the current dis- ambiguation task is high. Score Rank 1 2 3 4 5 13 18 Syntax Semantics Semantics (L2R1) (L2RI+A1) (L2RI+A2) Count TNRR (%) 781 101 9 5 Count TNRR (%) 87.07 872 98.33 20 99.33 5 99.89 100.00 97.21 866 99.44 24 100.00 4 2 1 Count TNRR (%) 96.54 99.22 99.67 99.89 100.00 DataBase: 900 Sentences Test Set: 897 Sentences Total Number of Ambiguous Trees = 63233 (*) TNRR: Top-N Recognition Rate Table 2. Open Test of Semantic Score Score Syntax (L2R1) Rank Count TNRR (%) 1 430 43.13 2 232 66A0 3 94 75.83 4 80 83.85 5 35 87.36 Semantics (L2RI+A1) Count TNRR! (%) 569 57.07 163 73.42 90 82.45 50 87.46 22 89.67 Semantics (L2RI+A2) Count TNRR (%) 578 57.97 167 74.72 75 82.25 49 87.16 28 89.97 DataBase: 900 Sentences (+) Test Set: 997 Sentences (++) Total Number of Ambiguous Trees = 75339 (+) DataBase : effective database size for rotation estimation (++) Test Set : all test sentences participating the rotation estimation test 182 The close test Top-1 performance (Table 1) for syntactic score (87%) is quite satisfactory. When semantic score is taken into account, sub- stantial improvement in recognition rate can be observed further (97%). This shows that the se- mantic model does provide an effective mecha- nism for disambiguation. The recognition rates in open tests, however, are less satisfactory under the present test environment. The open test per- formance can be attributed to the small database size and the estimation error of the parameters thus introduced. Because the training database is small with respect to the complexity of the model, a significant fraction of the probability entries in the testing set can not be found in the training set. 
As a result, the parameters are somewhat "over- tuned" to the training database, and their values are less favorable for open tests. Nevertheless, in both close tests and open tests, the semantic score model shows substantial improvement over syntactic score (and hence stochastic context-free grammar). The improvement is about 10% for close tests and 14% for open tests. In general, by using a larger database and bet- ter robust estimation techniques [Su 91a, Chiang 92], the baseline model can be improved further. As we had observed from other experiments for spoken language processing [Su 91a], lexical tag- ging, and structure disambiguation [chiang 92], the performance under sparse data condition can be improved significantly if robust adaptive leam- ing techniques are used to adjust the initial param- eters. Interested readers are referred to [Su 91a, Chiang 92] for more details. 8. Concluding Remarks In this paper, a generalized probabilistic seman- tic model (GPSM) is proposed to assign semantic preference to ambiguous interpretations. The se- mantic model for measuring preference is based on a score function, which takes lexical, syntactic and semantic information into consideration and optimizes the joint preference. A simple yet effec- tive encoding scheme and semantic tagging proce- dure is proposed to characterize various interpreta- 183 tions in an N dimensional feature space. With this encoding scheme, one can encode the interpre- tations with discriminative features, and take the feature co-occurrence preference among various constituents into account. Unlike simple Markov models, long distance dependency can be man- aged easily in the proposed model. Preliminary tests show substantial improvement of the seman- tic score measure over syntactic score measure. Hence, it shows the possibility to overcome the ambiguity resolution problem without resorting to full-blown semantic analysis. With such a simple, objective and trainable formulation, it is possible to take high level se- mantic knowledge into consideration in statistic sense. It also provides a systematic way to con- struct a disambiguation module for large practical machine translation systems without much human intervention; the heavy burden for the linguists to write fine-grained "rules" can thus be relieved. REFERENCES [Brown 90] Brown, P. et al., "A Statistical Ap- proach to Machine Translation," Computational Linguistics, vol. 16, no. 2, pp. 79-85, June 1990. [Chen 91] Chen, S.-C., J.-S. Chang, J.-N. Wang and K.-Y. Su, "ArchTran: A Corpus-Based Statistics-Oriented English-Chinese Machine Translation System," Proceedings of Machine Translation Summit 11I, pp. 33-40, Washing- ton, D.C., USA, July 1-4, 1991. [Chiang 92] Chiang, T.-H., Y.-C. Lin and K.-Y. Su, "Syntactic Ambiguity Resolution Using A Discrimination and Robustness Oriented Adap- tive Leaming Algorithm", to appear in Pro- ceedings of COLING-92, 14th Int. Conference on Computational Linguistics, Nantes, France, 20-28 July, 1992. [Church 88] Church, K., "A Stochastic Parts Pro- gram and Noun Phrase Parser for Unrestricted Text," ACL Proc. 2nd Conf. on Applied Natu- ral Language Processing, pp. 136-143, Austin, Texas, USA, 9-12 Feb. 1988. [Church 89] Church, K. and P. Hanks, "Word As- sociation Norms, Mutual Information, and Lex- icography," Proc. 27th Annual Meeting of the ACL, pp. 76-83, University of British Colum- bia, Vancouver, British Columbia, Canada, 26- 29 June 1989. 
[DeRose 88] DeRose, SteverL J., "Grammatical Category Disambiguation by Statistical Opti- mization," Computational Linguistics, vol. 14, no. 1, pp. 31-39, 1988. [Devijver 82] Devijver, P.A., and J. Kittler, Pattern Recognition: A Statistical Approach, Prentice-Hall, London, 1982. [Fujisaki 89] Fujisaki, T., F. Jelinek, J. Cocke, E. Black and T. Nishino, "A Probabilistic Parsing Method for Sentence Disambiguation," Proc. of Int. Workshop on Parsing Technologies (IWPT- 89), pp. 85-94, CMU, Pittsburgh, PA, U.S.A., 28-31 August 1989. [Garside 87] Garside, Roger, Geoffrey Leech and Geoffrey Sampson (eds.), The Computational Analysis of English: A Corpus-Based Approach, Longman Inc., New York, 1987. [Liu 89] Liu, C.-L., On the Resolution of English PP Attachment Problem with a Probabilistic Se- mantic Model, Master Thesis, National Tsing Hua University, Hsinchu, TAIWAN, R.O.C., 1989. [Liu 90] Liu, C.-L, J.-S. Chang and K.-Y. Su, "The Semantic Score Approach to the Disam- biguation of PP Attachment Problem," Proc. of • ROCLING-III, pp. 253-270, Taipei, R.O.C., September 1990. [Sells 85] Sells, Peter, Lectures On Con- temporary Syntactic Theories: An Introduc- tion to Government-Binding Theory, General- ized Phrase Structure Grammar, and Lexical- Functional Grammar, CSLI Lecture Notes Number 3, Center for the Study of Language and Information, Leland Stanford Junior Uni- versity., 1985. [Su 88] Su, K.-Y. and J.-S. Chang, "Semantic and Syntactic Aspects of Score Function," Proc. of COLING-88, vol. 2, pp. 642-644, 12th Int. Conf. on Computational Linguistics, Budapest, Hungary, 22-27 August 1988. [Su 89] Su, K.-Y., J.-N. Wang, M.-H. Su and J.-S. Chang, "A Sequential Truncation Parsing Algo- rithm Based on the Score Function," Proc. of Int. Workshop on Parsing Technologies (IWPT- 89), pp. 95-104, CMU, Pittsburgh, PA, U.S.A., 28-31 August 1989. [Su 90] Su, K.-Y. and J.-S. Chang, "Some Key Issues in Designing MT Systems," Machine Translation, vol. 5, no. 4, pp. 265-300, 1990. [Su 91a] Su, K.-Y., and C.-H. Lee, "Robusmess and Discrimination Oriented Speech Recog- nition Using Weighted HMM and Subspace Projection Approach," Proceedings of IEEE ICASSP-91, vol. 1, pp. 541-544, Toronto, On- tario, Canada. May 14-17, 1991. [Su 91b] Su, K.-Y., J.-N. Wang, M.-H. Su, and J.- S. Chang, "GLR Parsing with Scoring". In M. Tomita (ed.), Generalized LR Parsing, Chapter 7, pp. 93-112, Kluwer Academic Publishers, 1991. [Wilks 83] Wilks, Y. A., "Preference Semantics, Ul-Formedness, and Metaphor," AJCL, vol. 9, no. 3-4, pp. 178 - 187, July - Dec. 1983. 184
1992
23
Development and Evaluation of a Broad-Coverage Probabilistic Grammar of English-Language Computer Manuals Ezra Black John Lafferty Salim Roukos <black I j laff ] roukos>*watson, ibm. tom IBM Thomas J. Watson Research Center, P.O. Box 704, Yorktown Heights, New York 10598 ABSTRACT We present an approach to grammar development where the task is decomposed into two separate subtasks. The first task is hnguistic, with the goal of producing a set of rules that have a large coverage (in the sense that the correct parse is among the proposed parses) on a bhnd test set of sentences. The second task is statistical, with the goal of developing a model of the grammar which assigns maximum probability for the correct parse. We give parsing results on text from computer manuals. 1. Introduction Many language understanding systems and machine translation systems rely on a parser of English as the first step in processing an input sentence. The general impres- sion may be that parsers with broad coverage of English are readily available. In an effort to gauge the state of the art in parsing, the authors conducted an experiment in Summer 1990 in which 35 sentences, all of length 13 words or less, were selected randomly from a several-million- word corpus of Associated Press news wire. The sentences were parsed by four of the major large-coverage parsers for general English. 1 Each of the authors, working sep- arately, scored 140 parses for correctness of constituent boundaries, constituent labels, and part-of-speech labels. All that was required of parses was accuracy in delim- iting and identifying obvious constituents such as noun phrases, prepositional phrases, and clauses, along with at least rough correctness in assigning part-of-speech labels, e.g. a noun could not be labelled as a verb. The tallies of each evaluator were compared, and were identical or very close in all cases. The best-performing parser was correct for 60% of the sentences and the the remaining parsers were below 40%. More recently, in early 1992, the cre- ator of another well-known system performed self-scoring on a similar task and reported 30% of input sentences as having been correctly parsed. On the basis of the pre- ceeding evidence it seems that the current state of the t At least one of the parties involved insisted that no perfor- mance results be made public. Such reticence is widespread and understandable. However, it is nonetheless important that perfor- mance norms be established for the field. Some progress has been made in this direction [3, 4]. art is far from being able to produce a robust parser of general English. In order to break through this bottleneck and begin making steady and quantifiable progress toward the goal of developing a highly accurate parser for general En- glish, organization of the grammar-development process along scientific lines and the introduction of stochastic modelling techniques are necessary, in our view. We have initiated a research program on these principles, which we describe in what follows. An account of our overall method of attacking the problem is presented in Section 2. The grammar involved is discussed in Section 3. Sec- tion 4 is concerned with the statistical modelling methods we employ. Finally, in Section 5, we present our experi- mental results to date. 2. Approach Our approach to grammar development consists of the following 4 elements: • Selection of application domain. • Development of a manually-bracketed corpus (tree- bank) of the domain. 
• Creation of a grammar with a large coverage of a blind test set of treebanked text. Statistical modeling with the goal that the cor- rect parse be assigned maximum probability by the stochastic grammar. We now discuss each of these elements in more detail. Application domain: It would be a good first step toward our goal of covering general English to demon- strate that we can develop a parser that has a high pars- ing accuracy for sentences in, say, any book listed in Books In Print concerning needlework; or in any whole- sale footwear catalog; or in any physics journal. The se- lected domain of focus should allow the acquisition of a naturally-occuring large corpus (at least a few million words) to allow for realistic evaluation of performance and 185 Fa Adverbial Phrase Fc Comparative Phrase Fn Nominal Clause Fr Relative Clause G Possessive Phrase J Adjectival Phrase N Noun Phrase Nn Nominal Proxy Nr Temporal Noun Phrase Nv Adverbial Noun Phrase P Prepositional Phrase S Full Sentence Si Sentential Interrupter Tg Present Participial Clause Ti Infinitival Clause Tn Past Participial Clause V Verb Phrase NULL Other Table 1: Lancaster constituent labels adequate amounts of data to characterize the domain so that new test data does not surprise system developers with a new set of phenomena hitherto unaccounted for in the grammar. We selected the domain of computer manuals. Be- sides the possible practical advantages to being able to assign valid parses to the sentences in computer manu- als, reasons for focusing on this domain include the very broad but not unrestricted range of sentence types and the availability of large corpora of computer manuals. We amassed a corpus of 40 million words, consisting of several hundred computer manuals. Our approach in attacking the goal of developing a grammar for computer manuals is one of successive approximation. As a first approxima- tion to the goal, we restrict ourselves to sentences of word length 7 - 17, drawn from a vocabulary consisting of the 3000 most frequent words (i.e. fully inflected forms, not lcmmas) in a 600,000-word subsection of our corpus. Ap- proximately 80% of the words in the 40-million-word cor- pus are included in the 3000-word vocabulary. We have available to us about 2 million words of sentences com- pletely covered by the 3000-word vocabulary. A lexicon for this 3000-word vocabulary was completed in about 2 months. Treebank: A sizeable sample of this corpus is hand- parsed ("treebanked"). By definition, the hand parse ("treebank parse") for any given sentence is considered AT1 CST CSW JJ NN1 PPH1 PPY RR VBDZ VVC VVG Singular Article (a, every) that as Conjunction whether as Conjunction General Adjective (free, subsequent) Singular Common Noun (character, site) the Pronoun "it" the Pronoun "you" General Adverb (exactly, manually) "was" Imperative form of Verb (attempt, proceed) -ing form of Verb (containing, powering) Table 2: Sample of Lancaster part-of-speech labels its "correct parse" and is used to judge the grammar's parse. To fulfill this role, treebank parses are constructed as "skeleton parses," i.e. so that all obvious decisions are made as to part-of-speech labels, constituent bound- aries and constituent labels, but no decisions are made which are problematic, controversial, or of which the tree- bankers are unsure. Hence the term "skeleton parse": clearly not all constituents will always figure in a tree- bank parse, but the essential ones always will. In practice, these are quite detailed parses in most cases. 
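The skeleton parses are delivered in the labelled bracket notation used by the Lancaster team (constituents open with a label such as [N and close with N], with each word carrying a part-of-speech tag). As a rough illustration only, and not the authors' tooling, the Python sketch below reads a whitespace-tokenized bracket string of that kind into labelled word spans; it assumes every token is either an opening bracket, a closing bracket, or a word_TAG pair, and ignores the rarer annotation marks.

    def read_skeleton_parse(tokens):
        # tokens: e.g. ['[N', 'the_AT1', 'cause_NN1', 'N]']
        # returns (label, (i, j)) word-index spans for each constituent
        constituents, stack, i = [], [], 0
        for tok in tokens:
            if tok.startswith('['):
                stack.append((tok[1:], i))           # open a constituent, remember its label
            elif tok.endswith(']'):
                label, start = stack.pop()           # close the most recent open constituent
                constituents.append((label, (start, i - 1)))
            else:
                i += 1                               # an ordinary word_TAG token
        return constituents

    # read_skeleton_parse('[N the_AT1 cause_NN1 N]'.split()) -> [('N', (0, 1))]

The spans recovered in this way are the objects over which the consistency checks of the next section are stated.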
The 18 con- stituent labels 2 used in the Lancaster treebank are listed and defined in Table 1. A sampling of the approximately 200 part-of-speech tags used is provided in Table 2. To date, roughly 420,000 words (about 35,000 sen- tences) of the computer manuals material have been tree- banked by a team at the University of Lancaster, Eng- land, under Professors Geoffrey Leech and Roger Gar- side. Figure 1 shows two sample parses selected at ran- dom from the Lancaster Treebank. The treebank is divided into a training subcorpus and a test subcorpus. The grammar developer is able to in- spect the training dataset at will, but can never see the test dataset. This latter restriction is, we feel, crucial for making progress in grammar development. The purpose of a grammar is to correctly analyze previously unseen sentences. It is only by setting it to this task that its true accuracy can be ascertained. The value of a large bracketed training corpus is that it allows the grammar- ian to obtain quickly a very large 3 set of sentences that 2Actually there are 18 x 3 = 54 labels, as each label L has vari- ants LA: for a first conjunct, and L-{- for second and later conjuncts, of type L: e.g. [N[Ng~ the cause NSz] and [Nq- the appropriate action N-k]N]. 3 We discovered that the grammar's coverage (to be defined later) of the training set increased quickly to above 98% as soon as the grammarian identified the problem sentences. So we have been 186 IN It_PPH1 N] [V indicates_VVZ [Fn [Fn&whether_CSW [N a_AT1 call_NN1 N] [V completed_VVD successfully_RR V]Fn&] or_CC [Fn+ if_CSW IN some_DD error_NN1 N]@ [V was_VBDZ detected_VVN V] @[Fr that_CST [V caused_VVD [N the_AT call_NNl N] [Ti to_TO fail_VVI Wi]V]Fr]Fn+] Fn]V]._. [Fa If_CS [N you_PPY N] IV were_VBDR using_VVG [N a_AT1 shared_JJ folder_NN1 N]V]Fa] , -, IV include_VVC IN the_AT following_JJ N]V]:_: Figure 1: Two sample bracketed sentences from Lan- caster Treebank. the grammar fails to parse. We currently have about 25,000 sentences for training. The point of the treebank parses is to constitute a "strong filter," that is to eliminate incorrect parses, on the set of parses proposed by a grammar for a given sen- tence. A candidate parse is considered to be "accept- able" or "correct" if it is consistent with the treebank parse. We define two notions of consistency: structure- consistent and label-consistent. The span of a consitituent is the string of words which it dominates, denoted by a pair of indices (i, j) where i is the index of the leftmost word and j is the index of the rightmost word. We say that a constituent A with span (i, j) in a candidate parse for a sentence is structure-consistent with the treebank parse for the same sentence in case there is no constituent in the treebank parse having span (i', j') satisfying i' < i < j' < j or i < i' < j < j'. In other words, there can be no "crossings" of the span of A with the span of any treebank non-terminal. A grammar parse is structure-consistent with the treebank parse if all of its constituents are structure-consistent with the treebank parse. continuously increasing the training set as more data is treebanked. The notion of label-consistent requires in addition to structure-consistency that the grammar constituent name is equivalent 4 to the treebank non-terminal label. The following example will serve to illustrate our con- sistency criteria. 
We compare a "treebank parse": [NT1 [NT2 wl_pl w2_p2 NT2] [NT3 w3_p3 w4_p4 w5_p5 NT3]NT1] with a set of "candidate parses": [NT1 [NT2 wl_pl w2_p2 NT2] [NT3 w3_p3 [NT4 w4_p4 w5_p5 NT4]NT3]NTI] [NT1 [NT2 wl_p6 w2_p2 NT2] [NT5 w3_p9 w4_p4 w5_p5 NT5]NTI] [NTI wl_pl [NT6 b_p2 w3_p15 NT6][NT7 w4_p4 w5_p5 NTT]NTI] For the structure-consistent criterion, the first and sec- ond candidate parses are correct, even though the first one has a more detailed constituent spanning (4, 5). The third is incorrect since the constituent NT6 is a case of a crossing bracket. For the label-consistent criterion, the first candidate parse is the only correct parse, because it has all of the bracket labels and parts-of-speech of the treebank parse. The second candidate parse is incorrect, since two of its part-of-speech labels and one of its bracket labels differ from those of the treebank parse. Grammar writing and statistical estimation: The task of developing the requisite system is factored into two parts: a linguistic task and a statistical task. The linguistic task is to achieve perfect or near- perfect coverage of the test set. By this we mean that among the n parses provided by the parser for each sentence of the test dataset, there must be at least one which is consistent with the treebank ill- ter. s To eliminate trivial solutions to this task, the grammarian must hold constant over the course of development the geometric mean of the number of parses per word, or equivalently the total number of parses for the entire test corpus. The statistical task is to supply a stochastic model for probabilistically training the grammar such that the parse selected as the most likely one is a correct parse. 6 4See Section 4 for the definition of a many-to-many mapping be- tween grammar and trcebank non-terminals for determining equiv- Mence of non-termlnals. SWe propose this sense of the term coverage as a replacement for the sense in current use, viz. simply supplying one or more parses, correct or not, for some portion of a given set of sentences. 6Clcarly the grammarian can contribute to this task by, among other things, not just holding the average number of parses con- "I 87 The above decomposition into two tasks should lead to better broad-coverage grammars. In the first task, the grammarian can increase coverage since he can examine examples of specific uncovered sentences. In the second task, that of selecting a parse from the many parses pro- posed by a grammar, can best be done by maximum like- lihood estimation constrained by a large treebank. The use of a large treebank allows the development of sophisti- cated statistical models that should outperform the tra- ditional approach of using human intuition to develop parse preference strategies. We describe in this paper a model based on probabilistic context-free grammars es- timated with a constrained version of the Inside-Outside algorithm (see Section 4)that can be used for picking a parse for a sentence. In [2], we desrcibe a more sophisti- cated stochastic grammar that achieves even higher pars- ing accuracy. 3. Grammar Our grammar is a feature-based context-free phrase structure grammar employing traditional syntactic cate- gories. Each of its roughly 700 "rules" is actually a rule template, compressing a family of related productions via unification. 7 Boolean conditions on values of variables occurring within these rule templates serve to limit their ambit where necessary. 
To illustrate, the rule template below s f2 : V1 ~ f2 : V1 f2 : V1 f3 : V2 f3 : V3 f3 : V2 where (V2 = dig [h) & (V3 # ~) imposes agreement of the children with reference to fea- ture f2, and percolates this value to the parent. Accept- able values for feature f3 are restricted to three (d,g,h) for the second child (and the parent), and include all possi- ble values for feature f3 ezeept k, for the first child. Note that the variable value is also allowed in all cases men- tioned (V1,V2,V3). If the set of licit values for feature f3 is (d,e,f,g,h,i,j,k,1}, and that for feature f2 is {r,s}, then, allowing for the possibility of variables remaining as such, the rule template above represents 3*4*9 = 108 different rules. If the condition were removed, the rule template would stand for 3"10"10 = 300 different rules. stunt, but in fact steadily reducing it. The importance of this contribution will ultimately depend on the power of the statisti- cal models developed after a reasonable amount of effort. Unification is to be understood in this paper in a very limited sense, which is precisely stated in Section 4. Our grammar is not a unification grammar in the sense which is most often used in the literature. awhere fl,f2,f3 are features; a,b,c are feature values; and V1,V2,V3 are variables over feature values While a non-terminal in the above grammar is a fea- ture vector, we group multiple non-terminals into one class which we call a mnemonic, and which is represented by the least-specified non-terminal of the class. A sample mnemonic is N2PLACE (Noun Phrase of semantic cate- gory Place). This mnemonic comprises all non-terminals that unify with: I pos :n ] barnum : two details : place including, for instance, Noun Phrases of Place with no determiner, Noun Phrases of Place with various sorts of determiner, and coordinate Noun Phrases of Place. Mnemonics are the "working nonterminals" of the gram- mar; our parse trees are labelled in terms of them. A production specified in terms of mnemonics (a mnemonic production) is actually a family of productions, in just the same way that a mnemonic is a family of non-terminals. Mnemonics and mnemonic productions play key roles in the stochastic modelling of the grammar (see below). A recent version of the grammar has some 13,000 mnemon- ics, of which about 4000 participated in full parses on a run of this grammar on 3800 sentences of average word length 12. On this run, 440 of the 700 rule tem- plates contributed to full parses, with the result that the 4000 mnemonics utilized combined to form approximately 60,000 different mnemonic productions. The grammar has 21 features whose range of values is 2 - 99, with a median of 8 and an average of 18. Three of these features are listed below, with the function of each: det_pos degree noun_pronoun Determiner Subtype Degree of Comparison Nominal Subtype Table 3: Sample Grammatical Features To handle the huge number of linguistic distinctions required for real-world text input, the grammarian uses many of the combinations of the feature set. A sample rule (in simplified form) illustrates this: pos : j barnum : one details : V1 degree : V3 pos : j barnum : zero details : V1 degree : V3 This rule says that a lexical adjective parses up to an ad- jective phrase. The logically primary use of the feature "details" is to more fully specify conjunctions and phrases 188 involving them. 
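The counts of 108 and 300 rules for the sample template follow mechanically from the number of admissible instantiations of each feature slot, with an uninstantiated variable counting as one additional allowed value. The short enumeration below, which is only an illustration of that arithmetic and not part of the grammar tools, reproduces both figures; the value sets are taken from the example above.

    from itertools import product

    F2_VALUES = {'r', 's'}
    F3_VALUES = {'d', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l'}
    VAR = '_'                       # a variable left uninstantiated

    def count_rules(with_condition=True):
        # v1: shared f2 value (V1); v2: f3 of the second child and parent (V2);
        # v3: f3 of the first child (V3)
        n = 0
        for v1, v2, v3 in product(F2_VALUES | {VAR},
                                  F3_VALUES | {VAR},
                                  F3_VALUES | {VAR}):
            if with_condition:
                if v2 not in {'d', 'g', 'h', VAR}:   # (V2 = d|g|h), variable still allowed
                    continue
                if v3 == 'k':                        # (V3 != k)
                    continue
            n += 1
        return n

    # count_rules(True)  -> 3 * 4 * 9   = 108
    # count_rules(False) -> 3 * 10 * 10 = 300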
Typical values, for coordinating conjunc- tions, are "or" and "but"; for subordinating conjunctions and associated adverb phrases, they include e.g. "that" and "so." But for content words and phrases (more pre- cisely, for nominal, adjectival and adverbial words and phrases), the feature, being otherwise otiose, carries the semantic category of the head. The mnemonic names incorporate "semantic" cate- gories of phrasal heads, in addition to various sorts of syntactic information (e.g. syntactic data concerning the embedded clause, in the case of "that-clauses"). The "se- mantics" is a subclassification of content words that is designed specifically for the manuals domain. To provide examples of these categories, and also to show a case in which the semantics succeeded in correctly biasing the probabilities of the trained grammar, we contrast (simpli- fied) parses by an identical grammar, trained on the same data (see below), with the one difference that semantics was eliminated from the mnemonics of the grammar that produced the first parse below. [SC[V1 Enter [N2[N2 the name [P1 of the system P1]N2][SD you [V1 want [V2 to [V1 connect [P1 to P 1]V1]V2]V1]SD]N2]V1]SC]. [SCSEND-ABS-UNIT[V1SEND-ABS-UNIT Enter [N2ABS-UNIT the name [P1SYSTEMOF of [N2SYSTEM the system [SDORGANIZE-PERSON you [V1ORGANIZE want [V2ORGANIZE to con- nect [P1WO to P1]V2]V1]SD]N2]P1]N2]V1]SC]. What is interesting here is that the structural parse is different in the two cases. The first case, which does not match the treebank parse 9 parses the sentence in the same way as one would understand the sentence, "En- ter the chapter of the manual you want to begin with." In the second case, the semantics were able to bias the statistical model in favor of the correct parse, i.e. one which does match the treebank parse. As an experiment, the sentence was submitted to the second grammar with a variety of different verbs in place of the original verb "connect", to make sure that it is actually the semanitc class of the verb in question, and not some other factor, that accounts for the improvement. Whenever verbs were substituted that were licit syntatically but not semanti- cally (e.g. adjust, comment, lead) the parse was as in the first case above. Of course other verbs of the class "OR- GANIZE" were associated with the correct parse, and verbs that did were not even permitted syntactically oc- casioned the incorrect parse. We employ a lexical preprocessor to mark multiword 9 [V Enter [N the name [P of [N the system [Fr[N you ][V want [Wl to connect [P to ]]]]]]]]. 189 units as well as to license unusual part-of-speech assign- ments, or even force labellings, given a particular context. For example, in the context: "How to:", the word "How" can be labelled once and for all as a General Wh-Adverb, rather than a Wh-Adverb of Degree (as in, "How tall he is getting!"). Three sample entries from our lexicon follow: "Full-screen" is labelled as an adjective which full-screen JSCREEN-PTB* Hidden VALTERN* 1983 NRSG* M-C-* Table 4: Sample lexical entries usually bears an attributive function, with the semantic class "Screen-Part". "Hidden" is categorized as a past participle of semantic class "Alter". "1983" can be a temporal noun (viz. a year) or else a number. Note that all of these classifications were made on the basis of the examination of concordances over a several-hundred- thousand-word sample of manuals data. Possible uses not encountered were in general not included in our lexicon. 
Our approach to grammar development, syntactical as well as lexical, is frequency-based. In the case of syn- tax, this means that, at any given time, we devote our attention to the most frequently-occurring construction which we fail to handle, and not the most "theoretically interesting" such construction. 4. Statistical Training and Evaluation In this section we will give a brief description of the procedures that we have adopted for parsing and training a probabilistic model for our grammar. In parsing with the above grammar, it is necessary to have an efficient way of determining if, for example, a particular feature bundle A = (AI, A2,...,AN) can be the parent of a given production, some of whose features are expressed as variables. As mentioned previously, we use the term unification to denote this matching procedure, and it is defined precisely in figure 2. In practice, the unification operations are carried out very efficiently by representing bundles of features as bit- strings, and realizing unification in terms of logical bit operations in the programming language PL.8 which is similar to C. We have developed our own tools to translate the rule templates and conditions into PL.8 programs. A second operation that is required is to partition the set of nonterminals, which is potentially extremely large, into a set of equivalence classes, or mnemonics, as mentioned earlier. In fact, it is useful to have a tree, which hierarchically organizes the space of possible fea- UNIFY(A, B): do for each feature f if not FEATURE_UNIFY(A/, B/) then return FALSE return TRUE FEATURE_UNIFY(a, b): if a -- b then return TRUE else if a is variable or b is variable then return TRUE return FALSE Figure 2 ture bundles into increasingly detailed levels of semantic and syntactic information. Each node of the tree is it- self represented by a feature bundle, with the root being the feature bundle all of whose features are variable, and with a decreasing number of variable features occuring as a branch is traced from root to leaf. To find the mnemonic .A4(A) assigned to an arbitrary feature bundle A, we find the node in the mnemonic tree which corresponds to the smallest mnemonic that contains (subsumes) the feature bundle A as indicated in Fugure 3. .A4(A): n = root_of_mnemonic_tree return SEARCH_SUBTREE(n, A) SEARCH_SUBTREE(n, A) do for each child m of n if Mnemonic(m) contains A then return SEARCH_SUBTREE(m, A) return Mnemonic(n) Figure 3 Unconstrained training: Since our grammar has an extremely large number of non-terminals, we first de- scribe how we adapt the well-known Inside-Outside algo- rithm to estimate the parameters of a stochastic context- free grammar that approximates the above context-free grammar. We begin by describing the case, which wc call unconstrained training, of maximizing the likelihood of an unbrackctcd corpus. We will later describe the modifica- tions necessary to train with the constraint of a bracketed corpus. To describe the training procedure we have used, we will assume familiarity with both the CKY algorithm [?] and the Inside-Outside algorithm [?], which we have adapted to the problem of training our grammar. The main computations of the Inside-Outside algorithm are indexed using the CKY procedure which is a bottom-up chart parsing algorithm. To summarize the main points 190 in our adaptation of these algorithms, let us assume that the grammar is in Chomsky normal form. The general case involves only straight-forward modifications. 
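The unification test of Figure 2 is said to be realized over bit-string encodings of feature bundles using logical bit operations (in PL.8). A rough Python analogue is sketched below under the assumption of a one-bit-per-value encoding in which a variable sets every bit of its feature's field; the field layout and feature names are invented for illustration and do not reflect the actual encoding.

    # Illustrative layout: one bit per licit value of each feature, packed into an int.
    FIELDS = {
        'f2': 0b11,                 # two values, e.g. {r, s}; variable = 0b11
        'f3': 0b111111111 << 2,     # nine values, e.g. {d, ..., l}; variable = all nine bits
    }

    def encode(f2_bits, f3_bits):
        return f2_bits | (f3_bits << 2)

    def unify(a, b):
        # With one bit set per instantiated value and all bits set for a variable,
        # "equal, or either is a variable" reduces to a nonzero intersection per field.
        meet = a & b
        return all(meet & mask for mask in FIELDS.values())

    # unify(encode(0b01, 0b000000001), encode(0b11, 0b000000001)) -> True  (variable f2)
    # unify(encode(0b01, 0b000000001), encode(0b10, 0b000000001)) -> False (f2 values clash)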
Pro- ceeding in a bottom-up fashion, then, we suppose that we have two nonterminals (bundles of features) B and C, and we find all nonterminals A for which A -~ B C is a production in the grammar. This is accomplished by using the unfication operation and checking that the relevent Boolean conditions are satisfied for the nonter- minals A, B, and C. Having found such a nonterminal, the usual Inside- Outside algorithm requires a recursive update of the Inside probabilities IA(i,j) and outside probabilities OA(i , j) that A spans (i, j). These updates involve the probability parameter PrA(A ---* B C). In the case of our feature-based grammar, however, the number of such parameters would be extremely large (the grammar can have on the order of few billion non- terminals). We thus organize productions into the equiv- alence classes induced by the mncmomic classes on the non-terminals. The update then uses mnemonic produc- tions for the stochastic grammar using the parameter PrM(A)(J~4(B) --) A4(C) A4(C)). Of course, for lexical productions A --) w we use the corresponding probability Pr~(A)(jVI(A ) -~ w) in the event that we are rewriting not a pair of nontermi- nals, but a word w. Thus, probabilities are expressed in terms of the set of mnemonics (that is, by the nodes in the mnemonic tree), rather that in terms of the actual nonterminals of the grammar. It is in this manner that we can obtain efficient and reliable estimates of our parameters. Since the grammar is very detailed, the mnemonic map JUt can be increasingly refined so that a greater number of lin- guistic phenomena are caputured in the probabilities. In principle, this could be carried out automatically to de- termine the optimum level of detail to be incorporated into the model, and different paramcterizations could be smoothed together. To date, however, we have only con- tructed mnemonic maps by hand, and have thus experi- mented with only a small number of paramcterizations. Constrained training: The Inside-Outside algo- rithm is a special case of the general EM algorithm, and as such, succssive iteration is guaranteed to converge to a set of parameters which locally maximize the likelihood of generating the training corpus. We have found it use- ful to employ the trccbank to supervise the training of these parameters. Intuitively, the idea is to modify the algorithm to locally maximize the likelihood of generat- ing the training corpus using parses which are "similar" to the treebank parses. This is accomplished by only collecting statistics over those parses which are consis- tent with the treebank parses, in a manner which we will now describe. The notion of label-consistent is defined by a (many-to-many) mapping from the mnemonics of the feature-based grammar to the nonterminal labels of the treebank grammar. For example, our grammar main- tains a fairly large number of semantic classes of singular nouns, and it is natural to stipulate that each of them is label-consistent with the nonterminal NI~I denoting a generic singular noun in the treebank. Of course, to ex- haustively specify such a mapping would be rather time consuming. In practice, the mapping is implemented by organizing the nonterminals hierarchically into a tree, and searching for consistency in a recursive fashion. The simple modification of the CKY algorithm which takes into account the treebank parse is, then, the follow- ing. 
Given a pair of nonterminals B and C in the CKY chart, if the span of the parent is not structure-consistent then this occurence of B C cannot be used in the parse and we continue to the next pair. If, on the other hand, it is structure-consistent then we find all candidate parents A for which A ~ B C is a production of the grammar, but include only those that are label-consistent with the treebank nonterminal (if any) in that position. The prob- abilities are updated in exactly the same manner as for the standard Inside-Outside algorithm. The procedure that we have described is called constrained training, and it significantly improves the effectiveness of the parser, providing a dramatic reduction in computational require- ments for parameter estimation as well as a modest im- provement in parsing accuracy. Sample mappings from the terminals and non- terminals of our grammar to those of the Lancaster tree- bank are provided in Table 5. For ease of understanding, we use the version of our grammar in which the semantics are eliminated from the mnemonics (see above). Category names from our grammar are shown first, and the Lan- caster categories to which they map are shown second: The first case above is straightforward: our prepositional-phrase category maps to Lancaster's. In the second case, we break down the category Relative Clause more finely than Lancaster does, by specifying the syntax of the embedded clause (e.g. FRV2: "that opened the adapter"). The third case relates to rela- tive clauses lacking prefatory particles, such as: "the row you are specifying"; we would call "you are specifying" an SD (Declarative Sentence), while Lancaster calls it an Fr (Relative Clause). Our practice of distinguishing con- stituents which function as interrupters from the same constituents tout court accounts for the fourth case; the category in question is Infinitival Clause. Finally, we gen- erate attributive adjectives (JB) directly from past par- ticiples (VVN) by rule, whereas Lancaster opts to label as adjectives (J J) those past participles so functioning. 5. Experimental Results We report results below for two test sets. One (Test Set A) is drawn from the 600,000-word subsection of our corpus of computer manuals text which we referred to above. The other (Test Set B) is drawn from our full 40- million-word computer manuals corpus. Due to a more or less constant error rate of 2.5% in the treebank parses themselves, there is a corresponding built-in margin of er- ror in our scores. For each of the two test sets, results are presented first for the linguistic task: making sure that a correct parse is present in the set of parses the grammar proposes for each sentence of the test set. Second, results are presented for the statistical task, which is to ensure that the parse which is selected as most likely, for each sentence of the test set, is a correct parse. 
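The constrained filling of a CKY cell just described amounts to two filters: reject any span that crosses a treebank bracket, and, where the treebank has a bracket over the span, keep only parents whose mnemonic is label-consistent with it. The sketch below restates that step; it is a schematic reading of the procedure, with the treebank given as a dict from spans to labels, and with parents_of and consistent_labels standing in for the grammar's unification step and the many-to-many label mapping.

    def constrained_parents(span, B, C, parents_of, treebank_spans, consistent_labels):
        # span: (i, j) of the proposed parent; treebank_spans: {(i, j): label} gold brackets
        i, j = span
        if any((ip < i <= jp < j) or (i < ip <= j < jp)
               for (ip, jp) in treebank_spans):          # crossing bracket: discard outright
            return []
        gold = treebank_spans.get(span)                  # None if the treebank is silent here
        return [A for A in parents_of(B, C)              # unification + Boolean conditions
                if gold is None or consistent_labels(A, gold)]

Inside-Outside counts are then accumulated only over the (A -> B C) applications that survive this filter, which is what yields both the reduction in computation and the supervision of the parameter estimates reported above.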
Number of Sentences 935 Average Sentence Length 12 Range of Sentence Lengths 7-17 Correct Parse Present 96% Correct Parse Most Likely 73% Table 6: Results for Test Set A P1 P FRV2 Fr SD Fr IANYTI Ti JBVVN* :lJ Table 5: Sample of grammatical category mappings Number of Sentences 1105 Average Sentence Length 12 Range of Sentence Lengths 7-17 Correct Parse Present 95% Correct Parse Most Likely 75% Table 7: Results for Test Set B 191 Recall (see above) that the geometric mean of the number of parses per word, or equivalently the total num- ber of parses for the entire test set, must be held con- stant over the course of the grammar's development, to eliminate trivial solutions to the coverage task. In the roughly year-long period since we began work on the com- puter manuals task, this average has been held steady at roughly 1.35 parses per word. What this works out to is a range of from 8 parses for a 7-word sentence, through 34 parses for a 12-word sentence, to 144 parses for a 17-word sentence. In addition, during this development period, performance on the task of picking the most likely parse went from 58% to 73% on Test Set A. Periodic results on Test Set A for the task of providing at least one correct parse for each sentence are displayed in Table 8. We present additional experimental results to show that our grammar is completely separable from its accom- panying "semantics". Note that semantic categories are not "written into" the grammar; i.e., with a few minor exceptions, no rules refer to them. They simply perco- late up from the lexical items to the non-terminal level, and contribute information to the mnemonic productions which constitute the parameters of the statistical training model. An example was given in Section 3 of a case in which the version of our grammar that includes semantics out- performed the version of the same grammar without se- mantics. The effect of the semantic information in that particular case was apprently to bias the trained grammar towards choosing a correct parse as most likely. However, we did not quantify this effect when we presented the ex- ample. This is the purpose of the experimental results shown in Table 9. Test B was used to test our current grammar, first with and then without semantic categories in the mnemonics. It follows from the fact that the semantics are not written into the grammar that the coverage figure is the same with and without semantics. Perhaps surprising, however, is the slight degree of improvement due to the semantics on the task of picking the most likely parse: only 2 percentage points. The more detailed parametriza- January 1991 91% April 1991 92% August 1991 94% December 1991 96% April 1992 96% Table 8: Periodic Results for Test Set A: Sentences With At Least 1 Correct Parse Number of Sentences 1105 Average Sentence Length 12 Range of Sentence Lengths 7-17 Correct Parse Present (In Both Cases) 95% Correct Parse Most Likely (With Semantics) 75% Correct Parse Most Likely (No Semantics) 73% Table 9: Test Subcorpus B With and Without Semantics tion with semantic categories, which has about 13,000 mnemonics achieved only a modest improvement in pars- ing accuracy over the parametrization without semantics, which has about 4,600 mnemonics. 6. Future Research Our future research divides naturally into two efforts. Our linguistic research will be directed toward first pars- ing sentences of any length with the 3000-word vocabu- lary, and then expanding the 3000-word vocabulary to an unlimited vocabulary. 
Our statistical research will focus on efforts to improve our probabilistic models along the lines of the new approach presented in [2].

References

1. Baker, J., Trainable grammars for speech recognition. In Speech Communication papers presented at the 97th Meeting of the Acoustical Society of America, MIT, Cambridge, MA, June 1979.

2. Black, E., Jelinek, F., Lafferty, J., Magerman, D., Mercer, R., and Roukos, S., Towards History-based Grammars: Using Richer Models for Probabilistic Parsing. Proceedings of Fifth DARPA Speech and Natural Language Workshop, Harriman, NY, February 1992.

3. Black, E., Abney, S., Flickenger, D., Gdaniec, C., Grishman, R., Harrison, P., Hindle, D., Ingria, R., Jelinek, F., Klavans, J., Liberman, M., Marcus, M., Roukos, S., Santorini, B., and Strzalkowski, T. A Procedure for Quantitatively Comparing the Syntactic Coverage of English Grammars. Proceedings of Fourth DARPA Speech and Natural Language Workshop, pp. 306-311, 1991.

4. Harrison, P., Abney, S., Black, E., Flickenger, D., Gdaniec, C., Grishman, R., Hindle, D., Ingria, R., Marcus, M., Santorini, B., and Strzalkowski, T. Evaluating Syntax Performance of Parser/Grammars of English. Proceedings of Natural Language Processing Systems Evaluation Workshop, Berkeley, California, 1991.

5. Hopcroft, J. E. and Ullman, Jeffrey D. Introduction to Automata Theory, Languages, and Computation, Reading, MA: Addison-Wesley, 1979.

6. Jelinek, F., Lafferty, J. D., and Mercer, R. L. Basic Methods of Probabilistic Context-Free Grammars. Computational Linguistics, to appear.
1992
24
MODELING NEGOTIATION SUBDIALOGUES 1 Lynn Lambert and Sandra Carberry Department of Computer and Information Sciences University of Delaware Newark, Delaware 19716, USA email : lambert~cis, udel. edu, carberry@cis, udel. edu Abstract This paper presents a plan-based model that han- dles negotiation subdialogues by inferring both the communicative actions that people pursue when speaking and the beliefs underlying these actions. We contend that recognizing the complex dis- course actions pursued in negotiation subdialogues (e.g., expressing doubt) requires both a multi- strength belief model and a process model that combines different knowledge sources in a unified framework. We show how our model identifies the structure of negotiation subdialogues, including recognizing expressions of doubt, implicit accep- tance of communicated propositions, and negotia- tion subdialogues embedded within other negotia- tion subdialogues. 1 Introduction Since negotiation is an integral part of multi-agent activity, a robust natural language un- derstanding system must be able to handle subdi- alogues in which participants negotiate what has been claimed in order to try to come to some agreement about those claims. To handle such dialogues, the system must be able to recognize when a dialogue participant has initiated a nego- tiation subdialogue and why the participant began the negotiation (i.e., what beliefs led the partici- pant to start the negotiation). This paper presents a plan-based model of task-oriented interactions that assimilates negotiation subdialogues by in- ferring both the communicative actions that peo- ple pursue when speaking and the beliefs under- lying these actions. We will argue that recogniz- ing the complex discourse actions pursued in ne- gotiation subdialogues (e.g., expressing doubt) re- quires both a multi-strength belief model and a processing strategy that combines different knowl- edge sources in a unified framework, and we will show how our model incorporates these and rec- ognizes the structure of negotiation subdialogues. 2 Previous Work Several researchers have built argument un- derstanding systems, but none of these has ad- dressed participants coming to an agreement or mutual belief about a particular situation, ei- ther because the arguments were only monologues 1 This work is being supported by the National Science Foundation under Grant No. IRI-9122026. The Govern- ment has certain rights in this material. (Cohen, 1987; Cohen and Young, 1991), or be- cause they assumed that dialogue participants do not change their minds (Flowers, McGuire and Birnbaum, 1982; Quilici, 1991). Others have ex- amined more cooperative dialogues. Clark and Schaefer (1989) contend that utterances must be grounded, or understood, by both parties, but they do not address conflicts in belief, only lack of un- derstanding. Walker (1991) has shown that evi- dence is often provided to ensure both understand- ing and believing an utterance, but she does not address recognizing lack of belief or lack of under- standing. Reichman (1981) outlines a model for informal debate, but does not provide a detailed computational mechanism for recognizing the role of each utterance in a debate. In previous work (Lambert and Carberry, 1991), we described a tripartite plan-based model of dialogue that recognizes and differentiates three different kinds of actions: domain, problem- solving, and discourse. Domain actions relate to performing tasks in a given domain. 
We are mod- eling cooperative dialogues in which one agent has a domain goal and is working with another helpful, more expert agent to determine what do- main actions to perform in order to accomplish this goal. Many researchers (Allen, 1979; Car- berry, 1987; Goodman and Litman, 1992; Pol- lack, 1990; Sidner, 1985) have shown that recog- nition of domain plans and goals gives a system the ability to address many difficult problems in understanding. Problem-solving actions relate to how the two dialogue participants are going about building a plan to achieve the planning agent's domain goal. Ramshaw, Litman, and Wilensky (Ramshaw, 1991; Litman and Allen, 1987; Wilen- sky, 1981) have noted the need for recognizing problem-solving actions. Discourse actions are the communicative actions that people perform in say- ing something, e.g., asking a question or express- ing doubt. Recognition of discourse actions pro- vides expectations for subsequent utterances, and explains the purpose of an utterance and how it should be interpreted. Our system's knowledge about how to per- form actions is contained in a library of discourse, problem-solving, and domain recipes (Pollack, 1990). Although domain recipes are not mutually known by the participants (Pollack, 1990), how to communicate and how to solve problems are corn- 193 Discourse Recipe-C3:{_agent1 informs _agent~ of_prop} Action: Inform(_agentl, _agent2, _prop) Recipe-type: Decomposition App Cond: believe(_agentl, _prop, [C:C]) believe(_agentl, believe(_agent2, _prop, [CN:S]), [0:C]) Body: Tell(_agent 1, _agent2, _prop) Address-Believability(_agent2, _agentl, _prop) Effects: believe(_agent2, want(_agentl, believe(_agent2, _prop, [C:C])), [C:C]) Goal: believe(_agent2, _prop, [C:C]) Discourse Recipe-C2: {_agent1 expresses doubt to _agent2 about _propI because _agent1 believes _prop~ to be true} Action: Express-Doubt(_agentl, _agent2, _propl, _prop2, _rule) Recipe-type: Decomposition App Cond: believe(_agentl, _prop2, [W:S]) believe(_agentl, believe(_agent2, _propl, [S:C]), [S:C]) believe(_agentl, ((_prop2 A _rule) ::~ -,_propl), [S:C]) believe(_agentl, _rule, [S:C]) in-focus(_propl)) Body: Convey- Uncertain- Belief(_ agent 1, _agent 2, _prop2) Address-Q-Acceptanee(_agent2, _agentl, _prop2) Effects: believe(_agent2, believe(_agentl, _propl, [SN:W2~]), [S:C]) believe(_agent2, want(_agentl, Resolve-Conflict(_agent2, _agentl, _propl, _prop2)), [S:C]) Goal: want(_agent2, Resolve-Conflict(_agent2, _agentl, _propl, _prop2)) Figure 1. Two Sample Discourse Recipes men skills that people use in a wide variety of contexts, so the system can assume that knowl- edge about discourse and problem-solving recipes is shared knowledge. Figure 1 contains two dis- course recipes. Our representation of a recipe in- cludes a header giving the name of the recipe and the action that it accomplishes, preconditions, ap- plicability conditions, constraints, a body, effects, and a goal. Constraints limit the allowable instan- tiation of variables in each of the components of a recipe (Litman and Allen, 1987). Applicability conditions (Carberry, 1987) represent conditions that must be satisfied in order for the recipe to be reasonable to apply in the given situation and, in the case of many of our discourse recipes, the applicability conditions capture beliefs that the di- alogue participants must hold. Especially in the case of discourse recipes, the goals and effects are likely to be different. 
This allows us to differentiate between illocutionary and perlocutionary effects and to capture the notion that one can, for example, perform an inform act without the hearer adopting the communicated proposition. 2

2 Consider, for example, someone saying "I informed you of X but you wouldn't believe me."

As actions are inferred by our process model, a structure of the discourse is built which is referred to as the Dialogue Model, or DM. In the DM, discourse, problem-solving, and domain actions are each modeled on a separate level. Within each of these levels, actions may contribute to other actions in the dialogue, and this is captured with specialization (Kautz and Allen, 1986), subaction, and enablement arcs. Thus, actions at each level form a tree structure in which each node represents an action that a participant is performing and the children of a node represent actions pursued in order to contribute to the parent action. By using a tree structure to model actions at each level and by allowing the tree structures to grow at the root as well as at the leaves, we are able to incrementally recognize discourse, problem-solving, and domain intentions, and can recognize the relationship among several utterances that are all part of the same higher-level discourse act even when that act cannot be recognized from the first utterance alone. Other advantages of our tripartite model are discussed in Lambert and Carberry (1991).

An action on one level in the DM may also contribute to an action on an immediately higher level. For example, discourse actions may be executed in order to obtain the information necessary for performing a problem-solving action, and problem-solving actions may be executed in order to construct a domain plan. We capture this with links between actions on adjacent levels of the DM.

Figure 2 gives a DM built by our prototype system, whose implementation is currently being expanded to include belief ascription and use of linguistic information. It shows that a question has been asked and answered, that this question/answer pair contributes to the higher-level discourse action of obtaining information about what course Dr. Smith is teaching, that this discourse action enables the problem-solving action of instantiating a parameter in a Learn-Material action, and that this problem-solving action contributes to the problem-solving action of building a plan in order to perform the domain action of taking a course.

[Figure 2. Dialogue Model for two utterances. The figure shows three tree-structured levels linked by enable and subaction arcs: a domain level with Take-Course(S1, _course); a problem-solving level with Build-Plan, Instantiate-Vars and Instantiate-Single-Var actions for Learn-Material(S1, _course, Dr. Smith); and a discourse level in which Obtain-Info-Ref dominates S1's Request/Surface-WH-Question and S2's Answer-Ref/Inform/Tell/Surface-Inform of Teaches(Dr. Smith, Arch), with the Tell action marked by an asterisk as the current focus of attention.]

The work described in this paper uses our tripartite model, but addresses the recognition of discourse actions and their use in the modeling of negotiation subdialogues.

3 Discourse Actions and Implicit Acceptance

One of the most important aspects of assimilating dialogue is the recognition of discourse actions and the role that an utterance plays with respect to the rest of the dialogue. For example, in (3), if S1 believes that each course has a single instructor, then S1 is expressing doubt at the proposition conveyed in (2). But in another context, (3) might simply be asking for verification.

(1) S1: What is Dr. Smith teaching?
(2) S2: Dr. Smith is teaching Architecture.
(3) S1: Isn't Dr. Brown teaching Architecture?

Unless a natural language system is able to identify the role that an utterance is intended to play in a dialogue, the system will not be able to generate cooperative responses which address the participants' goals.

In addition to recognizing discourse actions, it is also necessary for a cooperative system to recognize a user's changing beliefs as the dialogue progresses. Allen's representation of an Inform speech act (Allen, 1979) assumed that a listener adopted the communicated proposition. Clearly, listeners do not adopt everything they are told (e.g., (3) indicates that S1 does not immediately accept that Dr. Smith is teaching Architecture). Perrault's persistence model of belief (Perrault, 1990) assumed that a listener adopted the communicated proposition unless the listener had conflicting beliefs. Since Perrault's model assumes that people's beliefs persist, it cannot account for S1 eventually accepting the proposition that Dr. Smith is teaching Architecture. We show in Section 6 how our model overcomes this limitation.

Our investigation of naturally occurring dialogues indicates that listeners are not passive participants, but instead assimilate each utterance into a dialogue in a multi-step acceptance phase. For statements, 3 a listener first attempts to understand the utterance, because if the utterance is not understood, then nothing else about it can be determined. Second, the listener determines if the utterance is consistent with the listener's beliefs; and finally, the listener determines the appropriateness of the utterance to the current context. Since we are assuming that people are engaged in a cooperative dialogue, a listener must indicate when the listener does not understand, believe, or consider relevant a particular utterance, addressing understandability first, then believability, then relevance. We model this acceptance process by including acceptance actions in the body of many of our discourse recipes. For example, the actions in the body of an Inform recipe (see Figure 1) are: 1) the speaker (_agent1) tells the listener (_agent2)

3 Questions must also be accepted and assimilated into a dialogue, but we are concentrating on statements here.
195 the proposition that the speaker wants the listener to believe (_prop); and 2) the listener and speaker address believability by discussing whatever is nec- essary in order for the listener and speaker to come to an agreement about what the speaker said. 4 This second action, and the subactions executed as part of performing it, account for subdialogues which address the believability of the proposition communicated in the Inform action. Similar ac- ceptance actions appear in other discourse recipes. The Tell action has a body containing a Surface- Inform action and an Address-Understanding ac- tion; the latter enables both participants to ensure that the utterance has been understood. The combination of the inclusion of accep- tance actions in our discourse recipes and the or- dered manner in which people address acceptance allows our model to recognize the implicit accep- tance of discourse actions. For example, Figure 2 presents the DM derived from utterances (1) and (2), with the current focus of attention on the dis- course level, the Tell action, marked with an aster- isk. In attempting to assimilate (3) into this DM, the system first tries to interpret (3) as address- ing the understanding of (2) (i.e., as part of the Tell action which is the current focus of attention in Figure 2). Since a satisfactory interpretation is not found, the system next tries to relate (3) to the Inform action in Figure 2, trying to interpret (3) as addressing the believability of (2). The system finds that the best interpretation of (3) is that of expressing doubt at (2), thus confirming the hy- pothesis that (3) is addressing the believability of (2). This recognition of (3) as contributing to the Inform action in Figure 2 indicates that S1 has implicitly indicated understanding by passing up the opportunity to address understanding in the Tell action that appears in the DM and instead moving to a relevant higher-level discourse action, thus conveying that the Tell action has been suc- cessful. 4 Recognizing Beliefs In the dialogue in the preceding section, in order for $1 to use the proposition communicated in (3) to express doubt at the proposition conveyed in (2), $1 must believe (a) that Dr. Brown teaches Architecture; (b) that $2 believes that Dr. Smith is teaching Architecture; and (c) that Dr. Brown teaching Architecture is an indication that Dr. Smith does not teach Architecture. We capture these beliefs in the applicability condi- tions for an Express-Doubt discourse act (see Fig- ure 1). In order for the system to recognize (3) 4This is where our model differs from Allen's and Per- rault's; we allow the listener to adopt, reject, or negoti- ate the speaker's claims, which might result in the listener eventually adopting the speakers claims, the listener chang- ing the mind of the speaker, or both agreeing to disagree. a~s an expression of doubt, it nmst come to be- lieve that these applicability conditions are satis- fied. The system's evidence that S1 believes (a) is provided by Sl's utterance, (3). But (3) does not state that Dr. Brown teaches Architecture; instead, Sl uses a negative yes-no question to ask whether or not Dr. Brown teaches Architecture. The surface form of this utterance indicates that S1 thinks that Dr. Brown teaches Architecture but is not sure of it. Thus, from the surface form of utterance (3), a listener can attribute to Sl an uncertain belief in the proposition that Dr. Brown teaches Architecture. 
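The link between surface form and conveyed belief strength can be tabulated. The toy mapping below covers only the forms discussed here (a declarative Surface-Inform conveying certain belief, a Surface-Neg-YN-Question such as (3) conveying strong but uncertain belief, and a Surface-WH-Question conveying no belief about the queried proposition); the strength symbols follow the C/S/W/0 scale described in the next paragraphs. It is an illustration, not the system's rule set.

    # Belief strengths: C (certain), S (strong but uncertain), W (weak), 0 (none)
    SURFACE_BELIEF = {
        'Surface-Inform':          'C',   # "Dr. Smith is teaching Architecture."
        'Surface-Neg-YN-Question': 'S',   # "Isn't Dr. Brown teaching Architecture?"  (3)
        'Surface-WH-Question':     '0',   # no belief conveyed about the queried proposition
    }

    def conveyed_strength(surface_form):
        # default: attribute no belief rather than guess at one
        return SURFACE_BELIEF.get(surface_form, '0')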
This recognition of uncertain beliefs is an important part of recognizing complex discourse actions such as expressing doubt. If the system were limited to recognizing only lack of belief and belief, then yes-no questions would have to be in- terpreted as conveying lack of belief about the queried proposition, since a question in a cooper- ative consultation setting would not be felicitous if the speaker already knew the answer. Thus it would be impossible to attribute (a) to S1 from a question such as (3). And without this belief at- tribution, it would not be possible to recognize expressions of doubt. Furthermore, the system must be able to differentiate between expressions of doubt and objections; since we are assuming that people are engaged in a cooperative dialogue and communicate beliefs that they intend to be recognized, if S1 were certain of both (a) and (c), then S1 would object to (2), not simply express doubt at it. In summary, the surface form of ut- terances is one way that speakers convey belief. But these surface forms convey more than just be- lief and disbelief; they convey multiple strengths of belief, the recognition of which is necessary for identifying whether an agent holds the requisite beliefs for some discourse actions. We maintain a belief model for each partic- ipant which captures these multiple strengths of belief. We contend that at least three strengths of belief must be represented: certain belief (a be- lief strength of C); strong but uncertain belief, as in (3) above (a belief strength of S); and a weak belief, as in I think that Dr. C might be an edu- cation instructor (a belief strength of W). There- fore, our model maintains three degrees of belief, three degrees of disbelief (indicated by attaching a subscript of N, such as SN to represent strong disbelief and WN to represent weak disbelief), and one degree indicating no belief about a proposition (a belief strength of 0). 5 Our belief model uses belief intervals to specify the range of strengths 5Others (Walker, 1991; Galliers, 1991) have also argued for multiple strengths of belief, basing the strength of belief on the amount and kind of evidence available for that be- lief. We have not investigated how much evidence is needed for an agent to have a particular amount of confidence in a belief; our work has concentrated on recognizing how the strength of belief is communicated in a discourse and the impact that the different belief strengths have on the recog- nition of discourse acts. 196 within which an agent's beliefs are thought to fall, and our discourse recipes use belief intervals to specify the range of strengths that an agent's be- liefs may assume. Intervals such as [bi:bj] spec- ify a strength of belief within bi and bj, inclu- sive. For example, the goal of the Inform recipe in Figure 1, (believe(..agent2, _prop, [C:C])), is that _agentl be certain that _prop is true; on the other hand, believe(_agentl, _prop, [W:C]), means that _agent I must have some belief in _prop. In order to recognize other beliefs, such as (b) and (c), it is necessary to use more informa- tion than just a speaker's utterances. For exam- ple, $2 might attribute (c) to $1 because $2 be- lieves that most people think that only one pro- fessor teaches each course. Our system incorpo- rates these commonly held beliefs by maintaining a model of a stereotypical user whose beliefs may be attributed to the user during the conversation as appropriate. 
People also communicate their be- liefs by their acceptance (explicit and implicit) and non-acceptance of other people's actions. Thus, explicit or implicit acceptance of discourse actions provides another mechanism for updating the be- lief model: when an action is recognized as suc- cessful, we update our model of the user's beliefs with the effects and goals of the completed ac- tion. For example, in determining whether (3) is expressing doubt at (2), thereby implicitly indi- cating that (2) has been understood and that the Tell action has therefore been successful, the sys- tem tentatively hypothesizes that the effects and goals of the Tell action hold, the goal being that $1 believes that $2 believes that Dr. Smith is teaching Architecture (belief (b) above). If the system determines that tiffs Express-Doubt infer- ence is the most coherent interpretation of (3), it attributes the hypothesized beliefs to S1. So, our model captures many of the ways in which people infer beliefs: 1) from the surface form of utter- ances; 2) from stereotype models; and 3) from ac- ceptance (explicit or implicit) or non-acceptance of previous actions. 5 Combining Knowledge Sources Grosz and Sidner (1986) contend that mod- eling discourse requires integrating different kinds of knowledge in a unified framework in order to constrain the possible role that an utterance might be serving. We use three kinds of knowledge, 1) contextual information provided by previous utterances; 2) world knowledge; and 3) the lin- guistic information contained in each utterance. Contextual knowledge in our model is captured by the DM and the current focus of attention within it. The system's world knowledge contains facts about the world, the system's beliefs (including its beliefs about a stereotypical user's beliefs), and knowledge about how to go about performing dis- course, problem-solving, and domain actions. The linguistic knowledge that we exploit includes the surface form of the utterance, which conveys be- liefs and the strength of belief, as discussed in the preceding section, and linguistic clue words. Cer- tain words often suggest what type of discourse action the speaker might be pursuing (Litman and Allen, 1987; Hinkelman, 1989). For example, the linguistic clue please suggests a request discourse act (Hinkelman, 1989) while the clue word but sug- gests a non-acceptance discourse act. Our model takes these linguistic clues into consideration in identifying the discourse acts performed by an ut- terance. Our investigation of naturally occurring di- alogues indicates that listeners use a combination of information to determine what a speaker is try- ing to do in saying something. For example, S2's world knowledge of commonly held beliefs enabled $2 to determine that $1 probably believes (c), and therefore infer that $1 was expressing doubt at (2). However, $1 might have said (4) instead of (3). (4) But didn't Dr. Smith win a teaching award? It is not likely that $2 would think that people typ- ically believe that Dr. Smith winning a teaching award implies that she is not teaching Architec- ture. However, $2 would probably still recognize (4) as an expression of doubt because the linguis- tic clue but suggests that (4) may be some sort of non-acceptance action, there is nothing to suggest that S1 does not believe that Dr. Smith winning a teaching award implies that she is not teaching Ar- chitecture, and no other interpretation seems more coherent. 
Since linguistic knowledge is present, less evidence is needed from world knowledge to recognize the discourse actions being performed (Grosz and Sidner, 1986). In our model, if a new utterance contributes to a discourse action already in the DM, then there must be an inference path from the utterance that links the utterance up to the current tree structure on the discourse level. This inference path will contain an action that determines the relationship of the utterance to the DM by introducing new parameters for which there are many possible in- stantiations, but which must be instantiated based on values from the DM in order for the path to ter- minate with an action already in the DM. We will refer to such actions as e-actions since we contend that there must be evidence to support the infer- ence of these actions. By substituting values from the DM that are not present in the semantic repre- sentation of the utterance for the new parameters in e-actions, we are hypothesizing a relationship between the new utterance and the existing dis- course level of the DM. Express-Doubt is an example of an e-action (Figure 1). From the speaker's conveying uncer- tain belief in the proposition _prop2, plan chain- ing suggests that the speaker might be expressing doubt at some proposition _propl, and from this Express-Doubt action, further plan chaining may suggest a sequence of actions terminating at an Inform action already in the DM. The ability of _propl to unify with the proposition that was con- veyed by the Inform action (and _rule to unify 197 with a rule in the system's world knowledge) is not sufficient to justify inferring that the current utterance contributes to an Express-Doubt action which contributes to an Inform action; more evi- dence is needed. This is further discussed in Lam- bert and Carberry (1992). Thus we need evidence for including e- actions on an inference path. The required evi- dence for e-actions may be provided by linguistic knowledge that suggests certain discourse actions (e.g., the evidence that (4) is expressing doubt) or may be provided by world knowledge that in- dicates that the applicability conditions for a par- ticular action hold (e.g., the evidence that (3) is expressing doubt). Our model combines these different knowl- edge sources in our plan recognition algorithm. From the semantic representation of an utterance, higher level actions are inferred using plan infer- ence rules (Allen, 1979). If the applicability condi- tions for an inferred action are not plausible, this action is rejected. If the applicability conditions are plausible, then the beliefs contained in them are temporarily ascribed to the user (if an infer- ence line containing this action is later adopted as the correct interpretation, these applicability con- ditions are added to the belief model of the user). The focus of attention and focusing heuristics (dis- cussed in Lambert and Carberry (1991)) order these sequences of inferred actions, or inference lines, in terms of coherence. For those inference lines with an e-action, linguistic clues are checked to determine if the action is suggested by linguistic knowledge, and world knowledge is checked to de- termine if there is evidence that the applicability conditions for the e-action hold. If there is world and linguistic evidence for the e-action of one or more inference lines, the inference line that is clos- est to the focus of attention (i.e., the most contex- tually coherent) is chosen. 
Otherwise, if there is world or linguistic evidence for the e-action of one or more inference lines, again the inference line that is closest to the focus of attention is chosen. Otherwise, there is no evidence for the e-action in any inference line, so the inference line that is clos- est to the current focus of attention and contains no e-action is chosen. 6 Example The following example, an expansion of ut- terances (1), (2), and (3) from Section 3, illustrates how our model handles 1) implicit and explicit ac- ceptance; 2) negotiation subdialogues embedded within other negotiation subdialogues; 3) expres- sions of doubt at both immediately preceding and earlier utterances; and 4) multiple expressions of doubt at the same proposition. We will concen- trate on how Sl's utterances are understood and assimilated into the DM. (5) $1: What is Dr. Smith teaching? (6) S2: Dr. Smith is teaching Architecture. (7) SI: Isn't Dr. Brown teaching Architecture? (8) $2: No. (9) Dr. Brown is on sabbatical. (10) SI: But didn't 1see him on campus yesterday? (11) $2: Yes. (12) He was giving a University colloquium. (13) SI: OK. (14) But isn't Dr. Smith a theory person? The inferencing for utterances similar to (5) and (6) is discussed in depth in Lambert and Car- berry (1992), and the resultant DM is given in Figure 2. No clarification or justification of the Request action or of the content of the question has been addressed by either S1 or $2, and $2 has pro- vided a relevant answer, so both parties have im- plicitly indicated (Clark and Schaefer, 1989) that they think that S1 has made a reasonable and un- derstandable request in asking the question in (5). The surface form of (7) suggests that S1 thinks that Dr. Brown is teaching Architecture, but isn't certain of it. This belief is entered into the system's model of Sl's beliefs. This sur- face question is one way to Convey-Uncertain- Belief. As discussed in Section 3, the most coher- ent interpretation of (7) based on focusing heuris- tics, addressing the understandability of (6), is rejected (because there is not evidence to sup- port this inference), so the system tries to relate (7) to the Inform action in (6); that is, the sys- tem tries to interpret (7) as addressing the believ- ability of (6). Plan chaining determines that the Convey-Uncertain-Belief action could be part of an Express-Doubt action which could be part of an Address-Unacceptance action which could be an action in an Address-Believability discourse ac- tion which could in turn be an action in the In- form action of (6). Express-Doubt is an e-action because the action header introduces new argu- ments that have not appeared previously on the inference path (see Figure 1). Since there is evi- dence from world knowledge that the applicability conditions hold for interpreting (7) as an expres- sion of doubt and since there is no other evidence for any other e-action, the system infers that this is the correct interpretation and stops. Thus, (7) is interpreted as an Express-Doubt action. S2's re- sponse in (8) and (9) indicates that $2 is trying to resolve $1 and S2's conflicting beliefs. The struc- ture that the DM has built after these utterances is contained in Figure 3, 6 above the numbers (5) - (9). The Surface-Neg-YN-Question in utterance (10) is one way to Convey-Uneerlain-Belief. 
The linguistic clue but suggests that S1 is executing a non-acceptance discourse action; this non-acceptance action might be addressing either (9) or (6).6 6For space reasons, only inferencing of discourse actions will be discussed here, and only action names on the discourse level are shown; the problem-solving and domain levels are as shown in Figure 2. [Figure 3 (Discourse Level of DM for Dialogue in Section 6): the discourse-level tree over utterances (5)-(14), with action nodes such as Resolve-Conflict, Address-Unacceptance, Express-Doubt, Surface-Neg-YN-Question and YN-Question; the embedded negotiation subdialogue (10)-(13) is marked by dashed lines.] Focusing heuristics suggest that the most likely candidate is the Inform act attempted in (9), and plan chaining suggests that the Convey-Uncertain-Belief could be part of an Express-Doubt action which in turn could be part of an Address-Unacceptance action which could be part of an Address-Believability action which could be part of the Inform action in (9). Again, there is evidence that the applicability conditions for the e-action (the Express-Doubt action) hold: world knowledge indicates that a typical user believes that professors who are on sabbatical are not on campus. Thus, there is both linguistic and world knowledge giving evidence for the Express-Doubt action (and no other e-action has both linguistic and world knowledge evidence), so (10) is interpreted as expressing doubt at (9). In (11) and (12), S2 clears up the confusion that S1 has expressed in (10), by telling S1 that the rule that people on sabbatical are not on campus does not hold in this case. In (13), S1 indicates explicit acceptance of the previously communicated proposition, so the system is able to determine that S1 has accepted S2's response in (12). This additional negotiation, utterances (10)-(13), illustrates our model's handling of negotiation subdialogues embedded within other negotiation subdialogues. The subtree contained within the dashed lines in Figure 3 shows the structure of this embedded negotiation subdialogue. The linguistic clue but in (14) then again suggests non-acceptance. Since (12) has been explicitly accepted, (14) could be expressing non-acceptance of the information conveyed in either (9) or (6). Focusing heuristics suggest that (14) is most likely expressing doubt at (9). World knowledge, however, provides no evidence that the applicability conditions hold for (14) expressing doubt at (9). Thus, there is evidence from linguistic knowledge for this inference, but not from world knowledge. The system's stereotype model does indicate, however, that it is typically believed that faculty only teach courses in their field and that Architecture and Theory are different fields. So in this case, the system's world knowledge provides evidence that Dr. Smith being a theory person is an indication that Dr. Smith does not teach Architecture. Therefore, the system interprets (14) as again expressing doubt at (6), because there is evidence for this inference from both world and linguistic knowledge. The system therefore infers that S1 has implicitly accepted the statement in (9), that Dr. Smith is on sabbatical. Thus, the system is able to recognize and assimilate a second expression of doubt at the proposition conveyed in (6). The DM for the discourse level of the entire dialogue is given in Figure 3.
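Before turning to the conclusion, the evidence-based selection strategy of Section 5 can be summarized schematically. The following Prolog sketch is only illustrative: the predicates has_e_action/2 and evidence/2, and the assumption that the candidate inference lines arrive already sorted by the focusing heuristics, are ours and do not reflect the actual code of the system.

    % Lines is the list of candidate inference lines, ordered by the focusing
    % heuristics with the most contextually coherent line first.
    select_interpretation(Lines, Line) :-            % both kinds of evidence
        member(Line, Lines),
        has_e_action(Line, EAction),
        evidence(EAction, world),
        evidence(EAction, linguistic),
        !.
    select_interpretation(Lines, Line) :-            % world or linguistic evidence
        member(Line, Lines),
        has_e_action(Line, EAction),
        ( evidence(EAction, world) ; evidence(EAction, linguistic) ),
        !.
    select_interpretation(Lines, Line) :-            % no evidence for any e-action:
        member(Line, Lines),                         % prefer a line without one
        \+ has_e_action(Line, _),
        !.

Because member/2 enumerates Lines in order, the first solution found by each clause is the inference line closest to the current focus of attention.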
199 7 Conclusion We have presented a plan-based model that handles cooperative negotiation subdialogues by inferring both the communicative actions that people pursue when speaking and the beliefs un- derlying these actions. Beliefs, and the strength of those beliefs, are recognized from the surface form of utterances and from the explicit and implicit ac- ceptance of previous utterances. Our model com- bines linguistic, contextual, and world knowledge in a unified framework that enables recognition not only of when an agent is negotiating a con- flict between the agent's beliefs and the preceding dialogue but also which part of the dialogue the agent's beliefs conflict with. Since negotiation is an integral part of multi-agent activity, our model addresses an important aspect of cooperative in- teraction and communication. References Allen, James F. (1979). A Plan-Based Approach to Speech Act Recognition. PhD thesis, Uni- versity of Toronto, Toronto, Ontario, Canada. Carberry, Sandra (1987). Pragmatic Modeling: Toward a Robust Natural Language Interface. Computational Intelligence, 3, 117-136. Clark, tlerbert and Schaefer, Edward (1989). Con- tributing to Discourse. Cognitive Science, 259-294. Cohen, Robin (1987). Analyzing the Structure of Argumentative Discourse. Computational Linguistics, 13(1-2), 11-24. Cohen, Robin and Young, Mark A. (1991). Deter- mining Intended Evidence Relations in Natu- ral Language Arguments. Computational In- telligence, 7, 110-118. Flowers, Margot, McGuire, Rod, and Birnbaum, Lawrence (1982). Adversary Arguments and the Logic of Personal Attack. In W. Lehn- eft and M. Ringle (Eds.), Strategies for Natu- ral Language Processing (pp. 275-294). Hills- dage, New Jersey: Lawrence Erlbaum Assoc. Galliers, Julia R. (1991). Belief Revision and a Theory of Communication. Technical Report 193, University of Cambridge, Cambridge, England. Goodman, Bradley A. and Litman, Diane J. (1992). On the Interaction between Plan Recognition and Intelligent Interfaces. User Modeling and User-Adapted Interaction, 2, 83-115. Grosz, Barbara and Sidner, Candace (1986). At- tention, Intention, and the Structure of Dis- course. Computational Linguistics, le(3), 175-204. Hinkelman, Elizabeth (1989). Two Constraints on Speech Act Ambiguity. In Proceedings of the 27th Annual Meeting of the ACL (pp. 212- 219), Vancouver, Canada. Kautz, Henry and Allen, James (1986). General- ized Plan Recognition. In Proceedings of the Fifth National Conference on Artificial Intel- li.gence (pp. 32-37), Philadelphia, Pennsylva- nia. Lambert, Lynn and Carberry, Sandra (1991). A Tripartite Plan-based Model of Dialogue. In Proceedings of the 29th Annual Meeting of the ACL (pp. 47-54), Berkeley, CA. Lambert, Lynn and Carberry, Sandra (1992). Us- ing Linguistic, World, and Contextual Knowl- edge in a Plan Recognition Model of Dia- logue. In Proceedings of COLING-92, Nantes, France. To appear. Litman, Diane and Allen, James (1987). A Plan Recognition Model for Subdialogues in Con- versation. Cognitive Science, 11, 163-200. Perrault, Raymond (1990). An Application of De- fault Logic to Speech Act Theory. In P. Co- hen, J. Morgan, and M. Pollack (Eds.), Inten- tions in Communication (pp. 161-185). Cam- bridge, Massachusetts: MIT Press. Pollack, Martha (1990). Plans as Complex Men- tal Attitudes. In P. R. Cohen, J. Morgan, and M. E. Pollack (Eds.), Intentions in Commu- nication (pp 77-104). MIT Press. Quilici, Alexander (1991). 
The Correction Ma- chine: A Computer Model of Recognizing and Producing Belief Justifications in Argumenta- tive Dialogs. PhD thesis, Department of Com- puter Science, University of California at Los Angeles, Los Angeles, California. Ramshaw, Lance A. (1991). A Three-Level Model for Plan Exploration. In Proceedings of the 29th Annual Meeting of the ACL (pp. 36-46), Berkeley, California. Reichman, Rachel (1981). Modeling Informal De- bates. In Proceedings of the 1981 Interna- tional Joint Conference on Artificial Intelli- gence (pp. 19-24), Vancouver, B.C. IJCAI. Sidner, Candace L. (1985). Plan Parsing for In- tended Response Recognition in Discourse. Computational Intelligence, 1, 1-10. Walker, Marilyn (1991). Redundancy in Collabo- rative Dialogue. Presented at The AAAI Fall Symposium: Discourse Structure in Natural Language Understanding and Generation (pp. 124-129), Asilomar, CA. Wilensky, Robert (1981). Meta-Planning: Rep- resenting and Using Knowledge About Plan- ning in Problem Solving and Natural Lan- guage Understanding. Cognitive Science, 5, 197-233. 200
HANDLING LINEAR PRECEDENCE CONSTRAINTS BY UNIFICATION Judith Engelkamp, Gregor Erbach and Hans Uszkoreit Universitfit des Saarlandes, Computational Linguistics, and Deutsches Forschungszentrum fiir Kiinstliche lntelligenz D-6600 Saarbriicken 11, Germany [email protected] ABSTRACT Linear precedence (LP) rules are widely used for stating word order principles. They have been adopted as constraints by HPSG but no encoding in the formalism has been provided. Since they only order siblings, they are not quite adequate, at least not for German. We propose a notion of LP constraints that applies to linguistically motivated branching domains such as head domains. We show a type-based encoding in an HPSG-style formalism that supports processing. The encoding can be achieved by a compilation step. INTRODUCTION Most contemporary grammar models employed in computational linguistics separate statements about dominance from those that determine linear precedence. The approaches for encoding linear precedence (LP) statements differ along several dimensions. Depending on the underlying grammatical theory, different criteria are employed in formulating ordering statements. Ordering constraints may be expressed by referring to the category, grammatical function, discourse r61e, and many other syntactic, semantic, morphological or phonological features. Depending on the grammar formalism, different languages are used for stating the constraints on permissible linearizations. LP rules, first proposed by Gazdar and Pullum (1982) for GPSG, are used, in different guises, by several contemporary grammar formalisms. In Functional Unification Grammar (Kay 1985) and implemented versions of Lexical Functional Grammar, pattern languages with the power of regular expressions have been utilized. Depending on the grammar model, LP statements apply within different ordering domains. In most frameworks, such as GPSG and HPSG, the ordering domains are local trees. Initial trees constitute the ordering domain in ID/LP TAGS (Joshi 1987). In current LFG (Kaplan & Zaenen 1988), functional precedence rules apply to functional domains. Reape Research for this paper was mainly carried out in the project LILOG supported by IBM Germany. Some of the research was performed in the project DISCO which is funded by the German Federal Ministry for Research and Technology under Grant-No.: ITW 9002. We wish to thank our colleagues in SaarbriJcken, three anonymous referees and especially Mark Hepple for their valuable comments and suggestions. (1989) constructs word order domains by means of a special union operation on embedded tree domains. It remains an open question which choices along these dimensions will turn out to be most adequate for the description of word order in natural language. In this paper we do not attempt to resolve the linguistic issue of the most adequate universal treatment of word order. However we will present a method for integrating word order constraints in a typed feature unification formalism without adding new formal devices. Although some proposals for the interaction between feature unification and LP constraints have been published (e.g. Seiffert 1991), no encoding has yet been shown that integrates LP constraints in the linguistic type system of a typed feature unification formalism. Linguistic processing with a head-driven phrase structure grammar (HPSG) containing LP constraints has not yet been described in the literature. 
Since no implemented NL system has been demonstrated so far that handles partially free word order of German and many other languages in a satisfactory way, we have made an attempt to utilize the formal apparatus of HPSG for a new approach to processing with LP constraints. However, our method is not bound to the formalism of HPSG. In this paper we will demonstrate how LP constraints can be incorporated into the linguistic type system of HPSG through the use of parametrized types. Neither additional operations nor any special provisions for linear precedence in the processing algorithm are required. LP constraints are applied through regular unification whenever the head combines with a complement or adjunct. Although we use certain LP-relevant features in our examples, our aproach does not hinge on the selection of specific linguistic criteria for constraining linear order. Since there is no conclusive evidence to the contrary, we assume the simplest constraint language for formulating LP statements, i.e., binary LP constraints. For computational purposes such constraints are compiled into the type definitions for grammatical categories. With respect to the ordering domain, our LP constraints differ from the LP constraints commonly assumed in HPSG (Pollard & Sag 1987) in that they 201 apply to nonsibling constituents in head domains. While LP constraints control the order of nodes that are not siblings, information is accumulated in trees in such a way that it is always possible to detect a violation of an LP constraint locally by checking sibling nodes. This modification is necessary for the proper treatment of German word order. It is also needed by all grammar models that are on the one hand confined to binary branching structures such as nearly all versions of categorial grammar but that would, on the other hand, benefit from a notion of LP constraints. Our approach has been tested with small sets of LP constraints. The grammar was written and run in STUF, the typed unification formalism used in the project LILOG. LINGUISTIC MOTIVATION This section presents the linguistic motivation for our approach. LP statements in GPSG (Gazdar et al. 1985) constrain the possibility of linearizing immediate dominance (ID) rules. By taking the right- hand sides of ID rules as their domain, they allow only the ordering of sibling constituents. Consequently, grammars must be designed in such a way that all constituents which are to be ordered by LP constraints must be dominated by one node in the tree, so that "flat" phrase structures result, as illustrated in figure 1. VmaX sollte V max should NP[nom] ADV NP[dat] NP[acc] V 0 der Kurier nachher einem Spion den Brief zustecken the courier later a spy the letter slip The courier was later supposed to slip a spy the letter. Figure 1 Uszkoreit (1986) argues that such flat structures are not well suited for the description of languages such as German and Dutch. The main reason 1 is so- called complex fronting, i.e., the fronting of a non- finite verb together with some of its complements and adjuncts as it is shown in (1). Since it is a well established fact that only one constituent can be fronted, the flat structure can account for the German examples in (1), but not for the ones in (2), (1) sollte der Kurier nachher einem Spion den Brief zustecken zustecken sollte der Kurier nachher einem Spion den Brief den Brief sollte der Kurier nachher einem Spion zustecken 1Further reasons are discussed in Uszkoreit (1991b). 
einem Spion sollte der Kurier nachher den Brief zustecken nachher sollte der Kurier einem Spion den Brief zustecken der Kurier sollte nachher einem Spion den Brief zustecken (2) den Brief zustecken sollte der Kurier nachher einem Spion einem Spion den Brief zustecken sollte der Kurier nachher nachher einem Spion den Brief zustecken sollte der Kurier In the hierarchical tree structure in figure 2, the boxed constituents can be fronted, accounting for the examples in (1) and (2). [Figure 2: hierarchical (binary-branching) tree structure for the Vmax, with boxed constituents such as der Kurier, nachher and den Brief zustecken that can be fronted.] But with this tree structure, LP constraints can no longer be enforced over siblings. The new domain for linear order is a head domain, defined as follows: A head domain consists of the lexical head of a phrase, and its complements and adjuncts. LP constraints must be respected within a head domain. An LP-constraint is an ordered pair <A,B> of category descriptions, such that whenever a node α subsumed by A and a node β subsumed by B occur within the domain of an LP-rule (in the case of GPSG a local tree, in our case a head domain), α precedes β. An LP constraint <A,B> is conventionally written as A < B. It follows from the definition that B can never precede A in an LP domain. In the next section, we will show how this property is exploited in our encoding of LP constraints. ENCODING OF LP CONSTRAINTS From a formal point of view, we want to encode LP constraints in such a way that • violation of an LP constraint results in unification failure, and • LP constraints, which operate on head domains, can be enforced in local trees by checking sibling nodes. The last condition can be ensured if every node in a projection carries information about which constituents are contained in its head domain. An LP-constraint A < B implies that it can never be the case that B precedes A. We make use of this fact by the following additions to the grammar: • Every category A carries the information that B must not occur to its left. • Every category B carries the information that A must not occur to its right. This duplication of encoding is necessary because only the complements/adjuncts check whether the projection with which they are combined contains something that is incompatible with the LP constraints. A projection contains only information about which constituents are contained in its head domain, but no restrictions on its left and right context.2 In the following example, we assume the LP-rules A<B and B<C. The lexical head of the tree is X0, and the projections are X and Xmax. The complements are A, B and C. Each projection contains information about the constituents contained in it, and each complement contains information about what must not occur to its left and right. A complement is only combined with a projection if the projection does not contain any category that the complement prohibits on its right or left, depending on which side the projection is added. [Figure 3: the head X0 combines successively with C, B and A; the projections are annotated with the sets {C}, {B, C} and {A, B, C} of constituents contained in their head domains, and the complements with their context restrictions: A [left: ¬B], B [left: ¬C, right: ¬A], C [right: ¬B].] Having now roughly sketched our approach, we will turn to the questions of how a violation of LP constraints results in unification failure, how the 2Alternatively, the projections of the head could as well accumulate the ordering restrictions while the arguments and adjuncts only carry information about their own LP-relevant features.
The choice between the alternatives has no linguistic implications since it only affects the grammar compiled for processing and not the one written by the linguist. information associated with the projections is built up, and what to do if LP constraints operate on feature structures rather than on atomic categories. VIOLATION OF LP-CONSTRAINTS AS UNIFICATION FAILURE As a conceptual starting point, we take a number of LP constraints. For the expository purposes of this paper, we oversimplifiy and assume just the following four LP constraints: nora < Oat (nominative case precedes dative case) nora < ace (nominative case precedes accusative case) Oat < ace (dative case precedes accusative case) 3to < nonpro (pronominal NPs precede non-pronominal NPs) Figure 4 Note that nora, Oat, ace, pro and nonpro are not syntactic categories, but rather values of syntactic features. A constituent, for example the pronoun ihn (him) may be both pronominal and in the accusative case. For each of the above values, we introduce an extra boolean feature, as illustrated in figure 5. NOM bool 1 DAT bool ACC boot PRO boot NON-PRO boo Figure 5 Arguments encode in their feature structures what must not occur to their left and right sides. The dative NP einem Spion (a spy), for example, must not have any accusative constituent to its left, and no nominative or pronominal constituent to its right, as encoded in the following feature structure. The feature structures that constrain the left and right contexts of arguments only use '-' as a value for the LP-relevant features. FLE [ACC-] ] NOM - Figure 6: Feature $mJcture for einem Spion Lexical heads, and projections of the head contain a feature LP-STORE, which carries information about the LP-relevant information occuring within their head domain (figure 7). ]1 |DAT - LP-STORE |ACC - |PRO - t.NON-PRO - Figure 7: empty LP-STORE 203 In our example, where the verbal lexical head is not affected by any LP constraints, the LP-STORE contains the information that no LP-relevant features are present. For a projection like einen Brief zusteckt (a letter[acc] slips), we get the following LP-STORE. [NOM- ÷1 |DAT - LP-STORE/ACC + [PRO - L.NON-PRO Figure 8: LP-STORE of einen Briefzusteckt The NP einem Spion (figure 6) can be combined with the projection einen Brief zusteckt (figure 8) to form the projection einem Spion einen Brief zusteckt (a spy[dat] a letter[acc] slips) because the RIGHT feature of einera Spion and the LP-STORE of einen Brief zusteckt do not contain incompatible information, i.e., they can be unified. This is how violations of LP constraints are checked by unification. The projection einem Spion einen Brief zusteckt has the following LP-STORE. FNOM- 1 |DAT + LP-STORE |ACC ÷ /PRO - LNON-PRO + Figure 9: LP-STORE of einem Spion einen Brief zusteckt The constituent ihn zusteckt (figure 10) could not be combined with the non-pronominal NP einem Spion (figure 6). [NOM- ]] /DAT - || LP-STORE/ACC + II |PRO + ]l I_NON-PRO =ll Figure 10: LP-STORE of ihn zusteckt In this case, the value of the RIGHT feature of the argument einem Spion is not unifiable with the LP- STORE of the head projection ihn zusteckt because the feature PRO has two different atoms (+ and -) as its value. This is an example of a violation of an LP constraint leading to unification failure. In the next section, we show how LP-STOREs are manipulated. 
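As an illustration of how such a violation surfaces as plain unification failure, the following Prolog sketch recasts the example in first-order terms. The grammar itself is written in STUF, so the lp/5 term representation and the predicate names below are our own simplification; the five argument positions stand for the features NOM, DAT, ACC, PRO and NON-PRO, with values +, - or left unbound.

    % Context restrictions of the dative NP "einem Spion":
    % no accusative to its left, no nominative or pronominal constituent to its right.
    left_restriction(einem_spion,  lp(_, _, -, _, _)).    % ACC -
    right_restriction(einem_spion, lp(-, _, _, -, _)).    % NOM -, PRO -

    % LP-STORE of "einen Brief zusteckt": an accusative, non-pronominal constituent.
    lp_store(einen_brief_zusteckt, lp(-, -, +, -, +)).

    % LP-STORE of "ihn zusteckt": an accusative pronoun.
    lp_store(ihn_zusteckt, lp(-, -, +, +, -)).

    % An argument is combined as the left neighbour of a projection only if the
    % argument's RIGHT restriction unifies with the projection's LP-STORE.
    combine_left(Arg, Proj) :-
        right_restriction(Arg, Restriction),
        lp_store(Proj, Restriction).

    % ?- combine_left(einem_spion, einen_brief_zusteckt).   % succeeds
    % ?- combine_left(einem_spion, ihn_zusteckt).           % fails: PRO is + in the
    %                                                       % store, - in the restriction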
MANIPULATION OF THE LP-STORE Since information about constituents is added to the LP-STORE, it would be tempting to add this information by unification, and to leave the initial LP- STORE unspecified for all features. This is not possible because violation of LP constraints is also checked by unification. In the process of this unification, values for features are added that may lead to unwanted unification failure when information about a constituent is added higher up in the tree. Instead, the relation between the LP-STORE of a projection and the LP-STORE of its mother node is encoded in the argument that is added to the projection. In this way, the argument "changes" the LP-STORE by "adding information about itselff. Arguments there- fore have the additional features LP-IN and LP-OUT. When an argument is combined with a projection, the projection's LP-STORE is unified with the argument's LP-IN, and the argument's LP-OUT is the mother node's LP-STORE. The relation between LP-IN and LP-OUT is specified in the feature structure of the argument, as illustrated in figure 11 for the accusative pronoun ihn, which is responsible for changing figure 7 into figure 10. No matter what the value for the features ACC and PRO may be in the projection that the argument combines with, it is '+' for both features in the mother node. All other features are left unchanged 3. [NOM ~] ] /DARN / / t'P- N/ACCt] / / IPRO [ ] / / LNON-PRO ~]J / [NOM [i] ]1 |DAT~] 11 LP-OUT/ACC + II /PRO + II LNON-PRO / Figure 11 Note that only a %' is added as value for LP- relevant features in LP-OUT, never a '-'. In this way, only positive information is accumulated, while negative information is "removed". Positive information is never "removed". Even though an argument or adjunct constituent may have an LP-STORE, resulting from LP constraints that are relevant within the constituent, it is ignored when the constituent becomes argument or adjunct to some head. Our encoding ensures that LP constraints apply to all head domains in a given sentence, but not across head domains. It still remains to be explained how complex phrases that become arguments receive their LP-IN, LP-OUT, RIGHT and LEFT features. These are specified in the lexical entry of the head of the phrase, but they are ignored until the maximal projection of the head becomes argument or adjunct to some other head. They must, however, be passed on unchanged from the lexical head to its maximal projection. When 3Coreference variables are indicated by boxed numbers. [ ] is the feature structure that contains no information (TOP) and can be unified with any other feature structure. 204 the maximal projection becomes an argument/adjunct, they are used to check LP constrains and "change" the LP-STORE of the head's projection. Our method also allows for the description of head- initial and head-final constructions. In German, for example, we find prepositions (e.g. far), postpositions (e.g. halber) and some words that can be both pre- and postpostions (e.g. wegen). The LP-rules would state that a postposition follows everything else, and that a preposition precedes everything else. [PRE +] < [] [ ] < [POST +] Figure 12 The information about whether something is a preposition or a postposition is encoded in the lexical entry of the preposition or postposition. In the following figure, the LP-STORE of the lexical head contains also positive values. 
Figure 13: part of the lexical entry of a postposition [LP-STORE [pP~REST+]] Figure 14: part of the lexical entry of a preposition A word that can be both a preposition and a postposition is given a disjunction of the two lexical entries: POST - LP-STO [POST ÷Ill /LPRE - .ILl Figure 15 All complements and adjuncts encode the fact that there must be no preposition to their right, and no postposition to their left. LEFT [POST Figure 16 The manipulation of the LP-STORE by the features LP-IN and LP-OUT works as usual. The above example illustrates that our method of encoding LP constraints works not only for verbal domains, but for any projection of a lexical head. The order of quantifiers and adjectives in a noun phrase can be described by LP constraints. INTEGRATION INTO HPSG In this section, our encoding of LP constraints is incorporated into HPSG (Pollard & Sag 1987). We deviate from the standard HPSG grammar in the following respects: • The features mentioned above for the encoding of LP-constraints are added. • Only binary branching grammar rules are used. • Two new principles for handling LP-constraints are added to the grammar. Further we shall assume a set-valued SUBCAT feature as introduced by Pollard (1990) for the description of German. Using sets instead of lists as the values of SUBCAT ensures that the order of the complements is only constrained by LP-statements. In the following figure, the attributes needed for the handling of LP-constraints are assigned their place in the HPSG feature system. I- ,..,:,-,i,,,ti Ill cP-otrr[ I/l/ SVNSEM, LOC / L FTC ] /// / .RIGHT[] all LLP-STORE [ ] J] Figure 17 The paths SYNSEMILOCIHEADI{LP-IN,LP- OUT,RIGHT,LEFT} contain information that is relevant when the constituents becomes an argument/adjunct. They are HEAD features so that they can be specified in the lexical head of the constituent and are percolated via the Head Feature Principle to the maximal projection. The path SYNSEMILOCILP-STORE contains information about LP-relevant features contained in the projection dominated by the node described by the feature structure. LP-STORE can obviously not be a head feature because it is "changed" when an argument or adjunct is added to the projection. In figures 18 and 19, the principles that enforce LP-constraints are given 4. Depending on whether the head is to the right or to the left of the comple- ment/adjunct, two versions of the principle are dis- tinguished. This distinction is necessary because linear order is crucial. Note that neither the HEAD features of the head are used in checking LP constraints, nor the LP-STORE of the complement or adjunct. PHON append(N, l; ... [LP-STORE ~]] T [PHON ~l FLEFTFil ll H l"" LP'sT~LPE?7 [~J Head Complement/Adjunct Figure 18: Left-Head LP-Prineiple 4The dots (...) abbreviate the path SYNSEMILOCAL 205 PHON append(~],~)] ... [LP-STORE ~] J ( PHON [~] [ R ~ HEAD |LP-IN ri] III I • .. [LP-Otrr ~ll [PHON ~-] ] u>-s+oREt] JJ [... [L~-STORENJ Complement/Adjunct Head Figure 19: Right-Head LP-Prineiple In the following examples, we make use of the parametrized type notation used in the grammar formalism STUF (D6rre 1991). A parametrized type has one or more parameters instantiated with feature structures. The name of the type (with its parameters) is given to the left of the := sign, the feature structure to the right. In the following we define the parametrized types nom(X,Y), dat(X,Y), pro(X,Y), and non-pro(X,Y), where X is the incoming LP-STORE and Y is the outgoinl LP-STORE. 
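The chaining of parametrized types through LP-IN and LP-OUT can be pictured with a small Prolog analogue. This is only a sketch of the threading idea, not the STUF types themselves: it reuses the lp/5 term representation assumed above and shows just the LP-STORE arguments, omitting the LEFT and RIGHT restrictions that dat/2 and pro/2 also contribute.

    % dat(InStore, OutStore): set DAT to + in the outgoing LP-STORE and pass the
    % other LP-relevant features through unchanged.
    dat(lp(N, _, A, P, NP), lp(N, +, A, P, NP)).

    % pro(InStore, OutStore): set PRO to +, pass everything else through.
    pro(lp(N, D, A, _, NP), lp(N, D, A, +, NP)).

    % Lexical entry for "ihm": chain the two type constraints so that the effect of
    % both "changes" accumulates between LP-IN and LP-OUT.
    lex(ihm, LPIn, LPOut) :-
        dat(LPIn, LP1),
        pro(LP1, LPOut).

    % ?- lex(ihm, lp(-, -, -, -, -), Out).
    %    Out = lp(-, +, -, +, -)
    % The same result is obtained if pro/2 and dat/2 are applied in the other order,
    % mirroring the remark about Figure 25.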
"NOM [ ] ] "NOM + "] DAT[] / DAT[] / ) nom ACC[] /, ACC[] / := PROlT1 / PROr~ / NON-PRO~I] ~ON+RO711 CASE nom rs~s~l,..ocri.~.~r rDA T 1]]] t L t L~-'r [ACC-IIII Figure 20 /I-NoM[] ] rNo~rri -I //DAT [] / IDAT+ J dai[/ACC~I l,l~+cm |[PROlTI I [PROITI X LNON-PRO ~ t.NON-PRO I" rCASEd~ SYNSEMILOC [HEAD [LEFT I ACC- L [RIGHT I NOM - Figure 21 @w /I-NoMrrl -II-NoMIrl -IX //OATI'7"I / /DAT [] / / proll ACC [] l,l~m II:= l IPRO [ ] I IPRO + I | LNON+RO ml LNO~*"O IZll / [SYNSEmILOC [HEAO [LEEr I NON-PRO -]]] Figure 22 /[NOMI'7"I ] I-NOMI'rl ]\ /|DAT~ J |DAT ~ // non-prol/ACC [] ,/ACC~ //:= | [PRO I'a"l |PRO[] // tLNON-PRO[ ] LNON-PRO+J ] [SYNSEMII£)C [HEAD[RIGHT I PRO -]]] Figure 23 The above type definitions can be used in the definition of lexical entries. Since the word ihm, whose lexical entry 5 is given in figure 24, is both dative case and pronominal, it must contain both types. While the restrictions on the left and right context invoked by dat/2 and pro/2 can be unified 6, matters are not that simple for the LP-IN and LP-OUT features. Since their purpose is to "change" rather than to "add" information, simple unification is not possible. Instead, LP-IN of ihm becomes the in- coming LP-STORE of dat/2, the outgoing LP- STORE of daft2 becomes the incoming LP-STORE of pro/2, and the outgoing LP-STORE of pro/2 becomes LP-OUT of ihm, such that the effect of both changes is accumulated. ihm := LP-IN ri] [SYNSEMILOC [HEAD [LP_OUT ~ ^ ~fi],~b ^ p,o~,~ Figure 24: lexical entry for ihm After expansion of the types, the following feature structure results. Exactly the same feature structure had been resulted if dat/2 and pro/2 would have been exchanged in the above lexical entry I"! I'-'1 I ' 1 1 " 1 (go(W, 2[~) A dat(121, 3[~) ), because the effect of both is to instantiate a '+' in LP-OUT. - - - I-~o~iri I--- IDAT II / LP-IN/ACC [~ [ /PRO [ ] / L~oN-PRol3I i 1 |DAT + / SYNSEMILOC HEAD I..P-OUTiACC~] ] [PRO + / LNON-PRoITll I ]~lTr [ACC - -] / ~" LNON-PRO Riol-rr INOM -] - - -CASE dat Figure 25: expanded lexical entry for ihm 5Only the information which is relevant for the processing of LP constraints is included in this lexical entry. 6dat/2 means the type dat with two parameters. 206 The next figure shows the lexical entry for a non- pronominal NP, with a disjunction of three cases. Peter := [SYNSEMILOC [HEAD [LP'IN [~ ]]1 LLP-OUTNJJ ^ (nom(~,~]) v dat~,~]) v acc([~,[~))^ non-pro([2~,[3-b Figure 26 COMPILATION OF THE ENCODING As the encoding of LP constraints presented above is intended for processing rather than grammar writing, a compilation step will initialize the lexical entries automatically according to a given grammar including a separated list of LP-constraints. Consequently the violation of LP-constraints results in unification failure. For reasons of space we only present the basic idea. The compilation step is based on the assumption that the features of the LP-constraints are morphologically motivated, i.e. appear in the lexicon. If this is not the case (for example for focus, thematic roles) we introduce the feature with a disjunction of its possible values. This drawback we hope to overcome by employing functional dependencies instead of LP-IN and LP-OUT features. For each side of an LP-constraint we introduce boolean features. For example for [A: v] < [B: w] we introduce the features a_v and b_w. 
This works also for LP-constraints involving more than one feature, such as [PRO +, CASE acc] < [PRO -, CASE ...]. For encoding the possible combinations of values for the participating features, we introduce binary auxiliary features such as pro_plus_case_acc, because we need to encode that there is at least a single constituent which is both pronominal and accusative. Each lexical entry has to be modified as follows: 1. A lexical entry that can serve as the head of a phrase receives the additional feature LP-STORE. 2. An entry that can serve as the head of a phrase and bears LP-relevant information, i.e. a projection of it is subsumed by one side of some LP-constraint, has to be extended by the features LP-IN, LP-OUT, LEFT, RIGHT. 3. The remaining entries percolate the LP information unchanged by passing through the information via LP-IN and LP-OUT. The values of the features LEFT and RIGHT follow from the LP-constraints and the LP-relevant information of the considered lexical entry. The values of LP-STORE, LP-IN and LP-OUT depend on whether the considered lexical entry bears the information that is represented by the boolean feature (attribute A with value v for boolean feature a_v): if the entry bears the information, LP-STORE is +, LP-IN is TOP, and LP-OUT is +; if the entry does not bear the information, LP-IN is a new variable x and LP-OUT is a coreference to x. CONCLUSION We have presented a formal method for the treatment of LP constraints, which requires no addition to standard feature unification formalisms. It should be emphasized that our encoding only affects the compiled grammar used for processing. The linguist does not lose any of the descriptive means nor the conceptual clarity that an ID/LP formalism offers, yet gains an adequate computational interpretation of LP constraints. Because of the declarative specification of LP constraints, this encoding is neutral with respect to processing direction (parsing or generation). It does not depend on specific strategies (top-down vs. bottom-up), although, as usual, some combinations are more efficient than others. This is an advantage over the formalization of unification ID/LP grammars in Seiffert (1991) and the approach by Erbach (1991). Seiffert's approach, in which LP constraints operate over siblings, requires an addition to the parsing algorithm, by which LP constraints are checked during processing to detect violations as early as possible, and again after processing, in case LP-relevant information has been added later by unification. Erbach's approach can handle LP constraints in head domains by building up a list of constituents over which the LP constraints are enforced, but it also requires an addition to the parsing algorithm for checking LP constraints during as well as after processing. Our encoding of LP constraints does not require any particular format of the grammar, such as left- or right-branching structures. Therefore it can be incorporated into a variety of linguistic analyses. There is no need to work out the formal semantics of LP constraints because feature unification formalisms already have a well-defined formal semantics. Reape (1989) proposes a different strategy for treating partially free word order. His approach also permits the application of LP constraints across local trees. This is achieved by separating word order variation from the problem of building a semantically motivated phrase structure. Permutation across constituents can be described by merging the fringes (terminal yields) of the constituents using the operation of sequence union.
All orderings imposed on the two merged fringes by LP constraints are preserved in the merged fringe. Reape treats clause union and scrambling as permutation that does not affect constituent structure. Although we are intrigued by the elegance and descriptive power of Reape's approach, we keep our bets with our more conservative proposal. The main problem we see with Reape's strategy is the additional 207 burden for the LP component of the grammar. For every single constituent that is scrambled out of some clause into a higher clause, the two clauses need to be sequence-unioned. A new type of LP constraints that refer to the position of the constituents in the phrase or dependency structure is employed for ensuring that the two clauses are not completely interleaved. Hopefully future research will enable us to arrive at better judgements on the adequacy of the different approaches. Pollard (1990) proposes an HPSG solution to German word order that lets the main verb first combine with some of its arguments and adjuncts in a local tree. The resulting constituent can be fronted. The remaining arguments and adjuncts are raised to the subcategorization list 7 of the auxiliary verb above the main verb. Yet, even if a flat structure is assumed for both the fronted part of the clause and the part remaining in situ as in (Pollard 1990), LP constraints have to order major constituents across the two parts. For a discussion, see Uszkoreit (1991b). Uszkoreit (1991b) applies LP principles to head domains but employs a finite-state automaton for the encoding of LP constraints. We are currently still investigating the differences between this approach and the one presented here. Just as most other formal appraoches to linear pre- cedence, we treat LP-rules as absolute constraints whose violation makes a string unacceptable. Sketchy as the data may be, they suggest that violation of certain LP-eonstraints merely makes a sentence less acceptable. Degrees of acceptability are not easily captured in feature structures as they are viewed today. In terms of our theory, we must ensure that the unification of the complement's or adjunct's left or right context restriction with the head's LP-STORE does not fail in case of a value clash, but rather results in a feature structure with lower acceptability than the structure in which there is no feature clash. But until we have developed a well-founded theory of degrees of acceptability, and explored appropriate formal means such as weighted feature structures, as proposed in (Uszkoreit 1991a), we will either have to ignore order- ing principles or treat them as absolute constraints. REFERENCES [DOrre 1991] Jochen DOrre. The Language of STUF. In: Herzog, O. and Rollinger, C.-R. (eds.): Text Understanding in LILOG. Springer, Berlin. [Erbach 1991] Gregor Erbach. A flexible parser for a linguistic experimentation environment. In: Herzog, O. and Rollinger, C.-R. (eds.): Text Understanding in LILOG. Springer, Berlin. 7Actually, in Pollard's proposal the subcat feature is set-valued. [Gazdar & PuUum 1982] Gerald Gazdar, G. K. Pullum. Generalized Phrase Structure Grammar. A Theoretical Synopsis. Indiana Linguistics Club, Bloomington, Indiana. [Gazdar et al. 1985] Gerald Gazdar, Ewan Klein, G. K. Pullum, Ivan Sag. Generalized Phrase Structure Grammar. Basil Blackwell, Oxford, UK [Joshi 1987] A. K. Joshi. Word-Over Variation in Natural Language Generation. In: Proceedings of AAAI-87, 550-555 [Kaplan & Zaenen 1988] R. M. Kaplan, A. Zaenen. 
Functional Uncertainty and Functional Precedence in Continental West Germanic. In: H. Trost (ed.), Proceedings of 4. 0sterreichische Artificial-InteUigence-Tagung. Springer, Berlin. [Kay 1985] Martin Kay. Parsing in Functional Unification Grammar. In: D. Dowty, L. Karttunen and A. Zwicky (eds.), Natural Language Parsing. Cambridge University Press, Cambidge, UK. [Pollard 1990] Carl Pollard. On Head Non-Movement. In: Proceedings of the Symposium on Discontinuous Constituency, Tilburg, ITK. [Pollard & Sag 1987] Carl Pollard, Ivan Sag. Information-based syntax and semantics. Vol. 1: Fundamentals. CSLI Lecture Notes No. 13, Stanford, CA. [Reape 1989] Mike Reape. A Logical Treatment of Semi-Free Word Order and Bounded Discontinuous Constituency. In: Proceedings of the 4th Meeting of the European Chapter of the ACL, Manchester, UK. [Seiffert 1991] Roland Seiffert. Unification-ID/LP Grammars: Formalization and Parsing. In: Herzog, O. and Rollinger, C.-R. (eds.): Text Understanding in LILOG. Springer, Berlin. [Uszkoreit 1986] Hans Uszkoreit. Linear Precedence in Discontinuous Constituents: Complex Fronting in German. CSLI Report CSLI-86-47. Stanford, CA. [Uszkoreit 1991a] Hans Uszkoreit. Strategies for Adding Control Information to Declarative Grammars. Proceedings of ACL '91, Berkeley. [Uszkoreit 1991b] Hans Uszkoreit. Linear Prededence in Head Domains. Workshop on HPSG and German, SaarbriJcken, FRG (Proceedings to be published) 208
A UNIFICATION-BASED SEMANTIC INTERPRETATION FOR COORDINATE CONSTRUCTS Jong C. Park University of Pennsylvania Computer and Information Science 200 South 33rd Street Philadephia, PA 19104-6389 USA Internet: park@line, cis. upenn, edu Abstract This paper shows that a first-order unification- based semantic interpretation for various coordi- nate constructs is possible without an explicit use of lambda expressions if we slightly modify the standard Montagovian semantics of coordination. This modification, along with partial execution, completely eliminates the lambda reduction steps during semantic interpretation. 1 Introduction Combinatory Categorial Grammar (CCG) has been offered as a theory of coordination in nat- ural language (Steedman [1990]). It has usually been implemented in languages based on first or- der unification. Moore [1989] however has pointed out that coordination presents problems for first- order unification-based semantic interpretation. We show that it is possible to get over the problem by compiling the lambda reduction steps that are associated with coordination in the lexicon. We show how our first-order unification handles the following examples of coordinate constructs. (1.1) Harry walks and every farmer walks. (1.2) A farmer walks and talks. (1.3) A farmer and every senator talk. (1.4) Harry finds and a woman cooks a mushroom. (1.5) Mary gives every dog a bone and some policeman a flower. We will first start with an illustration of why standard Montagovian semantics of coordination cannot be immediately rendered into a first-order 209 unification strategy. The lexicon must contain multiple entries for the single lexical item "and", since only like categories are supposed to conjoin. For example, the lexical entry for "and" in (1.1) specifies the constraint that the lexical item should expect on both sides sentences to give a sentence. Moore [1989] predicts that a unification-based semantic interpretation for sentences which in- volve for example noun phrase coordination won't be possible without an explicit use of lambda expressions, though there are cases where some lambda expressions can be eliminated by di- rectly assigning values to variables embedded in a logical-form expression. The problematic exam- ple is shown in (1.6), where proper noun subjects are conjoined. (1.6) john and bill walk. The argument is that if we do not change the se- mantics of "john" from j to AP.P(j), where P is a second order variable for property in the Montago- vian sense 1 , then the single predicate AX. walk(X) should accommodate two different constants j and b in a single variable X at the same time. Since the unification simply blocks in this case, the ar- gument goes, we need to use higher order lambda expressions such as AP.P(j) or AP.P(b), which when conjoined together, will yield semantics for e.g. "john and bill" as ,~P.(P(j) ~ P(b)) . Combined finally with the predicate, this will re- sult in the semantics (1.7), after lambda reduction. (1.7) walk(j) & walk(b) 1Montague [1974]. )~pVp(j) to be exact, taking in- tensionality into account. The semantics of the predi- cate "walks" will then be (^AX.walk(X)). Although Moore did not use quantified noun phrases to illustrate the point, his observation gen- eralizes straightforwardly to the sentence (1.3). In this case, the semantics of "and", "every" and "some" (or "a") will be (1.8) a, b, and c, respec- tively. (1.8) (a) AO.AR.AP.(Q(P) • R(P)) (b) AS. AP'. forall(X, S (X) =>P' (X)) (c) AS.AP". 
exists(X,S(X)~P' ' (X)) Thus, after four lambda reduction steps, one for each of Q, R, P' and P' ', the semantics of "a farmer and every senator" will be AP.(exists(X,faxmer(X)RP(X)) forall(X,senator(X)=>P(X))), as desired. Moore's paper showed how lambda reduction could be avoided by performing lambda reduction steps at compile time, by utilizing the lexicon, in- stead of doing them at run time. Consider again (1.8a). The reason why this formulation requires foursubsequent lambda reduction steps, not three, is that the property P should be applied to each of the conjuncts, requiring two separate lambda reduction steps. Suppose that we try to eliminate these two lambda reduction steps at compile time by making the argument of the property P explicit in the lexicon, following the semantics (1.9). (1.9) AQ.AR.AP.(Q(AX.P(X)) • R(AX.P(X))) The first-order variable X ranges over the set of individuals, and the hope is that after lambda re- duction it will be bound by the quantifiers, such as forall, embedded in the expressions denoted by the variables Q and R. Since the same variable is used for both constructs, however, (1.9) works only for pairs of quantified noun phrases, which don't provide constants, but not for pairs involv- ing proper nouns, which do provide constants. In- cidentally, this problem is particular to a unifica- tion approach, and there is nothing wrong with the semantics (1.9), which is equivalent to (1.8a). This unification problem cannot be avoided by having two distinct variables Y and Z as in (1.10) either, since there is only one source for the predicate property for the coordinate noun phrases, thus there is no way to isolate the argument of the pred- icate and assign distinct variables for it at compile time. (1.10) AQ.AR.AP.(Q(XY.P(Y)) ~ R(XZ.P(Z))) 210 The way we propose to eliminate the gap be- tween (1.9) and (1.10) is to introduce some spuri- ous binding which can always be removed subse- quently. The suggestion then is to use (1.11) for the semantics of "and" for noun phrase conjunc- tion. (1.11) Semantics of"and"for NP Conjunction: AQ.AR.AP.(Q(AY.oxists(X,X=Y~P(X))) R(AZ.exists(X,X=Z~P(X)))) This satisfies, we believe, the two requirements, one that the predicate have the same form, the other that the variables for each conjunct be kept distinct, at the same time. The rest of the lambda expressions can be eliminated by using the notion of partial execution (Pereira & Shieber [1987]). Details will be shown in Section 3, along with some "more immediate but faulty" solutions. It is sur- prising that the same idea can be applied to some fairly complicated examples as (1.5), and we be- lieve that the solution proposed is quite general. In order to show how the idea works, we use a first-order Montagovian Intensional Logic (Jowsey [1987]; Jowsey [1990]) for a semantics. We apply the proposal to CCG, but it could equally well be applied to any lexicon based grammar formal- ism. We explain briefly how a CCG works in the first part of Section 2. As for the semantics, noth- ing hinges on a particular choice, and in fact the code we show is devoid of some crucial features of Jowsey's semantics, such as indices for situ- ations or sortal constraints for variable binding. We present the version of Jowsey's semantics that we adopt for our purposes in the second part of Section 2, mainly for completeness. 
In Section 3, each of the cases in (1.1) through (1.5), or varia- tions thereof, is accounted for by encoding lexical entries of "and", although only (1.3) and (1.5) de- pend crucially on the technique. We have a few words for the organization of a semantic interpretation system we are assum- ing in this paper. We imagine that it consists of two levels, where the second level takes a scope- neutral logical form to produce every possible, gen- uinely ambiguous, scoping possibilities in paral- lel and the first level produces this scope-neutral logical form from the source sentence. We as- sume that our second level, which we leave for future research, will not be very different from the one in Hobbs & Shieber [1987] or Pereira &: Shieber [1987]. The goal of this paper is to show how the scope-neutral logical forms are de- rived from natural language sentences with co- ordinate constructs. Our "scope-neutral" logical form, which we call "canonical" logical form (CLF), syntactically reflects derivation-dependent order of quantifiers since they are derived by a derivation- dependent sequence of combination. We empha- size that this derivation-dependence is an artifact of our illustrative example, and that it is not an inherent consequence of our technique. 2 Background Formalisms A Combinatory Categorial Grammar The minimal version of CCG we need to process our examples contains four reduction rules, (2.1) through (2.4), and two type raising rules, (2.5) and (2.6), along with a lexicon where each lexical item is assigned one or more categories. For the reasons why we need these, the reader is referred to Steedman [1990]. (2.1) Function Application (>): X/Y ¥ => X (2.2) Function Application (<): Y X\Y => X (2.3) Function Composition (>B): X/Y Y/Z => X/Z 2 (2.4) Function Composition (<B): Y\Z X\Y => XXZ (2.5) Type Raising, Subject (>T): np => s/(sknp) (2.6) Type Raising, Backward (<T): np => X\(X/np) The present fragment is restricted to the basic categories n, np and s. 3 Derived categories, or categories, are recursively defined to be basic cat- egories combined by directional symbols (/or \). Given a category X/Y or X\Y, we call X the range category and Y the domain category. Parentheses may be used to change the left-associative default. The semantics part to be explained shortly, (2.7a) through (2.7e) show examples of a common noun, a proper noun, a quantifier, an intransitive verb, a sentential conjunction, respectively. (2.7) Sample Lexicon (a) cat(farmer, n:X'farmer(X)). (b) cat(harry, np:AI'(h'B)'B). (c) cat(every, np: (X'A)'(X'B)'forall(X,A=>B) /n:X'A). 2In Steedman [1990], this rule is conditioned by Z s\np in order to prevent such constructs as "*[Harry] but [I doubt whether Fred] went home" or "*[I think that Fred] and [Harry] went home." 3For simplicity, we do not show variables for gen- der, case, tense, and number. Larger fragment would include pp, etc. 211 (d) cat (walks, s : S\np: (X'A)" (X'walk(X)) "S). (e) cat(and, (s: (St ~ S2)\s:S1)/s:S2),4 A First-Order Montague Semantics In this section, we will focus on describing how Jowsey has arrived at the first-order formalism that we adopt for our purposes, and for further details, the reader is referred to Jowsey [1987] and Jowsey [1990]. The reader can safely skip this sec- tion on a first reading since the semantics we use for presentation in Section 3 lacks many of the new features in this section. 
Montague's PTQ analysis (Dowty, Wall & Pe- ters [1981]) defines an intensional logic with the basic types e, t and s, where e is the type of en- tities, t the type of truth values and s the type of indices. Derived types <a,b> and <s,a> are re- cursively defined over the basic types. A name, which is of type e, denotes an individual; individ- ual concepts are names relativized over indices, or functions from indices to the set of individuals. In- dividual concepts are of type <s, e>. A predicate denotes a set of individuals, or a (characteristic) function from the set of individuals to truth val- ues. Properties are intensional predicates, or func- tions from indices to the characteristic functions. Properties are of type <s,<e,t>>, or <e,<s,t>>. A formula denotes a truth value, and propositions are intensional formulas, thus of type <s,t>. By excluding individual concepts, we can en- sure that only truth values are relativized over in- dices, and thus a modal (omega-order) logic will suffice to capture the semantics. For this purpose, Jowsey defines two basic types e and o, where o corresponds to the type <s,t>, and then he de- fines derived types <a,b>, where a and b range over basic types and derived types. The logic is then made into first-order by relying on a fixed number of sorts and eliminating recursively de- fined types. These sorts include e, s, o, p and q, which correspond to the types e, s, <s,t>, <e,<s,t>> and <<e,<s,t>>,<s,t>> respectively in an omega-order logic. For a full exposition of the logic, the reader is referred to Jowsey [1990]. For our presentation, we 4The category (s\s)/s has the potential danger of allowing the following construct, if combined with the rule <B: "*Mary finds a man who [walks]s\n p [and he taIks]s\s." The suggestion in Steedman [1990] is to add a new pair of reduction rules, X [X]~ ffi> X and conj X => [X]~, together with the category of "and" as conj. Thus, the category of "and harry talks" is now [s]t~, blocking the unwanted combination. will simplify the semantics and drop intensional- ity altogether. We also drop the sortal constraint, since our examples do not include belief operators and hence the only variables left are of sort e. 3 A First-Order Unification We will follow the standard technique of combin- ing the syntactic information and the semantic information as in (3.1), where up-arrow symbols (,-,)5 are used to give structures to the seman- tic information for partial execution (Pereira & Shieber [1987]), which has the effect of perform- ing some lambda reduction steps at compile time. (3.1) Basic Categories (a) n: (de'do) (b) rip: (de'do)" (de'ro) "So (c) The term do in (3.1a) and (3.1b) encodes domain constraint for the variable de. Likewise, the term ro in (3.1b) specifies range constraint for de. The term So in (3.1b) and (3.1c) encodes the sentential constraint associated with a sentence. In order to avoid possible confusion, we shall henceforth call categories without ~emantic information "syntac- tic" categories. In this section, we will develop lexical entries for those coordinate constructs in (1.1) through (1.5), or variations thereof. For each case, we will start with "more immediate but faulty" solutions and present what we believe to be the correct solution in the last. (For those who want to skip to the correct lexical entries for each of the cases, they are the ones not commented out with %.) We have seen the lexical entry for sentential conjunction in (2.7d). 
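Before turning to the coordinate entries themselves, a small illustration of what the structured signs buy may be useful. The sketch below is our own hand-simulation of how "every farmer walks" is assembled from entries shaped like (2.7a), (2.7c) and (2.7d): the noun fills the quantifier's domain constraint and the verb fills its range constraint, so no beta reduction happens at run time. The dictionary-and-closure encoding merely stands in for the Prolog terms built with the pairing operator (rendered as ' in the extracted entries).

    # Hand-simulation of sign combination: the quantifier's forall template is
    # built once and its domain and range slots are simply filled in.  This is
    # an illustration, not the shape of the system's Prolog terms.

    def farmer(x):                       # (2.7a)  n : X^farmer(X)
        return f"farmer({x})"

    def every(noun):                     # (2.7c)  np : (X^A)^(X^B)^forall(X,A=>B) / n
        domain = noun("X")
        return {"var": "X",
                "sentence": lambda range_: f"forall(X,{domain}=>{range_})"}

    def walks(np_sign):                  # (2.7d)  s : S \ np : (X^A)^(X^walk(X))^S
        return np_sign["sentence"](f"walk({np_sign['var']})")

    print(walks(every(farmer)))          # forall(X,farmer(X)=>walk(X))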
The lexical entry for predicate conjunction can be similarly encoded, as in (3.2). (3.2) Lexical Entry for Predicate Co~unct~n cat(and, ((s:S\np:A'(X*(B1 ~ B2))'S) \(s:Slknp:A'(X'BI)'SI)) /(s:S2knp:A'(X'B2)'S2)). When the conjoined predicates are combined with the subject noun phrase, the subject NP provides only the domain constraint, through A in the first line. The range constraints in the last two NP categories guarantee that B1 and B2 will bear the same variable X in them, so that they can be safely SNot to be confused with Montague's ha~ek sym- bol, '^' 212 put as the range constraint of the first NP cate- gory. The CLF for (1.2) from (3.2) is shown in (3.3). (3.3) exists(Xl, farmer(Xl)~(walk(Xl)~ talk(Xl))) Let us turn to noun phrase coordination, e.g., (1.3). The first try, on the model of predicate con- junction, would be: (3.4) Lexical Entry for NP Conjunction: %cat(and, (np:A'(X'D)'(B & C) % \rip :AI" (Y'D) "B) % /rip: A2" (Z'D) "C). The intention is to collect the two domain con- straints via A1 and A2, to get the range constraint from D in the first line, and then to combine them by joining the two sentential constraints B and C of the domain categories. This idea however does not work, since the variables ¥ and Z do not ap- pear in the range constraint D. As a result, (3.4) will give the following ill-formed CLF for (1.3). exists (Xl, farmer (X i) &talk (X3)) Rforall (X2, senator (X2) =>talk (X3)) We therefore need to use distinct variables in place of D for the two range constraints which will have the same predicate symbol for their range cate- gories. Using the Prolog predicate univ ('=.. '), we can correct (3.4) as follows: 6 (3.5) Lexical Entry for NP Conjunction: %cat(and, (np:A'(X'D)'(B & C) % \np : AI" (Y'B1) -B) /rip: A2" (Z'C1)'C) :- D =.. [Pred, X], % B1 =.. [Pred, Y], C1 =.. [Pred, Z]. This is an explicit case of a first-order simulation of second order variables. Unfortunately, this does not work, for several reasons7 First, this handles predicates of arity 1 only, and we need to know the type of each argument if we want to provide a different category for each predicate of different arity. Second, this can not be combined with pred- icate coordination, for example, such as "john and 6D •.. [P,X] succeeds if D is unifiable with P(X). 7One implementation-dependent reason is that the Prolog requires at least one of the two variables V and Fred to be already instantiated for the univ to work. This can not be expected when the noun phrase con- junction is being processed, since we don't yet know what predicate(s) will follow. a woman walk and talk," or some complex verbs that may require several predicates, such as "be- lieves", since it assumes only one predicate for the range constraint. The solution we propose is to use the revised semantics of "and" in (1.11) instead. That is, we expect (3.6) from (1.3): (3.6) Proposed Semantics of (1.3): exists (Xl, farmer(Xl) ~(exists (X2, (X2=Xl)&talk (X2)) ) ) &f orall(X3, senat or (X3) =>(exists (X2, (X2=X3) ~tt a]k (X2)) ) ) We need to distinguish the variable X2 in the second line from the variable X2 in the fourth line, via something like c~ conversion, since in the present form, the Prolog will consider them as the same, while they are under distinct quantifiers. 
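The renaming step alluded to here is mechanical. A possible sketch, over an invented nested-tuple encoding of formulas rather than the system's actual CLF terms, is:

    import itertools

    # Rename every bound variable apart so that, e.g., the two X2's in (3.6)
    # end up distinct.  Formulas are nested tuples such as
    # ("exists", "X2", ("&", ("=", "X2", "X1"), ("talk", "X2"))).

    counter = itertools.count(1)

    def rename(term, env=None):
        env = env or {}
        if isinstance(term, str):
            return env.get(term, term)
        if term[0] in ("exists", "forall"):
            fresh = f"V{next(counter)}"
            return (term[0], fresh, rename(term[2], {**env, term[1]: fresh}))
        return tuple(rename(t, env) for t in term)

    f = ("&",
         ("exists", "X2", ("=", "X2", "X1")),
         ("exists", "X2", ("talk", "X2")))
    print(rename(f))
    # ('&', ('exists', 'V1', ('=', 'V1', 'X1')), ('exists', 'V2', ('talk', 'V2')))

With every quantifier given a fresh variable in this way, the two occurrences of X2 in (3.6) can no longer be confused.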
In fact, since we are separating the semantic interpretation into two levels, we can further process the CLF at the second semantic interpretation level to eliminate those spurious bindings such as exists(X,(X=u)&u), along with variable renaming, to derive the logical form (3.7) from (3.6):

(3.7) exists(X1,farmer(X1)&talk(X1))
      &forall(X3,senator(X3)=>talk(X3))

(3.8) produces the CLF (3.6) for (1.3).

(3.8) Lexical Entry for NP Conjunction:
cat(and, (np:A'(X'D)'(B & C)
          \np:A1'(Y'(exists(X,(X=Y)&D)))'B)
          /np:A2'(Z'(exists(X,(X=Z)&D)))'C).

The reason why we are able to maintain two different forms of range constraints in the two domain categories is that the only place that will unify with the actual range constraint, i.e., the predicate, is the range constraint part of the range category. We note in passing that Jowsey provided yet another approach to noun phrase coordination, a generalized version of his idea, as shown below.

(3.8a) Lexical Entry for NP Conjunction:
cat(and, (np:(X'A)'(X'D)'B
          \np:(Y'A1)'(Y'C)'B)
          /np:(Z'A2)'(Z'forall(X,(X=Y v X=Z)=>D))'C).

For example, (3.8a) will give the following semantics for (1.3).

exists(X1,farmer(X1)&forall(X2,senator(X2)
  =>forall(X3,(X3=X1 v X3=X2)=>talk(X3))))

This approach has its limits, however, as indicated in footnote 8.

We now turn to some of the non-standard constituent coordinations. First, consider (1.4), which is an instance of Right Node Raising (RNR). The CCG syntactic category of the conjunction "and" in this case is (C\C)/C, where C is s/np. (3.9) shows one derivation, among others, for (1.4). The syntactic category of "finds" is (s\np)/np.

(3.9) One derivation for (1.4):
      harry : np                      >T   s/(s\np)
      harry finds                     >B   s/np
      a woman : np                    >T   s/(s\np)
      a woman cooks                   >B   s/np
      and a woman cooks               >    (s/np)\(s/np)
      harry finds and a woman cooks   <    s/np
      harry finds and a woman cooks a mushroom   >   s

Assuming that the category of "finds" is as follows,

(3.10) Lexical Entry for "finds":
cat(finds, ((s:S\np:A1'(X'A)'S)
            /np:A2'(Y'find(X,Y))'A)).

here is the first try for the RNR "and."

(3.11) Lexical Entry for RNR Conjunction:
%cat(and, ((s:S/np:A'(X'(B1&B2))'S1)
%          \(s:S/np:A'(X'B1)'S1))
%         /(s:S3/np:A'(X'B2)'S2)).

For example, (3.11) will produce the CLF (3.12) for the sentence "harry finds and mary cooks a mushroom."

(3.12) exists(X1,mushroom(X1)&find(h,X1)
       &cook(m,X1))

However, this works only for pairs of proper nouns. For example, for the sentence "every man finds and a woman cooks a mushroom," it will give the ill-formed CLF (3.13), where the domain constraint for the noun phrase "a woman" is gone and X3 is therefore unbound. This happens because the sentential constraint S2 is not utilized for the final sentential constraint.

(3.13) %forall(X1,man(X1)=>exists(X2,
       %mushroom(X2)&find(X1,X2)
       %&cook(X3,X2)))

Putting the two sentential constraints S1 and S2 together as follows does not work at all, since the relation between S and S0 is completely undefined, unlike the ones between S1 and B1 and between S2 and B2.

%cat(and, ((s:S/np:A'(X'(S1&S2))'S0)
%          \(s:S1/np:A1'(X'B1)'B1))
%         /(s:S2/np:A2'(X'B2)'B2)).

This problem is corrected in (3.14), which will produce the CLF (3.15) for (1.4):

(3.14) Lexical Entry for RNR Conjunction:
cat(and, ((s:S/np:A'(X'(S1&S2))'S)
          \(s:S1/np:A1'(X'B1)'B1))
         /(s:S2/np:A2'(X'B2)'B2)).

(3.15) Semantics of (1.4) from (3.14):
exists(X1,mushroom(X1)&find(h,X1)
  &exists(X2,woman(X2)&cook(X2,X1)))

(1.5) shows another case of non-standard constituent coordination, which we will call an instance of Left Node Raising (LNR).
The syntactic category of "and" for LNR is (C\C)/C, where C is (s\np)\(((s\np)/np)/np). (3.16) shows one syntactic derivation for (1.5). The syntactic category of "gives" is ((s\np)/np)/np.

(3.16) One derivation for (1.5), fragment:
       every dog          <T   ((s\np)/np)\(((s\np)/np)/np)
       a bone             <T   (s\np)\((s\np)/np)
       every dog a bone   <B   (s\np)\(((s\np)/np)/np)

Again, we assume that the category of "gives" is:

(3.17) Lexical Entry for "gives":
cat(gives, (((s:S1\np:A1'(X'S2)'S1)
             /np:A2'(Y'give(X,Z,Y))'B)
            /np:A3'(Z'B)'S2)).

(3.18) shows the first try for the lexical entry.8

(3.18) Lexical Entry for LNR Conjunction:
%cat(and, (((s:_\np:_)
%           \(((s:S\np:(X'A)'(X'(S4 & S6))'S)
%              /np:A1'(Y'B)'S1)
%             /np:A2'(Z'S1)'S2))
%          \((s:_\np:_)
%            \(((s:_\np:_)
%               /np:A3'(Y'B)'S3)
%              /np:A4'(Z'S3)'S4)))
%         /((s:_\np:_)
%           \(((s:_\np:_)
%              /np:A5'(Y'B)'S5)
%             /np:A6'(Z'S5)'S6))).

8In this case, we can no longer use the disjunctive technique such as forall(X1, (X1= v X1= )=>give( ,X1, )) for the CLF, since X1 is now a pair. The problem gets worse when the conjoined pairs do not have the same type of quantifiers, as in (1.5).

It gives the CLF (3.19) for (1.5):

(3.19) Semantics of (1.5) from (3.18):
forall(X1,dog(X1)=>exists(X2,bone(X2)
  &give(m,X1,X2)))
&exists(X1,policeman(X1)
  &exists(X2,flower(X2)&give(m,X1,X2)))

Unfortunately, (3.18) favors quantified nouns too much, so that when any proper noun is involved in the conjunction, the constant for the proper noun will appear incorrectly in the two sentential constraints at the same time. It seems that the only way to resolve this problem is to create four variables, Y1, Y2, Z1 and Z2, at the semantics level, similar to the idea in (1.11). (3.20) implements this proposal.

(3.20) Lexical Entry for LNR Conjunction:
cat(and, (((s:_\np:_)
           \(((s:S\np:(X'A)'(X'(S4 & S6))'S)
              /np:A1'(Y'B)'S1)
             /np:A2'(Z'S1)'S2))
          \((s:_\np:_)
            \(((s:_\np:_)
               /np:A3'(Y1'(exists(Y,(Y=Y1)&B)))'S3)
              /np:A4'(Z1'(exists(Z,(Z=Z1)&S3)))'S4)))
         /((s:_\np:_)
           \(((s:_\np:_)
              /np:A5'(Y2'(exists(Y,(Y=Y2)&B)))'S5)
             /np:A6'(Z2'(exists(Z,(Z=Z2)&S5)))'S6))).

(3.20) will give the CLF (3.21) for (1.5).

(3.21) Semantics of (1.5) from (3.20):
forall(X1,dog(X1)=>exists(X2,X2=X1
  &exists(X3,bone(X3)&exists(X4,X4=X3
  &give(m,X2,X4)))))
&exists(X1,policeman(X1)&exists(X2,X2=X1
  &exists(X3,flower(X3)&exists(X4,X4=X3
  &give(m,X2,X4)))))

Using the technique of eliminating spurious bindings, (3.21) may be replaced by the logical form (3.22):

(3.22) forall(X1,dog(X1)=>exists(X3,bone(X3)
       &give(m,X1,X3)))
       &exists(X1,policeman(X1)&exists(X3,
       flower(X3)&give(m,X1,X3)))

In addition to this, (3.20) gives the CLF (3.23) for (3.24),

(3.23) exists(X1,X1=j&exists(X2,bone(X2)
       &exists(X3,X3=X2&give(m,X1,X3))))
       &exists(X1,X1=b&exists(X2,flower(X2)
       &exists(X3,X3=X2&give(m,X1,X3))))

(3.24) mary gives john a bone and bill a flower.

for which no CLF could be derived if we were using (3.18). This completes our demonstration of the technique.

The natural question at this point is how many lexical entries we need for the conjunct "and". If natural language makes every possible category conjoinable, the number of entries should be infinite, since function composition can grow categories unboundedly, if it can grow them at all. We predict that in natural language we can limit the conjunction arity to n, where n is the maximum arity in the lexicon.
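The "technique of eliminating spurious bindings" used to pass from (3.21) to (3.22), and earlier from (3.6) to (3.7), can be pictured as a small rewrite on the logical form. The sketch below is ours, over the same invented nested-tuple encoding used for the renaming sketch, and is not the system's second-level interpreter.

    # Rewrite any subterm of the form exists(V, (V = t) & Body) to Body with t
    # substituted for V.

    def subst(term, var, val):
        if term == var:
            return val
        if isinstance(term, tuple):
            return tuple(subst(t, var, val) for t in term)
        return term

    def simplify(term):
        if not isinstance(term, tuple):
            return term
        term = tuple(simplify(t) for t in term)
        if (len(term) == 3 and term[0] == "exists"
                and isinstance(term[2], tuple) and len(term[2]) == 3
                and term[2][0] == "&"
                and isinstance(term[2][1], tuple)
                and term[2][1][:2] == ("=", term[1])):
            return simplify(subst(term[2][2], term[1], term[2][1][2]))
        return term

    # exists(X2, X2=X1 & talk(X2)), as it occurs inside (3.6)
    clf = ("exists", "X2", ("&", ("=", "X2", "X1"), ("talk", "X2")))
    print(simplify(clf))          # ('talk', 'X1')

Applied recursively inside (3.21), this rewrite removes the X2=X1 and X4=X3 bindings and, in this toy encoding, should yield exactly (3.22).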
4 Conclusion The system described in this paper is implemented in Quintus Prolog. We expect that the approach can be extended to any lexicon-based grammar of the same power as CCG if it provides means for term unification. The reason we choose to eliminate all the lambda expressions is that it allows uniform treat- ment within first-order unification, since Jowsey's results suggest that in other respects natural lan- guage semantics can be characterized in a first- order logic. As an alternative, we could choose to enforce uniform treatment within second-order unification, using the idea for example in Na- dathur & Miller [1988]. Although we leave this possibility for future research, we believe that this option might turn out to be more appropriate in terms of elegance of the approach. And the result- ing conceptual clarity might be exploited to design a schema for generating these entries for "and". the content. I am also very grateful to Dr. Mark Johnson, who suggested, and took pains of going over in detail, another way of presenting the thesis, that resulted in the material in the introduction section. All errors are however entirely mine. The author was supported by the ARO grant DAAL03- 89-C-0031PRI. References David R. Dowty, Robert E. Wall & Stanley Peters [1981], Introduction to Montague Seman- tics, D. Reidel Publishing Company. Jerry R. Hobbs ~ Stuart M. Shieber[January- June 1987], "An Algorithm for Generat- ing Quantifier Scopings," Computational Linguistics 13, 47-63. Einar Jowsey[1987], "Montague Grammar and First Order Logic," Edinburgh Work- ing Papers in Cognitive Science: Catego- rim Grammar, Unification Grammar and Parsing 1, 143-194. Einar Jowsey [1990], Constraining Montague Grammar for Computational Applications, Doctoral Dissertation, Department of AI, Univer- sity of Edinburgh. Richard Montague [1974], in Forma/ Philosophy, Richmond H. Thomason, ed., Yale Uni- versity Press. Robert C. Moore[1989], "Unification-Based Se- mantic Interpretation," Proceedings of the ACL. Gopalan Nadathur & Dale Miller[1988], "An Overview of A-Prolog," Proceedings of the Fifth International Logic Programming Conference. Fernando C.N. Pereira & Stuart M. Shieber [1987], Prolog and NaturM-Language Ananlysis, CSLI Lecture Notes Number 10. Mark J. Steedman [April 1990], "Gapping as Con- stituent Coordination," Linguistics and Philosophy 13, 207-263. 5 Acknowledgements Many thanks are due to Dr. Mark Steedman, whose guidance immensely helped to improve the quality of presentation, as well as the quality of 215
1992
27
CORPUS-BASED ACQUISITION OF RELATIVE PRONOUN DISAMBIGUATION HEURISTICS Claire Cardie Department of Computer Science University of Massachusetts Amherst, MA 01003 E-mail: [email protected] ABSTRACT This paper presents a corpus-based approach for deriving heuristics to locate the antecedents of relative pronouns. The technique dupficates the performance of hand-coded rules and requires human intervention only during the training phase. Because the training instances are built on parser output rather than word cooccurrences, the technique requires a small number of training examples and can be used on small to medium-sized corpora. Our initial results suggest that the approach may provide a general method for the automated acquisition of a variety of disambiguation heuristics for natural language systems, especially for problems that require the assimilation of syntactic and semantic knowledge. 1 INTRODUCTION State-of-the-art natural language processing (NLP) systems typically rely on heuristics to resolve many classes of ambiguities, e.g., prepositional phrase attachment, part of speech disambiguation, word sense disambiguation, conjunction, pronoun resolution, and concept activation. However, the manual encoding of these heuristics, either as part of a formal grammar or as a set of disarnbiguation rules, is difficult because successful heuristics demand the assimilation of complex syntactic and semantic knowledge. Consider, for example, the problem of prepositional phrase attachment. A number of purely structural solutions have been proposed including the theories of Minimal Attachment (Frazier, 1978) and Right Association (Kimball, 1973). While these models may suggest the existence of strong syntactic preferences in effect during sentence understanding, other studies provide clear evidence that purely syntactic heuristics for prepositional phrase attachment will not work (see (Whittemore, Ferrara, & Brunner, 1990), (Taraban, & McClelland, 1988)). However, computational linguists have found the manual encoding of disarnbiguation rules -- especially those that merge syntactic and semantic constraints -- to be difficult, time-consuming, and prone to error. In addition, hand-coded heuristics are often incomplete and perform poorly in new domains comprised of specialized vocabularies or a different genre of text. In this paper, we focus on a single ambiguity in sentence processing: locating the antecedents of relative pronouns. We present an implemented corpus-based approach for the automatic acquisition of disambiguation heuristics for that task. The technique uses an existing hierarchical clustering system to determine the antecedent of a relative pronoun given a description of the clause that precedes it and requires only minimal syntactic parsing capabilities and a very general semantic feature set for describing nouns. Unlike other corpus-based techniques, only a small number of training examples is needed, making the approach practical even for small to medium-sized on- line corpora. For the task of relative pronoun disambignation, the automated approach duplicates the performance of hand-coded rules and makes it possible to compile heuristics tuned to a new corpus with little human intervention. Moreover, we believe that the technique may provide a general approach for the automated acquisition of disambiguation heuristics for additional problems in natural language processing. In the next section, we briefly describe the task of relative pronoun disambiguation. 
Sections 3 and 4 give the details of the acquisition algorithm and evaluate its performance. Problems with the approach and extensions required for use with large corpora of unrestricted text are discussed in Section 5. 2 DISAMBIGUATING RELATIVE PRONOUNS Accurate disambiguation of relative pronouns is important for any natural language processing system that hopes to process real world texts. It is especially a concern for corpora where the sentences tend to be long and information-packed. Unfortunately, to understand a sentence containing a relative pronoun, an NLP system must solve two difficult problems: the system has to locate the antecedent of the relative pronoun and then determine the antecedent's implicit position in the embedded clause. Although finding the gap in the embedded clause is an equally difficult 216 problem, the work we describe here focuses on locating the relative pronoun antecedent.1 This task may at first seem relatively simple: the antecedent of a relative pronoun is just the most recent constituent that is a human. This is the case for sentences S1-$7 in Figure 1, for example. However, this strategy assumes that the NLP system produces a perfect syntactic and semantic parse of the clause preceding the relative pronoun, including prepositional phrase attachment (e.g., $3, $4, and $7) and interpretation of conjunctions (e.g., $4, $5, and $6) and appositives (e.g., $6). In $5, for example, the antecedent is the entire conjunction of phrases (i.e., "Jim, Terry, and Shawn"), not just the most recent human (i.e., "Shawn"). In $6, either s1. Tony saw the boy who won the award. $2. The boy who gave me the book had red hair. $3. Tony ate dinner with the men from Detroit who sold computers. $4. I spoke to the woman with the black shirt and green hat over in the far comer of the room whc wanted a second interview. SS. I'd like to thank Jim. Terry, and Shawn, who provided the desserts. $6. I'd like to thank our sponsors, GE andNSF, who provide financial support. ST. The woman from Philadelphia who played soccer was my sister. $8. The awards for the children who pass the test are in the drawer. $9. We wondered who stole the watch. S10. We talked with the woman and the man who danced. Figure 1. Examples of Relative Pronoun Antecedents "our sponsors" or its appositive "GE and NSF" is a semantically valid antecedent. Because pp-attachment and interpretation of conjunctions and appositives remain difficult for current systems, it is often unreasonable to expect reliable parser output for clauses containing those constructs. Moreover, the parser must access both syntactic and semantic knowledge in finding the antecedent of a relative pronoun. The syntactic structure of the clause preceding "who" in $7 and $8, for example, is identical (NP-PP) but the antecedent in each case is different. In $7, the antecedent is the subject, "the woman;" in $9, it is the prepositional phrase 1For a solution to the gap-finding problem that is consistent with the simplified parsing strategy presented below, see (Cardie & Lehnert, 1991). modifier, "the children." Even if we assume a perfect parse, there can be additional complications. In some cases the antecedent is not the most recent constituent, but is a modifier of that constituent (e.g., $8). Sometimes there is no apparent antecedent at all (e.g., $9). Other times the antecedent is truly ambiguous without seeing more of the surrounding context (e.g., S10). 
As a direct result of these difficulties, NLP system builders have found the manual coding of rules that find relative pronoun antecedents to be very hard. In addition, the resulting heuristics are prone to errors of omission and may not generalize to new contexts. For example, the UMass/MUC-3 system 2 began with 19 rules for finding the antecedents of relative pronouns. These rules included both structural and semantic knowledge and were based on approximately 50 instances of relative pronouns. As counter- examples were identified, new rules were added (approximately 10) and existing rules changed. Over time, however, we became increasingly reluctant to modify the rule set because the global effects of local rule changes were difficult to measure. Moreover, the original rules were based on sentences that UMass/MUC-3 had found to contain important information. As a result, the rules tended to work well for relative pronoun disambiguation in sentences of this class (93% correct for one test set of 50 texts), but did not generalize to sentences outside of the class (78% correct on the same test set of 50 texts). 2.1 CURRENT APPROACHES Although descriptions of NLP systems do not usually include the algorithms used to find relative pronoun antecedents, current high-coverage parsers seem to employ one of 3 approaches for relative pronoun disambiguation. Systems that use a formal syntactic grammar often directly encode information for relative pronoun disambiguation in the grammar. Alternatively, a syntactic filter is applied to the parse tree and any noun phrases for which coreference with the relative pronoun is syntactically legal (or, in some cases, illegal) are passed to a semantic component which determines the antecedent using inference or preference rules (see (Correa, 1988), (Hobbs, 1986), (Ingria, & Stallard, 1989), (Lappin, & McCord, 1990)). The third approach employs hand- coded disambiguation heuristics that rely mainly on 2UMass/MUC-3 is a version of the CIRCUS parser (Lehnert, 1990) developed for the MUC-3 performance evaluation. See (Lehnert et. al., 1991) for a description of UMass/MUC-3. MUC-3 is the Third Message Understanding System Evaluation and Message Understanding Conference (Sundheim, 1991). 217 semantic knowledge but also include syntactic constraints (e.g., UMass/MUC-3). However, there are problems with all 3 approaches in that 1) the grammar must be designed to find relative pronoun antecedents for all possible syntactic contexts; 2) the grammar and/or inference rules require tuning for new corpora; and 3) in most cases, the approach unreasonably assumes a completely correct parse of the clause preceding the relative pronoun. In the remainder of the paper, we present an automated approach for deriving relative pronoun disambigu_a6on rules. This approach avoids the problems associated with the manual encoding of heuristics and grammars and automatically tailors the disambiguation decisions to the syntactic and semantic profile of the corpus. Moreover, the technique requires only a very simple parser because input to the clustering system that creates the disambiguation heuristics presumes neither pp-attachment nor interpretation of conjunctions and appositives. 3 AN AUTOMATED APPROACH Our method for deriving relative pronoun disambiguation heuristics consists of the following steps: 1. Select from a subset of the corpus all sentences containing a particular relative pronoun. (For the remainder of the paper, we will focus on the relative pronoun "who.") 2. 
For each instance of the relative pronoun in the selected sentences, a. parse the portion of the sentence that precedes it into low-level syntactic constituents b. use the results of the parse to create a training instance that represents the disambiguation decision for this occurrence of the relative pronoun. 3. Provide the training instances as input to an existing conceptual clustering system. During the training phase outlined above, the clustering system creates a hierarchy of relative pronoun disambiguation decisions that replace the hand-coded heuristics. Then, for each new occurrence of the wh-word encountered after training, we retrieve the most similar disambiguation decision from the hierarchy using a representation of the clause preceding the wh-word as the probe. Finally, the antecedent of the retrieved decision guides the selection of the antecedent for the new occurrence of the relative pronoun. Each step of the training and testing phases will be explained further in the sections that follow. 3.1 SELECTING SENTENCES FROM THE CORPUS For the relative pronoun disambiguation task, we used the MUC-3 corpus of 1500 articles that range from a single paragraph to over one page in length. In theory, each article describes one or more terrorist incidents in Latin America. In practice, however, about half of the texts are actually irrelevant to the MUC task. The MUC-3 articles consist of a variety of text types including newspaper articles, TV news reports, radio broadcasts, rebel communiques, speeches, and interviews. The corpus is relatively small - it contains approximately 450,000 words and 18,750 sentences. In comparison, most corpus-based algorithms employ substantially larger corpora (e.g., 1 million words (de Marcken, 1990), 2.5 million words (Brent, 1991), 6 million words (Hindle, 1990), 13 million words (Hindle, & Rooth, 1991)). Relative pronoun processing is especially important for the MUC-3 corpus because approximately 25% of the sentences contain at least one relative pronoun. 3 In fact, the relative pronoun "who" occurs in approximately 1 out of every 10 sentences. In the experiment described below, we use 100 texts containing 176 instances of the relative pronoun "who" for training. To extract sentences containing a specific relative pronoun, we simply search the selected articles for instances of the relative pronoun and use a preprocessor to locate sentence boundaries. 3.2 PARSING REQUIREMENTS Next, UMass/MUC-3 parses each of the selected sentences. Whenever the relative pronoun "who" is recognized, the syntactic analyzer returns a list of the low-level constituents of the preceding clause prior to any attachment decisions (see Figure 2). UMass/MUC-3 has a simple, deterministic, stack- oriented syntactic analyzer based on the McEli parser (Schank, & Riesbeck, 1981). It employs lexically- indexed local syntactic knowledge to segment incoming text into noun phrases, prepositional phrases, and verb phrases, ignoring all unexpected constructs and unknown words. 4 Each constituent 3There are 4707 occurrences of wh-words (i.e., who, whom, which, whose, where, when, why) in the approximately 18,750 sentences that comprise the MUC-3 corpus. 4Although UMass/MUC-3 can recognize other syntactic classes, only noun phrases, prepositional phrases, and verb phrases become part of the training instance. 218 Sources in downtown Lima report that the police last night detained Juan Bautista and Rogoberto Matute, who ... 
~ U Mass/MUC-3 syntactic analyzer the police : [subject, human] detained : [verb] Juan Bautista : [np, proper-name] Rogoberto Matute : [np, proper-name] Figure 2. Syntactic Analyzer Output returned by the parser (except the verb) is tagged with the semantic classification that best describes the phrase's head noun. For the MUC-3 corpus, we use a set of 7 semantic features to categorize each noun in the lexicon: human, proper-name, location, entity, physical-target, organization, and weapon. In addition, clause boundaries are detected using a method described in (Cardie, & Lehnert, 1991). It should be noted that all difficult parsing decisions are delayed for subsequent processing components. For the task of relative pronoun disambiguation, this means that the conceptual clustering system, not the parser, is responsible for recognizing all phrases that comprise a conjunction of antecedents and for specifying at least one of the semantically valid antecedents in the case of appositives. In addition, pp-attachment is more easily postponed until after the relative pronoun antecedent has been located. Consider the sentence "I ate with the men from the restaurant in the club." Depending on the context, "in the club" modifies either "ate" or "the restaurant." If we know that "the men" is the antecedent of a relative pronoun, however (e.g., "I ate with the men from the restaurant in the club, who offered me the job"), it is probably the case that "in the club" modifies "the men." Finally, because the MUC-3 domain is sufficiently narrow in scope, lexical disambiguation problems are infrequent. Given this rather simplistic view of syntax, we have found that a small set of syntactic predictions covers the wide variety of constructs in the MUC-3 corpus. 3.3 CREATING THE TRAINING INSTANCES Output from the syntactic analyzer is used to generate a training instance for each occurrence of the relative pronoun in the selected sentences. A training instance represents a single disambiguation decision and includes one attribute-value pair for every low- level syntactic constituent in the preceding clause. The attributes of a training instance describe the syntactic class of the constituent as well as its position with respect to the relative pronoun. The value associated with an attribute is the semantic feature of the phrase's head noun. (For verb phrases, we currently note only their presence or absence using the values t and nil, respectively.) Consider the training instances in Figure 3. In S 1, for example, "of the 76th district court" is represented with the attribute ppl because it is a prepositional phrase and is in the first position to the left of "who." Its value is "physical-target" because "court" is classified as a physical-target in the lexicon. The subject and verb constituents (e.g., "her DAS bodyguard" in $3 and "detained" in $2) retain their traditional s and v labels, however -- no positional information is included for those attributes. S1: [The judge] [of the 76th court] [,] who ... I I Training instance: [ (s human) (pp l physical-rargeO (v nil) (antecedent ((s) ) ) ] f12: [The police] [detained] Uuan Bautista] [and] [Rogoberto Matute] [,] who ... Training instanoa: [ (s human) (v 0 (np2 proper-name) (npl proper-name) (antecedent ((rip2 npl))) ] S8: [Her DAS bodyguard] [,] [Dagoberto Rodriquez] [,] who... I I Training instance: [( s human) (npl proper-name) (v nil) (antecedent ((npl )(s npl )(s)))] Figure 3. 
Training Instances 219 In addition to the constituent attribute-value pairs, a training instance contains an attribute-value pair that represents the correct antecedent. As shown in Figure 3, the value of the antecedent attribute is a list of the syntactic constituents that contain the antecedent (or (none) if the relative pronoun has no anteceden0. In S 1, for example, the antecedent of "who" is "the judge." Because this phrase is located in the subject position, the value of the antecedent attribute is (s). Sometimes, however, the antecedent is actually a conjunction of phrases. In these cases, we represent the antecedent as a list of the constituents associated with each element of the conjunction. Look, for example, at the antecedent in $2. Because "who" refers to the conjunction "Juan Bautista and Rogoberto Matute," and because those phrases occur as rip1 and rip2, the value of the antecedent attribute is (np2 npl). $3 shows yet another variation of the antecedent attribute-value pair. In this example, an appositive creates three equivalent antecedents: 1) "Dagoberto Rodriguez" (rip1), 2) "her DAS bodyguard" m (s), and 3) "her DAS bodyguard, Dagoberto Rodriguez" -- (s npl). UMass/MUC-3 automatically generates the training instances as a side effect of parsing. Only the desired antecedent is specified by a human supervisor via a menu-driven interface that displays the antecedent options. 3.4 BUILDING THE HIERARCHY OF DISAMBIGUATION HEURISTICS As the training instances become available they are input to an existing conceptual clustering system called COBWEB (Fisher, 1987). 5 COBWEB employs an evaluation metric called category utility (Gluck, & Corter, 1985) to incrementally discover a classification hierarchy that covers the training instances. 6 It is this hierarchy that replaces the hand- coded disambiguation heuristics. While the details of COBWEB are not necessary, it is important to know that nodes in the hierarchy represent concepts that increase in generality as they approach the root of the tree. Given a new instance to classify, COBWEB 5 For these experiments, we used a version of COBWEB developed by Robert Williams at the University of Massachusetts at Amherst. 6Conceptual clustering systems typically discover appropriate classes as well as the the concepts for each class when given a set of examples that have not been preclassified by a teacher. Our unorthodox use of COBWEB to perform supervised learning is prompted by plans to use the resulting hierarchy for tasks other than relative pronoun disambiguation. 220 retrieves the most specific concept that adequately describes the instance. 3.5 USING THE DISAMBIGUATION HEURISTICS HIERARCHY After training, the resulting hierarchy of relative pronoun disambiguation decisions supplies the antecedent of the wh-word in new contexts. Given a novel sentence containing "who," UMass/MUC-3 generates a set of attribute-value pairs that represent the clause preceding the wh-word. This probe is just a training instance without the antecedent attribute- value pair. Given the probe, COBWEB retrieves from the hierarchy the individual instance or abstract class that is most similar and the antecedent of the retrieved example guides selection of the antecedent for the novel case. We currently use the following selection heuristics to 1) choose an antex~ent for the novel sentence that is consistent with the context of the probe; or to 2) modify the retrieved antecedent so that it is applicable in the current context: 1. 
Choose the first option whose constituents are all present in the probe. 2. Otherwise, choose the first option that contains at least one constituent present in the probe and ignore those constituents in the retrieved antex~ent that are missing from the probe. 3. Otherwise, replace the np constituents in the retrieved antecedent that are missing from the probe with pp constituents (and vice versa), and try 1 and 2 again. In S 1 of Figure 4, for example, the first selection heuristic applies. The retrieved instance specifies the np2 constituent as the location of the antecedent and the probe has rip2 as one of its constituents. Therefore, UMass/MUC-3 infers that the antecedent of "who" for the current sentence is "the hardliners," i.e., the contents of the np2 syntactic constituent. In $2, however, the retrieved concept specifies an antecedent from five constituents, only two of which are actually present in the probe. Therefore, we ignore the missing constituents pp5, rip4, and pp3, and look to just np2 and rip1 for the antecedent. For $3, selection heuristics 1 and 2 fail because the probe contains no pp2 constituent. However, if we replace pp2 with np2 in the retrieved antecedent, then heuristic 1 applies and "a specialist" is chosen as the antecedent. Sl: [It] [encourages] [the military men] [,] [and] [the hardliners] [in ARENA] who... I I I [(s enaty) (vO (np3 human) (np2 human) (ppl org)] Antecedent of Retrieved Instance: ((np2)) Antecedent of Probe:. (np2) = "the hardliners" S2: [There] [are] [also] [criminals] [like] [Vice President Merino] [,] [a man] who... [(s entity) (v t) (rip3 human) (rip2 proper-name) (rip1 human)] Antecedent of Retrieved Instance: ((pp5 np4 pp3 np2 np1)) Antecedent of Probe:. (np2 np1) = Wice President Merino, a man" $3: [It] [coincided] [with the arrival] [of Smith] [,] [a specialist] [from the UN] [,] who... ~ (pp4Jntity) [ [ (plplentity)] [(s entity) (v 0 (pp3 proper-name) (rip2 human) Antecedent of Retrieved Instance: ((pp2)) Antecedent of Probe: (np2) = "a specialist" Figure 4. Using the Disambiguation Heuristics Hierarchy 4 RESULTS As described above, we used 100 texts (approximately 7% of the corpus) containing 176 instances of the relative pronoun "who" for training. Six of those instances were discarded when the UMass/MUC-3 syntactic analyzer failed to include the desired antecedent as part of its constituent representation, making it impossible for the human supervisor to specify the location of the antecedent. 7 After training, we tested the resulting disambiguation hierarchy on 71 novel instances extracted from an additional 50 texts in the corpus. Using the selection heuristics described above, the correct antecedent was found for 92% of the test instances. Of the 6 errors, 3 involved probes with antecedent combinations never seen in any of the training cases. This usually indicates that the semantic and syntactic structure of the novel clause differs significantly from those in the disambiguation hierarchy. This was, in fact, the case for 2 out of 3 of the errors. The third error involved a complex conjunction and appositive combination. In this case, the retrieved antecedent specified 3 out of 4 of the required constituents. If we discount the errors involving unknown antecedents, our algorithm correctly classifies 94% of the novel instances (3 errors). In comparison, the original UMass/MUC-3 system that relied on hand- coded heuristics for relative pronoun disambiguation finds the correct antecedent 87% of the time (9 errors). 
However, a simple heuristic that chooses the most recent phrase as the antecedent succeeds 86% of the time. (For the training sets, this heuristic works only 75% of the time.) In cases where the antecedent was not the most recent phrase, UMass/MUC-3 errs 67% of the time. Our automated algorithm errs 47% of the time. It is interesting that of the 3 errors that did not specify previously unseen an~exlents, one was caused by parsing blunders. The remaining 2 errors involved relative pronoun antecedents that are difficult even for people to specify: 1) "... 9 rebels died at the hands of members of the civilian militia, who resisted the attacks" and 2) "... the government expelled a group of foreign drug traffickers who had established themselves in northern Chile". Our algorithm chose "the civilian militia" and "foreign drug traffickers" as the antecedents of "who" instead of the preferred antecedents "members of the civilian militia" and "group of foreign drug traffickers. "8 5 CONCLUSIONS We have described an automated approach for the acquisition of relative pronoun disambiguation heuristics that duplicates the performance of hand- ceded rules. Unfortunately, extending the technique for use with unrestricted texts may be difficult. The UMass/MUC-3 parser would clearly need additional mechanisms to handle the ensuing part of speech and 7Other parsing errors occurred throughout the training set, but only those instances where the antecedent was not recognized as a constituent (and the wh-word had an anteceden0 were discarded. 8Interestingly, in work on the automated classification of nouns, (Hindle, 1990) also noted problems with "empty" words that depend on their complements for meaning. 221 word sense disambiguation problems. However, recent research in these areas indicates that automated approaches for these tasks may be feasible (see, for example, (Brown, Della Pietra, Della Pietra, & Mercer, 1991) and (l-Iindle, 1983)). In addition, although our simple semantic feature set seems adequate for the current relative pronoun disambiguntion task, it is doubtful that a single semantic feature set can be used across all domains and for all disambignation tasks. 9 In related work on pronoun disambig~_~_afion, Dagan and Itai (1991) successfully use statistical cooccurrence patterns to choose among the syntactically valid pronoun referents posed by the parser. Their approach is similar in that the statistical database depends on parser output. However, it differs in a variety of ways. First, human intervention is required not to specify the correct pronoun antecedent, but to check that the complete parse tree supplied by the parser for each training example is correct and to rule out potential examples that are inappropriate for their approach. More importantly, their method requires very large COrlxra of data. Our technique, on the other hand, requires few training examples because each training instance is not word-based, but created from higher-level parser output. 10 Therefore, unlike other corpus-based techniques, our approach is practical for use with small to medium-sized corpora in relatively narrow domains. ((Dagan & Itai, 1991) mention the use of semantic feature-based cooccurrences as one way to make use of a smaller corpus.) In addition, because human intervention is required only to specify the antecedent during the training phase, creating disambiguation heuristics for a new domain requires little effort. 
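Because each instance is assembled from the shallow parse, the instance builder itself is trivial. The sketch below is our reconstruction of what Section 3 implies; only the attribute naming and feature values follow Figure 3, while the function and its data layout are invented for illustration.

    # Build an attribute-value instance from ordered constituents, numbering np
    # and pp constituents by their distance from the relative pronoun and
    # keeping s and v unnumbered, as in Figure 3.

    def make_instance(constituents, antecedent=None):
        instance = {}
        numbered = [c for c in constituents if c[0] in ("np", "pp")]
        for dist, (cls, feature) in enumerate(reversed(numbered), start=1):
            instance[f"{cls}{dist}"] = feature
        for cls, feature in constituents:
            if cls in ("s", "v"):
                instance[cls] = feature
        if antecedent is not None:        # supplied by the human supervisor
            instance["antecedent"] = antecedent
        return instance

    # S2 of Figure 3: "The police detained Juan Bautista and Rogoberto Matute, who ..."
    print(make_instance([("s", "human"), ("v", "t"),
                         ("np", "proper-name"), ("np", "proper-name")],
                        antecedent=[["np2", "np1"]]))
    # {'np1': 'proper-name', 'np2': 'proper-name', 's': 'human', 'v': 't',
    #  'antecedent': [['np2', 'np1']]}

A probe is simply the same structure built without the antecedent attribute, which is why any parser producing these shallow constituents could feed the method.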
Any NLP system that uses semantic features for describing nouns and has minimal syntactic parsing capabilities can generate the required training instances. The parser need only recognize noun phrases, verbs, and prepositional phrases because the disambiguation heuristics, not the parser, are responsible for recognizing the conjunctions and appositives that comprise a relative pronoun antecedent. Moreover, the success of the approach for structurally complex antecedents suggests that the technique may provide a general approach for the 9 In recent work on the disambiguation of structurally, but not semantically, restricted phrases, however, a set of 16 predefined semantic categories sufficed (Ravin, 1990). 10Although further work is needed to determine the optimal number of training examples, it is probably the case that many fewer than 170 instances were required even for the experiments described here. 222 automated acquisition of disambiguation rules for other problems in natural language processing. 6 ACKNOWLEDGMENTS This research was supported by the Office of Naval Research, under a University Research Initiative Grant, Contract No. N00014-86-K-0764 and NSF Presidential Young Investigators Award NSFIST- 8351863 (awarded to Wendy Lehnert) and the Advanced Research Projects Agency of the Department of Defense monitored by the Air Force Office of Scientific Research under Contract No. F49620-88-C-0058. 7 REFERENCES Brent, M. (1991). Automatic acquisition of subcategorization frames from untagged text. Proceedings, 29th Annual Meeting of the Association for Computational Linguists. University of California, Berkeley. Association for Computational Linguists. Brown, P. F., Della Pietra, S. A., Della Pietra, V. J., & Mercer, R. L. (1991). Word-sense disambiguation using statistical methods. Proceedings, 29th Annual Meeting of the Association for Computational Linguists. University of California, Berkeley. Association for Computational Linguists. Cardie, C., & Lehnert, W. (1991). A Cognitively Plausible Approach to Understanding Complex Syntax. Proceedings, Eighth National Conference on Artificial Intelligence. Anaheim, CA. AAAI Press ] The MIT Press. Correa, N. (1988). A Binding Rule for Government-Binding Parsing. Proceedings, COLING '88. Budapest. Dagan, I. and Itai, A. (1991). A Statistical Filter for Resolving Pronoun References. In Y.A. Feldman and A.Bruckstein (Eds.), Artificial Intelligence and Computer Vision (pp. 125-135). North-Holland: Elsevier. de Marcken, C. G. (1990). Parsing the LOB corpus. Proceedings, 28th Annual Meeting of the Association for Computational Linguists. University of Pittsburgh. Association for Computational Linguists. Fisher, D. H. (1987). Knowledge Acquisition Via Incremental Conceptual Clustering. Machine Learning, 2, 139-172. Frazier, L. (1978). On comprehending sentences: Syntactic parsing strategies. Ph.D. Thesis. University of Connecticut. Gluck, M. A., & Corter, J. E. (1985). Information, uncertainty, and the utility of categories. Proceedings, Seventh Annual Conference of the Cognitive Science Society. Lawrence Erlbaum Associates. Hindle, D. (1983). User manual for Fidditch (7590-142). Naval Research Laboratory. Hindle, D. (1990). Noun classification from predicate-argument structures. Proceedings, 28th Annual Meeting of the Association for Computational Linguists. University of Pittsburgh. Association for Computational Linguists. Hindle, D., & Rooth, M. (1991). Structural ambiguity and lexical relations. 
Proceedings, 29th Annual Meeting of the Association for Computational Linguists. University of California, Berkeley. Association for Computational Linguists. Hobbs, J. (1986). Resolving Pronoun References. In B. J. Grosz, K. Sparck Jones, & B. L. Webber (Eds.), Readings in Natural Language Processing (pp. 339-352). Los Altos, CA: Morgan Kaufmann Publishers, Inc. Ingria, R., & Stallard, D. (1989). A computational mechanism for pronominal reference. Proceedings, 27th Annual Meeting of the Association for Computational Linguistics. Vancouver. Kimball, J. (1973). Seven principles of surface structure parsing in natural language. Cognition, 2, 15-47. Lappin, S., & McCord, M. (1990). A syntactic filter on pronominal anaphora for slot grammar. Proceedings, 28th Annual Meeting of the Association for Computational Linguistics. University of Pittsburgh. Association for Computational Linguistics. Lehnert, W. (1990). Symbolic/Subsymbolic Sentence Analysis: Exploiting the Best of Two Worlds. In J. Bamden, & J. Pollack (Eds.), Advances in Connectionist and Neural Computation Theory. Norwood, NJ: Ablex Publishers. Lehnert, W., Cardie, C., Fisher, D., Riloff, E., & Williams, R. (1991).University of Massachusetts: Description of the CIRCUS System as Used for MUC-3. Proceedings, Third Message Understanding Conference (MUC-3). San Diego, CA. Morgan Kaufmann Publishers. Ravin, Y. (1990). Disambignating and interpreting verb definitions. Proceedings, 28th Annual Meeting of the Association for Computational Linguists. University of Pittsburgh. Association for Computational Linguists. Schank, R., & Riesbeck, C. (1981). Inside Computer Understanding: Five Programs Plus Miniatures. Hillsdale, NJ: Lawrence Erlbaum. Sundheim, B. M. (May,1991). Overview of the Third Message Understanding Evaluation and Conference. Proceedings,Third Message Understand- ing Conference (MUC-3). San Diego, CA. Morgan Kanfmann Publishers. Taraban, R., & McClelland, J. L. (1988). Constituent attachment and thematic role assignment in sentence processing: influences of content-based expectations. Journal of Memory and Language, 27, 597-632. Whittemore, G., Ferrara, K., & Brunner, H. (1990). Empirical study of predictive powers of simple attachment schemes for post-modifier prepositional phrases. Proceedings, 28th Annual Meeting of the Association for Computational Linguistics. University of Pittsburgh. Association for Computational Linguistics. 223
1992
28
Association-based Natural Language Processing with Neural Networks KIMURA Kazuhiro SUZUOKA Takashi AMANO Sin-ya Information Systems Laboratory Research and Development Center TOSHIBA Corp. 1 Komukai-T6siba-ty6, Saiwai-ku, Kawasaki 210 Japan kim~isl.rdc.toshiba.co.jp Abstract This paper describes a natural language pro- cessing system reinforced by the use of associ- ation of words and concepts, implemented as a neural network. Combining an associative net- work with a conventional system contributes to semantic disambiguation in the process of interpretation. The model is employed within a kana-kanji conversion system and the advan- tages over conventional ones are shown. 1 Introduction Currently, most practical applications in nat- ural language processing (NLP) have been realized via symbolic manipulation engines, such as grammar parsers. However, the cur- rent trend (and focus of research) is shift- ing to consider aspects of semantics and dis- course as part of NLP. This can be seen in the emergence of new theories of language, such as Situation Theory [Barwise 83] and Discourse Representation Theory [Kamp 84]. While these theories provide an excellent the- oretical framework for natural language un- derstanding, the practical treatment of con- text dependency within the language can also be improved by enhancing underlying compo- nent technologies, such as knowledge based systems. In particular, alternate approaches to symbolic manipulation provided by connec- tionist models [Rumelhart 86] have emerged. Connectionist approaches enable the extrac- tion of processing knowledge from examples, instead of building knowledge bases manually. The model described here represents the unification of the connectionist approach and conventional symbolic manipulation; its most valuable feature is the use of word as- sociations using neural network technology. Word and concept associations appear to be central in human cognition [Minsky 88]. Therefore, simulating word associations con- tributes to semantic disambiguation in the computational process of interpreting sen- tences by putting a strong preference to ex- pected words(meanings). The paper describes NLP reinforced by as- sociation of concepts and words via a con- nectionist network. The model is employed within a NLP application system for kana- 224 kanji conversion x. Finally, an evaluation of the system and advantages over conventional systems are presented. 2 A brief overview of kana-kanji conversion Japanese has a several interesting feature in its variety of letters. Especially the ex- istence of several thousand of kanji (based on Chinese characters; ~, 111,..) made typing task hard before the invention of kana-kanji conversion[Amano 79] . Now it has become a standard method in inputting Japanese to computers. It is also used in word processors and is familiar to those who are not computer experts. It comes from the simpleness of op- erations. By only typing sentences by pho- netic expressions of Japanese (kan a), the kana- kanji converter automatically converts kana into meaningful expressions(kanji). The sim- plified mechanism of kana-kanji conversion can be described as two stages of processing: mor- phological analysis and homonym selection. • Morphological Analysis Kana-inputted (fragment of) sentences are morphologically analized through dic- tionary look up, both lexicons and gram- mars. 
There are many ambiguities in word division due to the agglutinative nature of Japanese (Japanese has no spaces in text), and each partitioning of the kana is then further open to being a possible interpretation of several alternate kanji. The spoken word douki, for example, can mean motivation, pulsation, synchronization, or copperware. All of them are spelt identically in kana (どうき), but have different kanji characters (動機, 動悸, 同期, and 銅器, respectively). Some kana words have 10 or more possible meanings. Therefore the stage of Homonym Selection is indispensable to kana-kanji conversion for the reduction of homonyms.

• Homonym Selection
Preferable semantic homonyms are selected according to co-occurrence restrictions and selectional restrictions. The frequency of use of each word is also taken into account. Usually, the selection is also reinforced by a simple context holding mechanism; when homonyms appear in previous discourse and one of them is chosen by a user, the chosen word is automatically memorized in the system, as in a cache technology. Then, when the same homonyms appear, the memorized word is selected as the most preferred candidate and is shown to the user.

3 Association-based kana-kanji conversion

The above mechanisms are simple and effective in regarding the kana-kanji converter as a typing aid. However, the abundance of homonyms in Japanese contributes to many of the ambiguities, and a user is forced to choose the desired kanji from many candidates. To reduce homonym ambiguities a variety of techniques are available; however, these tend to be limited from a semantic disambiguation perspective. In using word co-occurrence restrictions, it is necessary to collect a large amount of co-occurrence phenomena, a practically impossible task. In the case of the use of selectional restrictions, an appropriate thesaurus is necessary, but it is known that defining the conceptual hierarchy is difficult work [Lenat 89][EDR 90]. Techniques for storing previous kanji selections (cache) are too simple to disambiguate between possible previous selections for the same homonym with respect to the context, or between context switches.

[Figure 1: Kana-Kanji Conversion with a Neural Network]

To avoid these problems without increasing computational costs, we propose the use of the associative functionality of neural networks. The use of association is a natural extension to the conventional context holding mechanism. The idea is summarized as follows. There are two stages of processing: network generation and kana-kanji conversion. A network representing the strength of word association is automatically generated from real documents. Real documents can be considered as training data because they are made of correctly converted kanji. Each node in the network uniquely corresponds to a word entry in the dictionary of kana-kanji conversion. Each node has an activation level. The link between nodes is a weighted link and represents the strength of association between words. The network is a Hopfield-type network [Hopfield 84]; links are bidirectional and the network is one-layered. When the user chooses a word from homonym candidates, a certain value is input to the node corresponding to the chosen word, and the node will be activated.
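The idea just summarized can be sketched in a few lines. Everything below is a toy illustration: the weights, the direct neighbour bump, and the data structures are invented, while the handler's actual update rule is the Hopfield-style recurrence given in Section 4. The example words are the clock/signal/douki case discussed next.

    # Toy associative ranking: activation maps dictionary words to levels;
    # choosing a kanji bumps its node and, more weakly, its neighbours, and
    # homonym candidates are ordered by their current activation.

    network = {                      # toy association weights between words
        "同期": {"クロック": 0.8, "信号": 0.6},
        "クロック": {"同期": 0.8},
        "信号": {"同期": 0.6},
    }
    activation = {w: 0.0 for w in ["同期", "動機", "動悸", "銅器", "クロック", "信号"]}

    def choose(word, boost=1.0):
        # user picked `word`: activate it and spread a fraction to its neighbours
        activation[word] = activation.get(word, 0.0) + boost
        for neighbour, weight in network.get(word, {}).items():
            activation[neighbour] = activation.get(neighbour, 0.0) + boost * weight

    def rank(candidates):
        return sorted(candidates, key=lambda w: activation.get(w, 0.0), reverse=True)

    choose("クロック")               # earlier in the paragraph the user typed "clock"
    choose("信号")                   # ... and "signal"
    print(rank(["動機", "動悸", "同期", "銅器"]))   # 同期 (synchronization) comes first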
The activation levels of the nodes connected to the activated node are then raised in turn. In this manner, the activation spreads over the network through the links, and the active part of the network can be considered as the associative words in that context. In kana-kanji conversion, the converter decides the preference order of homonyms in the given context by comparing the activation level of the node for each homonym. An example of the method is shown in Figure 1. Assume the network has already been built from certain documents, and a user is inputting a text whose topic is related to computer hardware. In the example, words like clock (クロック) and signal (信号) have already appeared in the previous context, so their activation levels are relatively high. When the word douki (どうき) is inputted in kana and the conversion starts, the activation level of synchronization (同期) is higher than that of the other candidates due to its relationship to clock and signal. The input douki is then correctly converted into synchronization (同期). The advantages of our method are: • The method enables kanji to be given based on a preference related to the current context. Alternative kanji selections are not discarded but are just given a lower context weighting. Should the context switch, the other possible selections will obtain a stronger context preference; this strategy allows the system to handle context change capably. • Word preferences of a user are reflected in the network. • The correctness of the conversion is improved without high-cost computation such as semantic/discourse analyses. 4 Implementation The system was built on a Toshiba AS-4000 workstation (a Sun4-compatible machine) using C. The system configuration is shown in Figure 2. Figure 2: System Configuration (the components shown include the associative network, the neural network handler, the dictionary handler with its lexicons and grammars, and the kana-kanji converter; the labelled data flows include the hiragana input sequence, activation levels of neurons, homonym candidates in kanji, and the activation of chosen neurons). The left-hand side of the dashed line represents an off-line network building process. The right-hand side represents a kana-kanji conversion process reinforced with a neural network handler. The network is used by the neural network handler, and word associations are done in parallel with kana-kanji conversion. The kana-kanji converter receives kana sequences from a user. It searches the dictionary for lexical and grammatical information and finally creates a list of possible homonym candidates. Then the neural network handler is asked for the activation levels of the homonyms. After the selection of preferred homonyms, it shows the candidates in kanji to the user. When the user chooses the desired one, the chosen word information is sent to the neural network handler through a homonym choice interface and the corresponding node is activated. The roles and functions of the main components are described as follows. • Neural Network Generator Several real documents are analyzed and the network nodes and the weights of the links are decided automatically. The documents consist of a mixture of kana and kanji; homonyms for the kanji within the given context are also provided. The documents, therefore, can be seen as training data for the neural network. The analysis proceeds through the following steps. 1. Analyze the documents morphologically and convert them into a sequence of words. Note that particles and demonstratives are ignored because they have no characteristics in word association. 2. Count the frequency of every pair of words that co-appear in a paragraph and memorize it as the strength of connection. A paragraph is recognized only from the format information of the documents. 3. Sum up the strength of connection for each word pair. 4. Regularize the training data; this involves removing low occurrences (noise) and partitioning the frequency range in order to obtain a monotonically decreasing (in frequency) training set. Although the network data have only positive links and not all nodes are connected, non-connected nodes are assumed to be connected by negative weights so that the Hopfield conditions [Hopfield 84] are satisfied. As described above, the technique used here is a morphological and statistical analysis. In effect this module performs pattern learning of co-appearing words in a paragraph. The idea behind this approach is that words that appear together in a paragraph have some sort of associative connection. By accumulating them, pairs without such relationships will be statistically rejected. From a practical point of view, automated network generation is inevitable. Since human word associations differ by individual, creation of a general-purpose associative network is not realistic. Because the training data for the network are supposed to be supplied by users' documents in our system, an automatic network generation mechanism is necessary even if the generated network is somewhat inaccurate. • Neural Network Handler The role of this module is to recall the total patterns of co-appearing words in a paragraph from the partial patterns of the current paragraph given by a user. The output value Oj of each node j is calculated by the following equations: Oj = f(nj), nj = (1 - δ) nj + δ (Σi wji Oi + Ij), where f is a sigmoidal function, δ is a real number representing the inertia of the network (0 < δ < 1), nj is the input value to node j, Ij is the external input value to node j, and wji is the weight of the link from node i to node j (wji = wij, wii = 0). The external input value Ij takes a certain positive value when the word corresponding to node j is chosen by a user, and is zero otherwise. Although the module is implemented in software, it is fast enough to follow the typing speed of a user. (A certain optimization technique is used, exploiting the sparseness of the network.) • Kana-Kanji Converter The basic algorithm is almost the same as the conventional one. The difference is that homonym candidates are sorted by the activation levels of the corresponding nodes in the network, except when local constraints such as word co-occurrence restrictions are applicable to the candidates. The associative information also affects the preference decision for grammatical ambiguities. 5 Evaluation To evaluate the method, we tested the implemented system by doing kana-kanji conversion for real documents. The training data and test data were taken from four types of documents: business letters, personal letters, news articles, and technical articles. The amount of training data and test data was over 100,000 phrases and 10,000 phrases respectively, for each type of document. The measure for accuracy of conversion was the reduction ratio (RR) of the homonym choice operations of a user. For comparison, we also evaluated the reduction ratio (RR') of kana-kanji conversion with a conventional context holding mechanism.
RR = (A - B)/A, RR' = (A - C)/A, where A is the number of choice operations required when an untrained kana-kanji converter was used, B is the number of choice operations required when a NN-trained kana-kanji converter was used, and C is the number of choice operations required when a kana-kanji converter with a conventional context holding mechanism was used. The result is shown in Table 1. The advantage of our method is clear for each type of document.

Table 1: Result of the Evaluation
document type        RR(%)   RR'(%)
business letters      41.8    32.6
personal letters      20.7    12.7
news articles         23.4    12.2
technical articles    45.6    40.7

In particular, it is notable that the advantage in the business letter field is prominent, because more than 80% of word processor users write business letters. 6 Discussion Although the result of the conversion test is satisfactory, word associations by the neural network are not yet human-like. The following is a list of improvements that may further enhance the system: • Improvements for generating a network The quality of the network depends on how noisy word occurrences are reduced in the network from the point of view of association. The existence of noisy words is inevitable in automatic generation, but they play a role in creating unwanted associations. One approach to reducing noisy words is to identify those words which are context independent and remove them at the network generation stage. The identification can be based on word categories and meanings. In most cases, words representing very abstract concepts are noisy because they force unwanted activations in unrelated contexts. Therefore they should be detected through experiments. Another problem arises because of the ambiguity of morphological analysis: word extraction from real documents is not always correct because of the agglutinative nature of the Japanese language. Another possibility for network improvement is to consider a syntactic relationship or co-occurrence relationship while deciding link weights. In addition, there are in general keywords in a document which play a central role in association; they will be reflected in the network better once technical terms are taken into consideration. • Preference decision in kana-kanji conversion The reinforcement of associative information complicates the decision of homonym preference in kana-kanji conversion. We already have several means of semantic disambiguation of homonyms: co-occurrence restrictions and selectional restrictions. As building a complete thesaurus is very difficult, our thesaurus is still not sufficient to select the correct meaning (kanji conversion) of a kana-written word, so selectional restrictions should be weak constraints in homonym selection. In the same vein, associative information should be considered a weak constraint because associations by neural networks are not always reliable. Possible conflict between selectional restrictions and associative information, added to the grammatical ambiguities remaining at the stage of homonym selection, makes kanji selection very complex. The problem of multiply and weakly constrained homonyms is one to which we have not yet found the best solution. 7 Conclusion This paper described association-based natural language processing and its application to kana-kanji conversion. We showed the advantages of the method over the conventional one through the experiments. After the improvements discussed above, we are planning to develop a neuro-word processor for commercial use.
We are also planning the application of the method to other fields including machine translations and discourse analyses for natural language interface to computers.

References
[Amano 79] Kawada, T. and Amano, S., "Japanese Word Processor," Proc. IJCAI-79, pp. 466-468, 1979.
[Barwise 83] Barwise, J. and Perry, J., "Situations and Attitudes," MIT Press, 1983.
[EDR 90] Japan Electronic Dictionary Research Institute, "Concept Dictionary," Tech. Rep. No. 027, 1990.
[Hopfield 84] Hopfield, J., "Neurons with Graded Response Have Collective Computational Properties Like Those of Two-State Neurons," Proc. Natl. Acad. Sci. USA 81, pp. 3088-3092, 1984.
[Kamp 84] Kamp, H., "A Theory of Truth and Semantic Representation," in Groenendijk et al. (eds.), "Truth, Interpretation and Information," 1984.
[Lenat 89] Lenat, D. and Guha, R., "Building Large Knowledge-Based Systems: Representation and Inference in the Cyc Project," Addison-Wesley, 1989.
[Minsky 88] Minsky, M., "The Society of Mind," Simon & Schuster Inc., 1988.
[Rumelhart 86] Rumelhart, D., McClelland, J., and the PDP Research Group, "Parallel Distributed Processing: Explorations in the Microstructure of Cognition," MIT Press, 1986.
[Waltz 85] Waltz, D. and Pollack, J., "Massively Parallel Parsing: A Strongly Interactive Model of Natural Language Interpretation," Cognitive Science, pp. 51-74, 1985.
1992
29
A SIMPLE BUT USEFUL APPROACH TO CONJUNCT IDENTIFICATION 1 Rajeev Agarwal Lois Boggess Department of Computer Science Mississippi State University Mississippi State, MS 39762 e-mail: [email protected] ABSTRACT This paper presents an approach to identifying conjuncts of coordinate conjunctions appearing in text which has been labelled with syntactic and semantic tags. The overall project of which this research is a part is also briefly discussed. The program was tested on a 10,000 word chapter of the Merck Veterinary Manual. The algorithm is deterministic and domain independent and it performs relatively well on a large real-life domain. Constructs not handled by the simple algorithm are also described in some detail. INTRODUCTION Identification of the appropriate conjuncts of the coordinate conjunctions in a sentence is fundamental to the understanding of the sentence. We use the phrase 'conjunct identification' to refer to the process of identifying the components (words, phrases, clauses) in a sentence that are conjoined by the coordinate conjunctions in it. Consider the following sentence: "The president sent a memo to the managers to inform them of the tragic inciden[ and to request their co- operation." In this sentence, the coordinate conjunction 'and' conjoins the infinitive phrases "to inform them of the tragic incident" and "to request their co- operation". If a natural language understanding system fails to recognize the correct conjuncts, it is likely to misinterpret the sentence or to lose its meaning entirely. The above is an example of a simple sentence where such conjunct identification is easy. In a realistic domain, one encounters sentences which are longer and far more complex. 1 This work is supported in part by the National Science Foundation under grant number IRI-9002135. This paper presents an approach to conjunct identification which, while not perfect, gives reasonably good results with a relatively simple algorithm. It is deterministic and domain independent in nature, and is being tested on a large domain - the Merck Veterinary Manual, consisting of over 700,000 words of uncontrolled technical text. Consider this sentence from the manual: "The mites live on the surface of the skin of the ear and canal, and feed by piercing the skin and sucking lymph, with resultant irritation, inflammation, exudation, and crust formation". This sentence has four coordinate conjunctions; identification of their conjuncts is moderately difficult. It is not uncommon to encounter sentences in the manual which are more than twice as long and even more complex. The following section briefly describes the larger project of which this research is a part. Then the algorithm used by the authors and its drawbacks are discussed. The last section gives the results obtained when an implementation was run on a 10,000-word excerpt from the manual and discusses some areas for future research. THE RESEARCH PROJECT This research on conjunct identification is a part of a larger research project which is exploring the automation of extraction of information from structured reference manuals. The largest manual available to the project in machine-readable form is the Merck Veterinary Manual, which serves as the primary testbed. The system semi-automatically builds and updates its knowledge base. There are two components to the system - an NLP (natural language processing) component and a knowledge analysis component. (See Figure 4 at the end.) 
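Before turning to the components, a minimal, hedged illustration may help fix the target of the task: for the introductory example, the conjunct identifier must in effect produce a record like the following (the field names and the Python representation below are our own, not the paper's).

# Hypothetical representation of one resolved coordination for the sentence
# "The president sent a memo to the managers to inform them of the tragic
#  incident and to request their co-operation."
coordination = {
    "conjunction": "and",
    "pre_conjunct":  ("inf_phrase", "to inform them of the tragic incident"),
    "post_conjunct": ("inf_phrase", "to request their co-operation"),
}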
15 The NLP component consists of a tagger, a semi-parser, a prepositional phrase attachment specialist, a conjunct identifier for coordinate conjunctions, and a restructurer. The tagger is a probabilistic program that tags the words in the manual. These tags consist of two parts - a mandatory syntactic portion, and an optional semantic portion. For example: the word 'cancer' would be tagged as noun//disorder, the word 'characterized' would be verb~past_p, etc. The semantic portion of the tags provides domain-specific information. The semi-parser, which is not a full-blown parser, is responsible for identifying noun, verb, prepositional, gerund, adjective, and infinitive phrases in the sentences. Any word not captured as one of these is left as a solitary 'word' at the top level of the sentence structure. The output produced by the semi- parser has very little embedding and consists of very simple structures, as will be seen below. The prepositional phrase attachment disambiguator and the conjunct identifier for coordinate conjunctions are considered to be "specialist" programs that work on these simple structures and manipulate them into more deeply embedded structures. More such specialist programs are envisioned for the future. The restructurer is responsible for taking the results of these specialist programs and generating a deeper structure of the sentence. These deeper structures are passed on to the knowledge analysis component. The knowledge analvsis comnonent is responsible for extracting from these structures several kinds of objects and relationships to build and update an object-oriented knowledge base. The system can then be queried about the information contained in the text of the manual. This paper primarily discusses the conjunct identifier for coordinate conjunctions. Detailed information about the other components of the system can be found in [Hodges et al., 1991], [Boggess et al., 1991], [Agarwal, 1990], and [Davis, 1990]. CONJUNCT IDENTIFICATION The program assigns a case label to every noun phrase in the sentence, depending on the role that it fulfills in the sentence. A large proportion of the nouns of the text have semantic labels; for the most part, the case label of a noun phrase is the label associated with the head noun of the noun phrase. In some instances, a preceding adjective influences the case label of the noun phrase, as, for example, when an adjective with a semantic label precedes a generic noun. A number of the resulting case labels for noun phrases (e.g. time, location, etc.)are similar those suggested by Fillmore [1972], but domain dependent case labels (e.g. disorder, patient, etc.) have also been introduced. For example: the noun phrase "a generalized dermatitis" is assigned a case label of disorder, while "the ear canal" is given a case label of body_part. It should be noted that, while the coordination algorithm assumes the presence of semantic case labels for noun phrases, based on semantic tags tor the text, it does not depend on the specific values of these labels, which change from domain to domain. THE ALGORITHM The algorithm makes the simplifying assumption that each coordinate conjunction conjoins only two conjuncts. One of these appears shortly after the conjunction and is called the post-conjunct, while the other appears earlier in the sentence and is referred to as the pre-conjunct. 
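The case-label scheme just described can be sketched as follows. The tag format (a syntactic tag with an optional semantic part, as in noun||disorder) follows the examples above, but the function names and the exact fallback behaviour are illustrative assumptions; as the paper notes later, a generic head noun should not always inherit the label of a preceding modifier, so a real implementation would be more selective.

# Minimal sketch of deriving a case label for a noun phrase from its tagged
# words; tag format and names are assumptions based on the examples above.

def semantic_part(tag):
    # e.g. "noun||disorder" -> "disorder"; "noun" -> None
    parts = tag.replace("//", "||").split("||")
    return parts[1].strip() if len(parts) > 1 and parts[1].strip() else None

def case_label(noun_phrase):
    # noun_phrase: list of (word, tag) pairs, head noun last.
    head_word, head_tag = noun_phrase[-1]
    label = semantic_part(head_tag)
    if label:
        return label
    # Head noun is generic: let a semantically labelled preceding modifier
    # supply the label (a simplification of the behaviour described above).
    for word, tag in reversed(noun_phrase[:-1]):
        label = semantic_part(tag)
        if label:
            return label
    return "unknown"

print(case_label([("a", "det"), ("generalized", "adj"),
                  ("dermatitis", "noun||disorder")]))     # -> disorder
print(case_label([("the", "det"), ("ear", "noun||body_part"),
                  ("canal", "noun||body_part")]))         # -> body_part
print(case_label([("the", "det"), ("debris", "noun")]))   # -> unknown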
The identification of the post-conjunct is fairly straightforward: the first complete phrase that follows the coordinate conjunction is presumed to be the post-conjunct. This has been found to work in all of the sentences on which this algorithm has been tested. The identification of the pre-conjunct is somewhat more complicated. There are three different levels of rules that are tried in order to find the matching pre-conjunct. These are referred to as level-l, level-2, and level-3 rules in decreasing order of importance. The steps involved in the identification of the pre- and the post-conjunct are described below. (a) The sentential components (phrases or single words not grouped into a phrase by the parser) are pushed onto a stack until a coordinate conjunction is encountered. (b) When a coordinate conjunction is encountered, the post-conjunct is taken to be the immediately following phrase, and its type (noun phrase, prepositional phrase, etc.) and case label are noted. (c) Components are popped off the stack, one at a time, and their types and case labels are compared with those of the post-conjunct. For each component that is popped, the rules at level- 1 and level-2 are tried first. If both the type and case label of a popped component match those of the post-conjunct (level-I rule), then this component is taken to be the pre-conjunct. Otherwise, if the type of the popped component is the same as that of the post-conjunct and the case label is compatible (case labels like medication and treatment, which are semantically 16 sentence([ noun_phrase(ease_label(body_part}, [(~h¢, det), (¢~r, noun I body_part)]) verb_phrase([(should, aux), (be, aux), (cleaned, verb l past_p)l) prep_phrase([(by, prep), gerund_phrase([(flushing, verb I gerund)I)1) word([(away, advl Ilocation)]) noun_phrase(ease label{unknown), [(the, det), (debris, noun)]) word([(and, conj I co ord)]) noun_phrase(ease_label{body_fluld), [(exudate, noun l I body_fluid)]) gerund_phrase([(using, verb I gerund), noun_phrase(ease_label{medication}, [(warm, adj), (saline, adj I I medication), (solution, noun l I medication)])]) word([(or, conj I co_ord)]) noun phrase(ease_label{unknown), [(water, noun)]) prep_phrase([(with, prep), noun phrase(ease_label{medication), [(a, det), (very, adv I degree), (dilute, adj I I degree), (germicidal, adj I I medical), (detergent, noun I I medication)])]) word([(comma, punc)]) word([(and, conj I co_ord)]) noun_phrase(case_label{body_part), [(the, det), fcanal, noun l I body_part)]) verb_phrase([(dried, verb I past p)]) word([(as, conj I correlative)I) word([(gently, adv)]) word([(as, conj I correlative)]) adj_phrase([(possible, adj)]) ]). Figure 1 similar, are considered to be compatible) to that of the post-conjunct (level-2 rule), then this component is identified as the pre-conjunct. If the popped component satisfies neither of these rules, then another component is popped from the stack and the level- 1 and level-2 rules are tried for that component. (d) If no component is found that satisfies the level-1 or level-2 rules and the beginning of the sentence is reached (popping components off the stack moves backwards through the sentence), then the requirement that the case label be either the same or compatible is relaxed. The component with the same type as that of the post-conjunct (irrespective of the case label) that is closest to the coordinate conjunction, is identified as the pre-conjunct (level-3 rule). 
(e) If a pre-conjunct is still not found, then the post-conjunct is conjoined to the first word in the sentence. Although there is very little embedding of phrases in the structures provided by the semi- parser, noun phrases may be embedded in prepositional phrases, infinitive phrases, and gerund phrases on the stack. The algorithm does permit noun phrases that are post-conjuncts to be conjoined with noun phrases embedded as objects of, say, a previous prepositional phrase (e.g., in the sentence fragment "in dogs and cats", the noun phrase 'cats' is conjoined with the noun phrase 'dogs' which is embedded as the object of the prepositional phrase 'in dogs'), or other similar phrases. We have observed empirically that, at least for this fairly carefully written and edited manual, long distance conjuncts have a strong tendency to exhibit high degrees of parallelism. Hence, conjuncts that are physically adjacent may merely be of the same syntactic type (or may even be syntactically dissimilar); as the distance between conjuncts increases, the degree of parallelism tends to increase, so that conjuncts are highly likely to be of the same semantic category, and syntactic and even lexical repetitions are to be found (e.g., on those occasions when a post- conjunct is to be associated with a prepositional phrase that occurs 30 words previous, the preposition may well be repeated). The gist of the algorithm, then, is as follows: to look for sentential components with the same syntactic and semantic categories as the post-conjunct, first nearby and then with increasing distance toward the beginning of the sentence; failing to find such, to look for the same syntactic category, 17 sentence([ prep_phrase([(with, prep), noun_phrase([(persistent, adjll time), (or, conjlco_ord), (untreated, adj), (otitis_externa, noun I I disorder)I)]) word([(comma, pune)]) noun phrase([(the, det), (epithelium, noun)]) prep_phrase([(of, prep), noun phrase([(the, det), (ear, noun I I body_part), (canal, noun l I body_part)])]) verb_phrase([(undergoes, verb 13sg)]) noun_phrase([(hypertrol~hy, noun I I disorder)]) word([(and, eonj I co_ord)]) verb_phrase([(becomes, verb I beverb 13sg)]) adj_phrase([(fibroolastie, adj I I disorder)]) ]). Figure 2 first close at hand and then with increasing distance, and if all else fails to default to the beginning of the sentence as the pre-conjunct (the semi-parser does not recognize clauses as such, and there may be no parallelism of any kind between the beginnings of coordinated clauses). Provisions must be made for certain kinds of parallelism which on the surface appear to be syntactically dissimilar - for example, the near- equivalence of noun and gerund phrases. In the text used as a testbed, gerund phrases are freely coordinated with noun phrases in virtually all contexts. Our probabilistic labelling system is currently being revised to allow the semantic categories for nouns to be associated with gerunds, but at the time this experiment was conducted, gerund phrases were recognized as conjuncts with nouns only on syntactic grounds - a relatively weak criterion for the algorithm. Further, there are instances in the text where prepositional phrases are conjoined with adjectives or adverbs - the results reported here do not incorporate provisions for such. Consider the sentence "The ear should be cleaned by flushing away the debris and exudate using warm saline solution or water with a very dilute germicidal detergent, and the canal dried as gently as possible." 
The semi-parser produces the structure shown in Figure 1. The second 'and' conjoins the entire clause preceding it with the clause that follows it in the sentence. Although the algorithm does not identify clause conjuncts, it does identify the beginnings of the two clauses, "the ear" and "the canal", as the pre- and post-conjuncts, in spite of several intervening noun phrases. This is possible because the case labels of both these noun phrases agree (they arc both body_part). 18 THE DRAWBACKS Before reporting the results of an implementation of the algorithm on a 10,000 word chapter of the Merck Veterinary Manual we describe some of the drawbacks of the current implementation. (i) The algorithm assumes that a coordinate conjunction conjoins only two conjuncts in a sentence. This assumption is often incorrect. If a construct like [A, B, C, and D] appears in a sentence, the coordinate conjunction 'and' frequently, but not always, conjoins all four components. (B, for example, could be parenthetical.) The implemented algorithm looks for only two conjuncts and produces a structure like [A, B, [and [C, DIll, which is counted as correct for purposes of reporting error rates below. Our "coordinate conjunction specialist" needs to work very closely with a "comma specialist" - an as-yet undeveloped program responsible for, among other things, identifying parallelism in components separated by commas. (ii) The current semi-parser recognizes certain simple phrases only and is unable to recognize clause boundaries. For the conjunct identifier, this means that it becomes impossible to identify two clauses with appropriate extents as conjuncts. The conjunct identifier has, however, been written in such a way that whenever a "clause specialist" is developed, the final structure produced should be correct. Therefore, the conjunct identifier was held responsible for correctly recognizing only the beginnings of the clauses that are being conjoined. Similarly, for phrases not explicitly recognized by the semi-parser, the current conjunct specialist is expected only to conjoin the beginnings of the phrases - not to somehow bound the extents of the phrases. Consider the sentence([ noun_phrase([(antibacterial, adj I I medication), (drugs,noun I plurall I medication)]) verb_phrase([(administered, verb I past_p)]) prep_phrase([(in, prep), noun_phrase([(the, det),(feed, noun)])]) verb phrase([(appeared, verb l beverb)]) inf_phrase([(to, infinitive), verb_phrase([(be, verb lbeverb)]), adj_phrase([(effective, adj)])l) prep_phrase([(in, prep), noun_phrase([(some, adj I I quantity), (herds, noun lplural I I patient)])]) word([w(and, conj I co_ord)]) prep_phrase([(with out, prep), noun_phrase([fbenefit, noun)])]) prep_phrase([(in, prep), noun_phrase([(others, pro I plural)])]) ]). Figure 3 sentence "With persistent or untreated otitis externa, the epithelium of the ear canal undergoes hypertrophy and becomes fibroplastic." The structure received by the coordination specialist from the semi-parser is shown in Figure 2. In this sentence, the components "undergoes hypertrophy" and "becomes fibroplastic" are conjoined by the coordinate conjunction 'and'. The conjunct identifier only recognizes the verb phrases "undergoes" and "becomes" as the pre- and post-conjuncts respectively and is not expected to realize that the noun phrases following the verb phrases are objects of these verb phrases. 
(iii) Although it is generally true that the components to be conjoined should be of the same type (noun phrase, infinitive phrase, etc.), some cases of mixed coordination exist. The current algorithm allows for the mixing of only gerund and noun phrases. Consider the sentence "Antibacterial drugs administered in the feed appeared to be effective in some herds and without benefit in others." The structure that the coordination specialist receives from the semi- parser is shown in Figure 3. Note that the prepositional phrases are eventually attached to their appropriate components, so that the phrase "in some herds" ultimately is attached to the adjective "effective". The system does not include any rule for the conjoining of prepositional phrases with adjectival or adverbial phrases. Hence the phrases "effective in some herds" and "without benefit in others" were not conjoined. RESULTS AND FUTURE WORK The algorithm was tested on a 10,000 word chapter of the Merck Veterinary Manual. The results of the tests are shown in Table 1. We are satisfied with these results for the following reasons: (a) The system is being tested on a large body of uncontrolled text from a real domain. (b) The conjunct identification algorithm is domain independent. While the semantic labels produced by the probabilistic labelling system are domain dependent, and the rules for generalizing them to case labels for the noun phrases contain some domain dependencies (there is some evidence, for example, that a noun phrase Table 1: Con i unction and Or but TOTAL Results of the algorithm on the 'Eye and Ear' chapter Total Cases Cowect Cases Percenm~e 366 305 83.3% 137 109 79.6% 41 30 73.2% 544 444 81.6% 19 consisting of a generic noun preceded by a semantically labelled modifier should not always receive the semantic label of the modifier) the conjunct specialist pays attention only to whether the case labels match - not to the actual values of the case labels. (c) The true error rate for the simple conjunct identification algorithm alone is lower than the 18.4% suggested by the table, and making some fairly obvious modifications will make it lower still. The entire system is composed of several components and the errors committed by some portions of the system affect the error rate of the others. A significant proportion of the errors committed by the conjunct identifier are due to incorrect tagging, absence of semantic tags for gerunds, improper parsing, and other matters beyond its control. For example, the fact that gerunds were not marked with the semantic labels attached to nouns has resulted in a situation where any gerund occurring as post-conjunct is preferentially conjoined with any preceding ~eneric noun. More often than not, the gerund should have received a semantic tag and would properly be conjoined to a preceding non-generic noun phrase that would have been of the same semantic type. (The conjunction specialist is not the only portion of the system which would benefit from semantic tags on the gerunds; the system is currently under revision to include them.) From an overall perspective, the conjunct identification algorithm presented above seems to be a very promising one. It does depend a lot upon help received from other components of the system, but that is almost inevitable in a large system. The identification of conjuncts is vital to every NLP system. However, the authors were unable to find references to any current system where success rates were reported for conjunct identification. 
We believe that the reason behind this could be that most systems handle this problem by breaking it up into smaller parts. They start with a more sophisticated parser that takes care of some of the conjuncts, and then employ some semantic tools to overcome the ambiguities that may still exist due to co-ordinate conjunctions. Since these systems do not have a "specialist" working solely for the purpose of conjunct identification, they do not have any statistic about the success rate for it. Therefore, we are unable to compare our success rates with those of other systems. However, due to the reasons given above, we feel that an 81.6% success rate is satisfactory. We have noted several other modifications that would improve performance of the conjunct specialist. For example, it has been noticed that the coordinate conjunction 'but' behaves sufficiently differently from 'and' and 'or' to warrant a separate set of rules. The current algorithm also ignores lexical parallelism (direct repetition of words already employed in the sentence), which the writers of our text frequently use to override plausible alternate readings. The current algorithm errs in most such contexts. As mentioned above, the algorithm also needs to allow prepositional phrases to be conjoined with adjectives and adverbs in some contexts. Some attempt was made to implement such mixed coordination as a last level of rules, level-4, but it did not meet with a lot of success. FUTURE RESEARCH In addition to the above, the most important step to be taken at this point is to build the comma specialist and clause recognition specialist. Another problem that needs to be addressed involves deciding priorities when one or more prepositional phrases are attached to oneof the conjuncts of a coordinate conjunction. For example, we need to decide between the structures [[A and B] in dogs] and [A and [B in dogs]], where A and B are typically large structures themselves, A and B should be conjoined, and 'in dogs' may appropriately be attached to B. It is not clear whether the production of the appropriate structure in such cases rightfully belongs to the knowledge analysis portion of our system, or whether most such questions can be answered by the NLP portion of our system with the means at its disposal. Further, the basic organization of the NLP component, with the tagger and the semi-parser generating the flat structure and then the various specialist programs working on the sentence structure to improve it, looks a lot like a blackboard system architecture. Therefore, one of the future ventures could be to try to look into some blackboard architecture and assess its applicability in this system. Finally, there are ambiguities inherently associated with coordinate conjunctions, including the problem of differentiating between "segregatory" and "combinatory" use of conjunctions [Quirk et al., 1982] (e.g. "fly and mosquito repellants" could refer to 'fly' and 'mosquito repellants' or to 'fly repellants' and 'mosquito repcllants'), and the determination of whether the 'or' in a sentence is really used as an 'and' (e.g. "dogs with glaucoma or keratoconjunctivitis will recover" implies that dogs with glaucoma and dogs with keratoconjunctivitis will recover). The current algorithm does not address these issues. 20 REFERENCES Agarwal, Rajeev. (1990). "Disambiguation of prepositional phrase attachments in English sentences using case grammar analysis." MS Thesis, Mississippi State University. Boggess, Lois; Agarwal, Rajeev; and Davis, Ron. 
(1991). "Disambiguation of prepositional phrases in automatically labeled technical text." In Proceedings of the Ninth National Conference on Artificial Intelligence:l: 155-9. Davis, Ron. (1990). "Automatic text labelling system." MCS project report, Mississippi State University Fillmore, Charles J. (1972). "The case for case." Universals in Linguistic Theory, Chicago Holt, Rinehart & Winston, Inc. 1-88. Hodges, Julia; Boggess, Lois; Cordova, Jose; Agarwal, Rajeev; and Davis, Ron. (1991). "The automated building and updating of a knowledge base through the analysis of natural language text." Technical Report MSU-910918, Mississippi State University. Quirk, Randolph; Grcenbaum,,Sidney; Leech, Geoffrey; and Svartvik, Jan. (1982). A__ comprehensive grammar of the English language. Longman Publishers. k --1 f Probabillstic ~ \ I Text I . ~ ¢ . ~ \ cLlaa~llf~ddatndxt ~ Semi-Parser ) F/ruct• (Coojunot Specialist) ( Preposition Disambiguator 1 / Knowled~,e Base ? I Restructurer 1 Facts Deeper ~ Structures, Relations Knowledge Base Manager Acquisition ps Expert System Figure 4: Overall System 21
1992
3
TENSE TREES AS THE "FINE STRUCTURE" OF DISCOURSE Chung Hee Hwang &: Lenhart K. Schubert Department of Computer Science University of Rochester Rochester, New York 14627, U. S. A. {hwang, schubert }@cs. rochester, edu ABSTRACT We present a new compositional tense-aspect deindex- ing mechanism that makes use of tense trees as com- ponents of discourse contexts. The mechanism allows reference episodes to be correctly identified even for embedded clauses and for discourse that involves shifts in temporal perspective, and permits deindexed logical forms to be automatically computed with a small num- ber of deindexing rules. 1 Introduction Work on discourse structure, e.g., [Reichman, 1985; Grosz and Sidner, 1986; Allen, 1987], has so far taken a rather coarse, high-level view of discourse, mostly treating sentences or sentence-like entities ("utterance units, .... contributions," etc.) as the lowest-level dis- course elements. To the extent that sentences are ana- lyzed at all, they are simply viewed as carriers of certain features relevant to supra-sentential discourse structure: cue words, tense, time adverbials, aspectual class, into- national cues, and others. These features are presumed to be extractable in some straightforward fashion and provide the inputs to a higher-level discourse segment analyzer. However, sentences (or their logical forms) are not in general "flat," with a single level of structure and fea- tures, but may contain multiple levels of clausal and ad- verbial embedding. This substructure can give rise to arbitrarily complex relations among the contributions made by the parts, such as temporal and discourse rela- tions among subordinate clausal constituents and events or states of affairs they evoke. It is therefore essen- tial, in a comprehensive analysis of discourse structure, that these intra-sentential relations be systematically brought to light and integrated with larger-scale dis- course structures. Our particular interest is in tense, aspect and other indicators of temporal structure. We are developing a uniform, compositional approach to interpretation in which a parse tree leads directly (in rule-to-rule fash- ion) to a preliminary, indezical logical form, and this LF is deindezed by processing it in the current context (a well-defined structure). Deindexing simultaneously transforms the LF and the context: context-dependent constituents of the LF, such as operators past, pres and perf and adverbs like today or earlier, are replaced by explicit relations among quantified episodes; (anaphora are also deindexed, but this is not discussed here); and new structural components and episode tokens (and other information) are added to the context. This dual transformation is accomplished by simple recur- sive equivalences and equalities. The relevant context structures are called tense trees; these are what we pro- pose as the "fine structure" of discourse, or at least as a key component of that fine structure. In this paper, we first review Reichenbach's influen- tial work on tense and aspect. Then we describe tem- poral deindexing using tense trees, and extensions of the mechanism to handle discourse involving shifts in temporal perspective. 2 Farewell to Reichenbach Researchers concerned with higher-level discourse struc- ture, e.g., Webber [1987; 1988], Passonneau [1988] and Song and Cohen [1991], have almost invariably relied on some Reichenbach [1947]-1ike conception of tense. 
The syntactic part of this conception is that there are nine tenses in English, namely simple past, present and fu- ture tense, past, present and future perfect tense, and posterior past, present and future tense 1 (plus progres- sive variants). The semantic part of the conception is that each tense specifies temporal relations among ex- actly three times particular to a tensed clause, namely the event time (E), the reference time (R) and the speech time (S). On this conception, information in discourse is a matter of "extracting" one of the nine Re- ichenbachian tenses from each sentence, asserting the 1Exarnples of expressions in posterior tense are would, was going to (posterior past), is going to (posterior present), and will be going to (posterior future). 232 appropriate relations among E, R and S, and appro- priately relating these times to previously introduced times, taking account of discourse structure cues im- plicit in tense shifts. It is easy to understand the appeal of this approach when one's concern is with higher-level structure. By viewing sentences as essentially flat, carrying tense as a top-level feature with nine possible values and evoking a triplet of related times, one can get on with the higher- level processing with minimum fuss. But while there is much that is right and insightful about Reichenbach's conception, it seems to us unsatisfactory from a mod- ern perspective. One basic problem concerns embedded clauses. Consider, for instance, the following passage. (1) John will find this note when he gets home. (2) He will think(a) Mary has left(b). Reichenbach's analysis of (2) gives us Eb < S, Rb < Ra, Ea, where tl < t~ means tl is before t2, as below. S I I I Eb Rb R~ E~ That is, John will think that Mary's leaving took place some time before the speaker uttered sentence (2). This is incorrect; it is not even likely that John would know about the utterance of (2). In actuality, (2) only implies that John will think Mary's leaving took place some time before the time of his thinking, i.e., S < Ra, Ea and Eb < Rb, Ra , as shown below. S ~ Ra,E~ Eb f Rb Thus, Reichenbach's system fails to take into account the local context created by syntactic embedding. Attempts have been made to refine Reichenbach's theory (e.g., [Hornstein, 1977; Smith, 1978; Nerbonne, 1986]), but we think the lumping together of tense and aspect, and the assignment of E, R, S triples to all clauses, are out of step with modern syntax and se- mantics, providing a poor basis for a systematic, com- positional account of temporal relations within clauses and between clauses. In particular, we contend that English past, present, future and perfect are separate morphemes making separate contributions to syntactic structure and meaning. Note that perfect have, like most verbs, can occur untensed ("She is likely to have left by now"). Therefore, if the meaning of other tensed verbs such as walks or became is regarded as compos- ite, with the tense morpheme supplying a "present" or "past" component of the meaning, the same ought to be said about tensed forms of have. The modals will and would do not have untensed forms. Nevertheless, con- siderations of syntactic and semantic uniformity suggest that they too have composite meanings, present or past tense being one part and "future modality" the other. 
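As a rough illustration of this compositional view, each tense or aspect morpheme can be treated as an operator that introduces one episode relative to the current reference episode. The sketch below is ours, not the paper's deindexing machinery (which is introduced with tense trees later); the operator names anticipate those defined below, the relation names are chosen to echo the ones used later in the paper, and the clause-embedding structure of example (2) is deliberately collapsed.

from itertools import count

_ids = count(1)

def deindex(lf, ref):
    # lf is either an operator application ("past"|"pres"|"futr"|"perf", sub_lf)
    # or a base clause given as a string.  Each operator introduces one new
    # episode and relates it to the current reference episode `ref`.
    if isinstance(lf, str):
        return ref, []
    op, sub = lf
    e = "e%d" % next(_ids)
    rel = {"past": "before", "pres": "at-about",
           "futr": "after", "perf": "impinges-on"}[op]
    _, constraints = deindex(sub, e)
    return e, [(e, rel, ref)] + constraints

# Embedded clause of example (2), collapsing the clause boundary:
# "will ... has left"  ~  futr(perf(leave)).  The perfect is located relative
# to the future thinking episode e1, not the speech time.
_, cs = deindex(("futr", ("perf", "Mary leave")), "SpeechTime")
print(cs)   # [('e1', 'after', 'SpeechTime'), ('e2', 'impinges-on', 'e1')]

# "would call"  ~  past(futr(call)): past tense plus future modality.
_, cs2 = deindex(("past", ("futr", "he call")), "SpeechTime")
print(cs2)  # [('e3', 'before', 'SpeechTime'), ('e4', 'after', 'e3')]

On this picture the embedded perfect is anchored to the (future) thinking episode rather than to the speech time, which is the behaviour argued for above, and would comes out as a past operator applied to a future one, i.e., the "past plus future modality" decomposition just proposed.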
This unifies the analyses of the modals in sentences like "He knows he will see her again" and "He knew he would see her again," and makes them entirely parallel to paraphrases in terms of going to, viz., "He knows he is going to see her again" and "He knew he was going to see her again." We take these latter "posterior tense" forms to be patently hierarchical (e.g., is going to see her has 4 levels of VP structure, counting to as an aux- iliary verb) and hence semantically composite on any compositional account. Moreover, going to can both subordinate, and be subordinated by, perfect have, as in "He is going to have left by then." This leads to ad- ditional "complex tenses" missing from Reichenbach's list. We therefore offer a compositional account in which operators corresponding to past (past), present (pres), future (futr) and perfect (perf) contribute separately and uniformly to the meanings of their operands, i.e., formulas at the level of LF. Thus, for instance, the tem- poral relations implicit in "John will have left" are ob- tMned not by extracting a "future perfect" and asserting relations among E, R and S, but rather by successively taking account of the meanings of the nested pres, futr and perf operators in the LF of the sentence. As it happens, each of those operators implicitly introduces exactly one episode, yielding a Reichenbach-like result in this case. (But note: a simple present sentence like "John is tired" would introduce only one episode con- current with the speech time, not two, as in Reichen- bach's analysis.) Even more importantly for present purposes, each ofpres, past, futr and perf is treated uni- formly in deindexing and context change. More specif- ically, they drive the generation and traversal of tense trees in deindexing. 3 Tense Trees Tense trees provide that part of a discourse context structure 2 which is needed to interpret (and deindex) temporal operators and modifiers within the logical form of English sentences. They differ from simple lists of Reichenbachian indices in that they organize episode tokens (for described episodes and the utterances them- selves) in a way that echoes the hierarchy of temporal and modal operators of the sentences and clauses from which the tokens arose. In this respect, they are anal- 2In general, the context structure would also contain speaker and hearer parameters, temporal and spatial frames, and to- kens for salient referents other than episodes, among other components--see [Allen, 1987]. 233 ogous to larger-scale representations of discourse struc- ture which encode the hierarchic segment structure of discourse. (As will be seen, the analogy goes further.) Tense trees for successive sentences are "overlaid" in such a way that related episode tokens typically end up as adjacent elements of lists at tree nodes. The traver- sal of trees and the addition of new tokens is simply and fully determined by the logical forms of the sentences being interpreted. The major advantage of tense trees is that they al- low simple, systematic interpretation (by deindexing) of tense, aspect, and time adverbials in texts consisting of arbitrarily complex sentences, and involving implicit temporal reference across clause and sentence bound- aries. This includes certain relations implicit in the ordering of clauses and sentences. 
As has been fre- quently observed, for a sequence of sentences within the same discourse segment, the temporal reference of a sentence is almost invariably connected to that of the previous sentence in some fashion. Typically, the rela- tion is one of temporal precedence or concurrency, de- pending on the aspectual class or aktionsart involved (eft, "John closed his suitcase; He walked to the door" versus "John opened the door; Mary was sleeping"). However, in "Mary got in her Ferrari. She bought it with her own money," the usual temporal precedence is reversed (based on world knowledge). Also, other dis- course relations could be implied, such as cause-of, ex- plains, elaborates, etc. (more on this later). Whatever the relation may be, finding the right pair of episodes involved in such relations is of crucial importance for discourse understanding. Echoing Leech [1987, p41], we use the predicate constant orients, which subsumes all such relations. Note that the orients predications can later be used to make probabilistic or default inferences about the temporal or causal relations between the two episodes, based on their aspectual class and other infor- mation. In this way they supplement the information provided by larger-scale discourse segment structures. We now describe tense trees more precisely. Tense Tree Structure The form of a tense tree is illustrated in Figure 1. As an aid to intuition, the nodes in Figure 1 are annotated with simple sentences whose indexical LFs would lead to those nodes in the course of deindexing. A tense tree node may have up to three branches--a leftward past branch, a downward perfect branch, and a rightward future branch. Each node contains a stack-like list of recently introduced episode tokens (which we will often refer to simply as episodes). In addition to the three branches, the tree may have (horizontal) embedding links to the roots of embed- ded tense trees. There are two kinds of these embed- ding links, both illustrated in Figure 1. One kind, utterance pres node. .......... ~ h o m e 't "f He left i(" ~ ~ .... ~KP res Ho ,,,,o.vo \ He had left He wbuld He will She will think olo. He would have left Figure 1. A Tense Tree indicated by dashed lines, is created by subordinat- ing constructions such as VPs with that-complement clauses. The other kind, indicated by dotted lines, is derived from the surface speech act (e.g., telling, ask- ing or requesting) implicit in the mood of a sentence. On our view, the utterances of a speaker (or sentences of a text, etc.) are ultimately to be represented in terms of modal predications expressing these surface speech acts, such as [Speaker tell Hearer (That ~)] or [Speaker ask Hearer (Whether ~)]. Although these speech acts are not explicitly part of what the speaker uttered, they are part of what the hearer gathers from an utterance. Speaker and Hearer are indexical con- stants to be replaced by the speaker(s) and the hearer(s) of the utterance context. The two kinds of embedding links require slightly different tree traversal techniques as will be seen later. A set of trees connected by embedding links is called a tense tree structure (though we often refer loosely to tense tree structures as tense trees). This is in effect a tree of tense trees, since a tense tree can be embedded by only one other tree. At any time, exactly one node of the tense tree structure for a discourse is in focus, and the focal node is indicated by ~). 
Note that the "tense tree" in Figure 1 is in fact a tense tree structure, with the lowest node in focus. By default, an episode added to the right end of a list at a node is "oriented" by the episode which was previously rightmost. For episodes stored at different nodes, we can read off their temporal relations from the tree roughly as follows. At any given moment, for a pair of episodes e and e' that are rightmost at nodes n and n', respectively, where n' is a daughter of n, if the branch connecting the two nodes is a past branch, Is' 234 before e]3; if it is a perfect branch, [e' impinges-on e] (as we explain later, this yields entailments [e' before e] if e' is nonstative and [e' until e] if e' is stative, respec- tively illustrated by "John has left" and "John has been working"); if it is a future branch, [d after e]; and if it is an embedding link, [d at-about e]. These orienting relations and temporal relations are not extracted post hoc, but rather are automatically asserted in the course of deindexing using the rules shown later. As a preliminary example, consider the following pas- sage and a tense tree annotated with episodes derived from it by our deindexing rules: (3) John picked up the phone. (4) He had told Mary that uj,.,® ...... 2' Jpast epick, el ¢f perf etellCD - -/~ e2 ~:t he would call her. ecall u3 and u4 are utterance episodes for sentences (3) and (4) respectively. Intuitively, the temporal content of sentence (4) is that the event of John's telling, etdz, took place before some time el, which is at the same time as the event of John's picking up the phone, epiek; and the event of John's calling, eean, is located after some time e2, which is the at the same time as the event of John's telling, eteu. For the most part, this information can be read off directly from the tree: [eple~ orients el], [etett before el] and [eeatt after e2]. In addition, the deindexing rules yield [e2 same-time etell]. From this, one may infer [etell before epic~] and [ecau after eteu], assuming that the orients relation defaults to same-time here. How does [epiek orients el] default to [epiek same-time eli? In the tense tree, el is an episode evoked by the past tense operator which is part of the meaning of had in (4). It is a stative episode, since this past opera- tor logically operates on a sentence of form (perf &), and such a sentence describes a state in which & has occurred--in this instance, a state in which John has told Mary that he will call her. It is this stativity of el which (by default) leads to a same-time interpreta- tion of orients. 4 Thus, on our account, the tendency of past perfect "reference time" to align itself with a 3Or, sometimes, same-time (cf., "John noticed that Mary looked pale" vs. "Mary realized that someone broke her vase"). This is not decided in an ad hoc manner, but as a result of sys- tematically interpreting the context-charged relation belT. More on this later. 4 More accurately, the default interpretation is [(end-of epick ) same-time ell, in view of examples involving a longer preceding event, such as "John painted a picture. He was pleased with the result." previously introduced past event is just an instance of a general tendency of stative episodes to align themselves with their orienting episode. This is the same tendency noted previously for "John opened the door. Mary was sleeping." We leave further comments about particu- larizing the orients relation to a later subsection. 
We remarked that the relation [e2 same-time etett] is obtained directly from the deindexing rules. We leave it to the reader to verify this in detail (see Past and Futr rules stated below). We note only that e2 is evoked by the past tense component of would in (4), and de- notes a (possible) state in which John will call Mary. Its stativity, and the fact that the subordinate clause in (4) is "past-dominated, ''5 causes [e2 bef T eteu] to be deindexed to [e2 same-time etch]. We now show how tense trees are modified as dis- course is processed, in particular, how episode tokens are stored at appropriate nodes of the tense tree, and how deindexed LFs, with orients and temporal ordering relations incorporated into them, are obtained. Processin~ of Utterances The processing of the (indexical) LF of a new utter- ance always begins with the root node of the current tense tree (structure) in focus. The processing of the top-level operator immediately pushes a token for the surface speech act onto the episode list of the root node. Here is a typical indexical LF: ( decl (past [John know (That (past (', (perf [Mary leave]))))])) "John knew that Mary had not left." (decl stands for declarative; its deindexing rule intro- duces the surface speech act of type "tell"). As men- tioned earlier, our deindexing mechanism is a composi- tional one in which operators past, futr, perf, -,, That, decl, etc., contribute separately to the meaning of their operands. As the LF is recursively transformed, the tense and aspect operators encountered, past, perf and futr, in particular, cause the focus to shift "downward" along existing branches (or new ones if necessary). That is, processing a past operator shifts the current focus down to the left, creating a new branch if necessary. The resulting tense tree is symbolized as /T. Similarly perf shifts straight down, and futr shifts down to the right, with respective results t T and \ T. pres maintains the current focus. Certain operators embed new trees at the current node, written ~--~T (e.g., That), or shift focus to an existing embedded tree, written ¢--*T (e.g., decl). Focus shifts to a parent or embedding node are symbolized as T T and .--T respectively. As a final tree operation, OT denotes storage of episode token e T (a new episode symbol not yet used in T) at the current 5A node is past-domlnated if there is a past branch in its an- cestry (where embedding finks also count as ancestry links). 235 focus, as rightmost element of its episode list. As each node comes into focus, its episode list and the lists at certain nodes on the same tree path provide explicit ref- erence episodes in terms of which past, pres, futr, pert, time adverbials, and implicit "orienting" relations are rewritten nonindexically. Eventually the focus returns to the root, and at this point, we have a nonindexical LF, as well as a modified tense tree. Deindexin~ Rules Before we proceed with an example, we show some of the basic deindexing rules here. 6 In the following,"**" is an episodic operator that connects a formula with the situation it characterizes. 
Predicates are infixed and quantifiers have restrictions (following a colon), r Decl: (decl ¢)T Oer:[[er same-time So r] ^ [Last T immediately-precedes eT] ] [[Speaker tell Hearer (That ¢~OT)] ** er]) Tree transform: (decl ¢)- T -- ',--" (<D" (,---~OT)) Pres: (pres <b)T *-* (3eT:[[e T at-about EmbT] A [Last T orients eT] ] [+or ** er]) Tree transform: (pres <D)- T = (¢" (OT)) Past: (past <b)T (3eT:[[e T bet T EmbT] ^ [LaSt/T orients eT] ] [<bo r ** et]) Tree transform: (past <b)" T '- I (<b" (O/T)) Futr: (futr <b)T (3et:[[e t after F.mbT] A [Lastx, T orients eT] ] [%., ** et]) Tree transform: (futr <b)" r = , (<b" (O\ T)) Pert: (pert <b)T (3eT:[[e T impinges-on LaStT] A [LaStlT orients eT] ] [%,, ** Tree transform: (pert <b)" T = T (<b" (O 1 r)) That: (That <b)T ~ (That <D_T ) Tree transform: (That <b)"T = *-- (<b" (~-*T)) As mentioned earlier, Speaker and Hea~er in the Decl- rule are to he replaced by the speaker(s) and the hearer(s) of the utterance. Note that each equivalence pushes the dependence on context one level deeper into the LF, thus deindexing the top-level operator. The 6See [Hwang, 1992] for the rest of our deindexing rules. Some of the omitted ones are: Fpres ( "futural present," as in "John has a meeting tomorrow"), Prog (progressive aspect), Pred (predica- tion), K, Ka and Ke ("kinds"), those for deindexing various oper- ators (especially, negation and adverbials), etc. r For details of Episodic Logic, our semantic representation, see [Schubert and Hwang, 1989; Hwang and Schubert, 1991]. symbols NOWT, Last T and Emb T refer respectively to the speech time for the most recent utterance in T, the last- stored episode at the current focal node, and the last- stored episode at the current embedding node. bet T in the Past-rule will he replaced by either before or same-time, depending on the aspectual class of its first argument and on whether the focal node of T is past- dominated. In the Pert-rule, Last T is analogous to the Reichenbachian reference time for the perfect. The impinges-on relation confines its first argument e T (the situation or event described by the sentential operand of pert) to the temporal region preceding the second argu- ment. As in the case of orients, its more specific import depends on the aspectual types of its arguments. If e T is a stative episode, impinges-on entails that the state or process involved persists to the reference time (episode), i.e., [e T until LastT]. If e T is an event (e.g., an accom- plishment), impinges-on entails that it occurred some- time before the reference time, i.e., [e T before LaStT], and (by default) its main effects persist to the reference time. s An Example To see the deindexing mechanism at work, consider now sentences (ha) and (Ca). (5) a. John went to the hospital. b. (decl Ta (past Tb [John goto Hospital] ) ) Tc c. (3 el:tel same-time Now1] [[Speaker tell Hearer (That (3 e2:[e2 before ell [[John goto Hospital] ** e2]))] ** ell) (6) a. The doctor told John he had broken his ankle. b. (decl Td (past Te [Doctor tell John (That If (past Tg (pert Th [John break Ankle])))])) t$ c. (3 e3:[[e3 same-time Now21 ^ [el immediately-precedes e3]] [[Sp eaker tell Hearer (That (3 e4:[[e4 before e3] ^ [e2 orients e411 [[Doctor tell John (That (3 eh:[e5 same-time e4] [(3 e6:[e6 before eh] [[John break Ankle] ** e6]) ** es]))] ** e4]))] ** e3]) 8We have formulated tentative meaning postulates to this ef- fect hut cannot dwell on the issue here. 
Also, we are setting aside certain well-known problems involving temporal adverbials in perfect sentences, such as the inadmissibility of * "John has left yesterday." For a possible approach, see [Schubert and Hwang, 1990]. 236 The LFs before deindexing are shown in (5,6b) (where the labelled arrows mark points we will refer to); the final, context-independent LFs are in (5,6c). The trans- formation from (b)'s to (c)'s and the corresponding tense tree transformations are done with the deindex- ing rules shown earlier. Anaphoric processing is presup- posed here. The snapshots of the tense tree while processing (5b) and (6b), at points Ta-Ti, are as follows (with a null initial context). ata atb at ¢ atd at e el el el el, e3 el, £3 • ...... ~ "'"-'. ~ ...... . ...... . .-....--.-. at f at g at h at i el, e3 el, e3 el, e3 el, e3 e2, e4 e e4/(~6 e2, e4 ~ : The resultant tree happens to be unary, but additional branches would be added by further text, e.g., a future branch by "It will take several weeks to heal." What is important here is, first, that Reichenbach-like relations are introduced compositionally; e.g., [e6 before e5], i.e., the breaking of the ankle, e6, is before the state John is in at the time of the doctor's talking to him, e4. In addition, the recursive rules take correct account of embedding. For instance, the embedded present perfect in a sentence such as "John will think that Mary has left" will be correctly interpreted as relativized to John's (future) thinking time, rather than the speech time, as in a Reichenbachian analysis. But beyond that, episodes evoked by successive sen- tences, or by embedded clauses within the same sen- tence, are correctly connected to each other. In par- ticular, note that the orienting relation between John's going to the hospital, e2, and the doctor's diagnosis, e4, is automatically incorporated into the deindexed for- mula (6c). We can plausibly particularize this orienting relation to [e4 after e2], based on the aspectual class of "goto" and "tell" (see below). Thus we have established inter-clausal connections automatically, which in other approaches require heuristic discourse processing. This was a primary motivation for tense trees. Our scheme is easy to implement, and has been successfully used in the TRAINS interactive planning advisor at Rochester [Allen and Schubert, 1991]. More on ParCicularizin~ the ORIENTS Rela¢ion The orients relation is essentially an indicator that there could be a more specific discourse relation between the argument episodes. As mentioned, it can usually be particularized to one or more temporal, causal, or other "standard" discourse relation. Existing propos- als for getting these discourse relations right appear to be of two kinds. The first uses the aspectual classes of the predicates involved to decide on discourse re- lations, especially temporal ones, e.g., [Partee, 1984], [Dowty, 1986] and [Hinrichs, 1986]. The second ap- proach emphasizes inference based on world knowledge, e.g., [Hobbs, 1985] and [Lascarides and Asher, 1991; Lascarides and Oberlander, 1992]. The work by Las- carides et hi. is particularly interesting in that it makes use of a default logic and is capable of retracting previ- ously inferred discourse relations. Our approach fully combines the use of aspectual class information and world knowledge. For example, in "Mary got in her Ferrari. She bought it with her own money," the successively reported "achievements" are by default in chronological order. 
Here, however, this default interpretation of orients is reversed by world knowledge: one owns things after buying them, rather than before. But sometimes world knowledge is mute on the connection. For instance, in "John raised his arm. A great gust of wind shook the trees," there seems to be no world knowledge supporting temporal adjacency or a causal connection. Yet we tend to infer both, perhaps attributing magical powers to John (precisely because of the lack of support for a causal connection by world knowledge). So in this case default conclusions based on orients seem decisive. In particular, we would as- sume that if e and e' are nonstative episodes, 9 where e is the performance of a volitional action and e' is not, then [e orients e'] suggests [e right-before d] and (less firmly) [e cause-of d]. 1° 4 Beyond Sentence Pairs The tense tree mechanism, and particularly the way in which it automatically supplies orienting relations, is well suited for longer narratives, including ones with tense shifts. Consider, for example, the following (slightly simplified) text from [Allen, 1987, p400]: (7) a. Jack and Sue went{e~} to a hardware store b. as someone had{e~} stolen{~5} their lawnmower. c. Sue had{e4} seen{eh} a man take it 9Non-statives could be achievements, accomplishments, cul- minations, etc. Our aspectual class system is not entirely settled yet, but we expect to have one similar to that of [Moens and Steedman, 1988]. 1°Our approach to plausible inference in episodic logic in gen- eral, and to such default inferences in particular, is probabilistic (see [Schubert and Hwang, 1989; Hwang, 1992]). The hope is that we will be able to "weigh the evidence" for or against alternative discourse relations (as particularizations of orients). 237 d. and had{,,} chased{e,} him down the street, e. but he had{e,} driven{,g} away in a truck. f. After looking{,,o} in the store, they realized{,in} that they couldn't afford{,~} a new one. Even though {b-e} would normally be considered a sub- segment of the main discourse {a, f}, both the temporal relations within each segment and the relations between segments (i.e., that the substory temporally precedes the main one) are automatically captured by our rules. For instance, el and ell are recognized as successive episodes, both preceded at some time in the past by e3, es, eT, and eg, in that order. This is not to say that our tense tree mechanism ob- viates the need for larger-scale discourse structures. As has been pointed out by Webber [1987; 1988] and oth- ers, many subnarratives introduced by a past perfect sentence may continue in simple past. The following is one of Webber's examples: (8) a. I was{,l} at Mary's house yesterday. b. We talked{,2} about her sister Jane. c. She had{e3} spent{e,} five weeks in Alaska with two friends. d. Together, they climbed{,,} Mt. McKinley. e. Mary askedoe } whether I would want to go to Alaska some time. Note the shift to simple past in d, though as Web- bet points out, past perfect could have been used. The abandonment of the past perfect in favor of simple past signals the temporary abandonment of a perspective anchored in the main narrative - thus bringing read- ers "closer" to the scene (a zoom-in effect). In such eases, the tense tree mechanism, unaided by a notion of higher-level discourse segment structure, would derive incorrect temporal relations such as [e5 orients e6] or [e6 right-after es]. 
We now show possible deindexing rules for perspec- tive shifts, assuming for now that such shifts are inde- pendently identifiable, so that they can be incorporated into the indexical LFs. new-pets is a sentence operator initiating a perspective shift for its operand, and prey- pets is a sentence (with otherwise no content) which gets back to the previous perspective. Recent T is the episode most recently stored in the subtree immediately embedded by the focal node of T. New-pets: (new-pers ¢)T *'* [$,-. T A [Itecent T orients RecentT,]] where T' = $" (~-~ T) Tree transform : (new-pers ~)" T = ~" (~-* T) Prev-pe]:s: prev-pers T ---. T (True) Tree transform : prev-pers • T = ~ T When new-pers is encountered, a new tree is created and embedded at the focal node, the focus is moved to the root node of the new tree, and the next sentence is processed in that context. In contrast with other op- erators, new-pets causes an overall focus shift to the new tree, rather than returning the focus to the orig- inal root. Note that the predication [Recen*c T orients Recen'~T, ] connects an episode of the new sentence with an episode of the previous sentence, prey-pets produces a trivial True, but it returns the focus to the embed- ding tree, simultaneously blocking the link between the embedding and the embedded tree (as emphasized by use of ~ instead of ~---). We now illustrate how tense trees get modified over perspective changes, using (8) as example. We re- peat (Sd,e) below, augmenting them with perspective changes, and show snapshots of the tense trees at the points marked. In the trees, ul,...,u5 are utterance episodes for sentences a,..., e, respectively. (8) d. TTl(new-pers Together, they climbed{,s} Mt. McKinley.) TT 2 prev.pers TT 3 e. Mary asked{,,} whether I would want to go to Alaska some time. TT ~ TI: T2: U4 151 ~3 "S , U2, ~r(~ til, t/.2~ t13~- *2" T el , e2, e3 ?el ~ e2, e3 • e4 • e4 T3: u4 T4: u4 ul u2 u3 ~ ~" S ul,u2, ......X-~'"'~ /£3,/J'5 (~'"'~ # Qe4 °e4 Notice the blocked links to the embedded tree in T3 and T4. Also, note that RecentT1 = e4 and Recenl;T2 = e5. So, by Hew-pets, we get [e4 orients e5], which can be later particularized to [e5 during e4]. It is fairly obvi- ous that the placement of new-pers and prev-pers oper- ators is fully determined by discourse segment bound- aries (though not in general coinciding with them). So, as long as the higher-level discourse segment structure is known, our perspective rules are easily applied. In that sense, the higher-level structure supplements the "fine structure" in a crucial way. However, this leaves us with a serious problem: dein- dexing and the context change it induces is supposed to be independent of "plausible inferencing"; in fact, 238 it is intended to set the stage for the latter. Yet the determination of higher-level discourse structure--and hence of perspective shifts--is unquestionably a matter of plausible inference. For example, if past perfect is fol- lowed by past, this could signal either a new perspective within the current segment (see 8c,d), or the closing of the current subsegment with no perspective shift (see 7e,f). If past is followed by past, we may have either a continuation of the current perspective and segment (see 9a,b below), or a perspective shift with opening of a new segment (see 9b,c), or closing of the current seg- ment, with resumption of the previous perspective (see 9c,d). (9) a. Mary found that her favorite vase was broken. b. She was upset. c. 
She bought it at a special antique auction, d. and she was afraid she wouldn't be able to find anything that beautiful again. Only plausible inference can resolve these ambiguities. This inference process will interact with resolution of anaphora and introduction of new individuals, identifi- cation of spatial and temporal frames, the presence of modal/cognition/perception verbs, and most of all will depend on world knowledge. In (9), for instance, one may have to rely on the knowledge that one normally would not buy broken things, or that one does not buy things one already owns. As approaches to this general difficulty, we are think- ing of the following two strategies: (A) Make a best ini- tial guess about presence or absence of new-pers/prev- pres, based on surface (syntactic) cues and then use failure-driven backtracking if the resulting interpreta- tion is incoherent. A serious disadvantage would be lack of integration with other forms of disambiguation. (B) Change the interpretation of LaStT, in effect providing multiple alternative referents for the first argument of orients. In particular, we might use Last T = {ei [ ei is the last-stored episode at the focus of T, or was stored in the subtree rooted at the focus of T after the last- stored episode at the focus of T}. Subsequent processing would resemble anaphora disam- biguation. In the course of further interpreting the dein- dexed LF, plausible inference would particularize the schematic orienting relation to a temporal (or causal, etc.) relation involving just two episodes. The result would then be used to make certain structural changes to the tense tree (after LF deindexing). For instance, suppose such a schematic orienting re- lation is computed for a simple past sentence following a past perfect sentence (like 8c,d). Suppose further that the most coherent interpretation of the second sentence (i.e., 8d) is one that disambiguates the orienting rela- tion as a simple temporal inclusion relation between the successively reported events. One might then move the event token for the second event (reported in simple past) from its position at the past node to the right- most position at the past perfect node, just as if the sec- ond event had been reported in the past perfect. (One might in addition record a perspective shift, if this is still considered useful.) In other words, we would "re- pair" the distortion of the tense tree brought about by the speaker's "lazy" use of simple past in place of past perfect. Then we would continue as before. In both strategies we have assumed a general coherence-seeking plausible inference process. While it is clear that the attainment of coherence entails delin- eation of discourse segment structure and of all relevant temporal relations, it remains unclear in which direction the information flows. Are there independent principles of discourse and temporal structure operating above the level of syntax and LF, guiding the achievement of full understanding, or are higher-level discourse and tem- poral relations a mere byproduct of full understanding? Webber [1987] has proposed independent temporal fo- cusing principles similar to those in [Grosz and Sid- net, 1986] for discourse. These are not deterministic, and Song and Cohen [1991] sought to add heuristic con- straints as a step toward determinism. For instance, one constraint is based on the presumed incoherence of simple present followed by past perfect or posterior past. 
But there are counterexamples; e.g., "Mary is angry about the accident. The other driver had been drinking." Thus, we take the question about indepen- dent structural principles above the level of syntax and LF to be still open. 5 Conclusion We have shown that tense and aspect can be analyzed compositionally in a way that accounts not only for their more obvious effects on sentence meaning but also, via tense trees, for their cumulative effect on context and the temporal relations implicit in such contexts. As such, the analysis seems to fit well with higher-level analyses of discourse segment structure, though ques- tions remain about the flow of information between lev- els. Acknowledgements We gratefully acknowledge helpful comments by James Allen and Philip Harrison on an earlier draft and much useful feedback from the members of TRAINS group at the University of Rochester. This work was sup- ported in part by NSERC Operating Grant A8818 and 239 ONR/DARPA research contract no. N00014-82-K-0193, and the Boeing Co. under Purchase Contract W-288104. A preliminary version of this paper was presented at the AAAI Fall Symposium on Discourse Structure in Natural Language Understanding and Generation, Pa- cific Grove, CA, November 1991. References [Allen, 1987] J. Allen, Natural Language Understand- ing, Chapter 14. Benjamin/Cummings Publ. Co., Reading, MA. [Allen and Schubert, 1991] J. Allen and L. K. Schu- bert, "The TRAINS project," TR 382, Dept. of Comp. Sci., U. of Rochester, Rochester, NY. [Dowty, 1986] D. Dowty, "The effect of aspectual classes on the temporal structure of discourse: se- mantics or pragmatics?" Linguistics and Philosophy, 9(1):37-61. [Grosz and Sidner, 1986] B. J. Grosz and C. L. Sid- net, "Attention, intentions, and the structure of dis- course," Computational Linguistics, 12:175-204. [Hinrichs, 1986] E. Hinrichs, "Temporal anaphora in discourses of English," Linguistics and Philosophy, 9(1):63-82. [Hobbs, 1985] J. R. Hobbs, "On the coherence and structure of discourse," Technical Report CSLI-85- 37, Stanford, CA. [Hornstein, 1977] N. Hornstein, "Towards a theory of tense," Linguistic Inquiry, 3:521-557. [Hwang, 1992] C. H. Hwang, A Logical Framework for Narrative Understanding, PhD thesis, U. of Alberta, Edmonton, Canada, 1992, To appear. [Hwang and Schubert, 1991] C. H. Hwang and L. K. Schubert, "Episodic Logic: A situational logic for natural language processing," In 3rd Conf. on Sit- nation Theory and its Applications (STA-3), Oiso, Kanagawa, Japan, November 18-21, 1991. [Lascarides and Asher, 1991] A. Lascarides and N. Asher, "Discourse relations and defeasible knowl- edge," In Proc. 29th Annual Meeting of the ACL, pages 55-62. Berkeley, CA, June 18-21, 1991. [Lascarides and Oberlander, 1992] A. Lascarides and J. Oberlander, "Temporal coherence and defeasible knowledge," Theoretical Linguistics, 8, 1992, To ap- pear. [Leech, 1987] G. Leech, Meaning and the English Verb (2nd ed), Longman, London, UK. [Moens and Steedman, 1988] M. Moens and M. Steed- man, "Temporal ontology and temporal reference," Computational Linguistics, 14(2):15-28. [Nerbonne, 1986] J. Nerbonne, "Reference time and time in narration," Linguistics and Philosophy, 9(1):83-95. [Partee, 1984] B. Partee, "Nominal and Temporal Anaphora," Linguistics and Philosophy, 7:243-286. [Passonneau, 1988] R. J. Passonneau, "A Computa- tional model of the semantics of tense and aspect," Computational Linguistics, 14(2):44-60. [Reichenbach, 1947] H. 
Reichenbach, Elements of Sym- bolic Logic, Macmillan, New York, NY. [Reichman, 1985] R. Reichman, Getting Computers to Talk Like You and Me, MIT Press, Cambridge, MA. [Schubert and Hwang, 1989] L. K. Schubert and C. H. Hwang, "An Episodic knowledge representation for Narrative Texts," In Proc. 1st Inter. Conf. on Prin- ciples of Knowledge Representation and Reasoning (KR '89), pages 444-458, Toronto, Canada, May 15- 18, 1989. Revised, extended version available as TR 345, Dept. of Comp. Sci., U. of Rochester, Rochester, NY, May 1990. [Schubert and Hwang, 1990] L. K. Schubert and C. H. Hwang, "Picking reference events from tense trees: A formal, implementable theory of English tense-aspect semantics," In Proc. Speech and Natural Language, DARPA Workshop, pages 34-41, Hidden Valley, PA, June 24-27, 1990. [Smith, 1978] C. Smith, "The syntax and interpreta- tions of temporal expressions in English," Linguistics and Philosophy, 2:43-99. [Song and Cohen, 1991] F. Song and R. Cohen, "Tense interpretation in the context of narrative," In Proc. AAAI-91, pages 131-136. Anaheim, CA, July 14-19, 1991. [Webber, 1987] B. L. Webber, "The Interpretation of tense in discourse," In Proc. 25th Annual Meeting of the ACL, pages 147-154, Stanford, CA, July 6-9, 1987. [Webber, 1988] B. L. Webber, "Tense as discourse anaphor," Computational Linguistics, 14(2):61-73. 240
CONNECTION RELATIONS AND QUANTIFIER SCOPE Long-in Latecki University of Hamburg Department of Computer Science Bodenstedtstr~ 16, 2000 Hamburg 50, Germany e-mail: [email protected] ABSTRACT A formalism will be presented in this paper which makes it possible to realise the idea of assigning only one scope-ambiguous representation to a sentence that is ambiguous with regard to quantifier scope. The scope determination results in extending this representation with additional context and world knowledge conditions. If there is no scope determining information, the formalism can work further with this scope-ambiguous representation. Thus scope information does not have to be completely determined. 0. INTRODUCTION Many natural language sentences have more than one possible reading with regard to quantifier scope. The most widely used methods for scope determination generate all possible readings of a sentence with regard to quantifier scope by applying all quantifiers which occur in the sentence in all combinatorically possible sequences. These methods do not make use of the inner structure and meaning of a quantifier. At best, quantifiers are constrained by external conditions in order to eliminate some scope relations. The best known methods are: determination of scope in LF in GB (May 1985), Cooper Storage (Cooper 1983, Keller 1988) and the algorithm of Hobbs and Shieber (Hobbs/Shieber 1987). These methods assign, for instance, six possible readings to a sentence with three quantifiers. Using these methods, a sentence must be disambiguated in order to receive a semantic representation. This means that a scope-ambiguous sentence necessarily has several semantic representations, since the formalisms for the representation do not allow for scope-ambiguity. It is hard to imagine that human beings disambiguate scope-ambiguous sentences in the same way. The generation of all possible combinations of sequences of quantifiers and the assignment of these sequences to various readings seems to be cognitively inadequate. The problem becomes even more complicated when natural language quantifiers can be interpreted distributively as well as collectively, which can also lead to further readings. Let us take the following sentence from Kempson/Cormack (1981) as an example: Two examiners marked six scripts. The two quantifying noun phrases can in this case be interpreted either distributively or collectively. The quantifier two examiners can have wide scope over the quantifier six scripts, or vice versa, which all in all can lead to various readings. Kempson and Cormack assign four possible readings to this sentence, 241 Davies (1989) even eight. (A detailed discussion will follow.) No one, however, will make the claim that people will first assign all possible representations with regard to the scope of the quantifiers and their distribution, and will then eliminate certain interpretations according to the context; but this is today's standard procedure in linguistics. In many cases, it is also almost impossible to determine a preferred reading. The difficulties that people have when they are forced to disambiguate such sentences (to explicate all possible readings) point to the fact that people only assign an under- determined scope-ambiguous representation in the first place. Such a representation of the example sentence would only contain the information that we are dealing with a marking-relation between examiners and scripts, and that we are always dealing with two examiners and six scripts. 
This representation does not contain any information about scope. On the basis of this representation one may in a given context derive a representation with a determined scope. But it may also be the case that this information is sufficient in order to understand the sentence if no scope-defining information is given in the context, since in many cases human beings do not disambiguate such sentences at all. They use underdetermined, scopeless interpretations, because their knowledge often need not be so precise. If a disambiguation is carried out, then this process is done in a very natural way on the basis of context and world knowledge. This points to the assumption that scope determination by human beings is performed on a semantic level and is deduced on the basis of acquired knowledge. I will present a formalism which works in a similar way. This formalism will also show that it is not necessary to work with many sequences of quantifiers in order to determine the various readings of a sentence with regard to quantifier scope. Within this formalism it is possible to represent an ambiguous sentence with an ambiguous representation which need not be disambiguated, but can be disambiguated at a later stage. The readings can either be specified more clearly by giving additional conditions, or they can be deduced from the basic ambiguous reading by inference. Here, the inner structure and the meaning of quantifiers play an important role. The process of disambiguation can only be performed when additional information that restricts the number of possible readings is available. As an example of such information, I will treat anaphoric relations. Intuitively speaking, the difference between assigning an undertermined representation to an ambiguous sentence and assigning a disjunction of all possible readings to this sentence corresponds to the difference between the following statements*: "Peter owns between 150 and 200 books." and "Peter owns 150 or 151 or 152 or ... or 200 books." It goes without saying that both statements are equivalent, since we can understand "150 or 151 or ... or 200" as a precise specification of "between 150 and 200". Nevertheless, there are procedural differences in processing the two pieces of information; and there are cognitive differences for human beings, since we would never explicitly utter the second sentence. If we could represent "between 150 and 200" directly by a simple formula and not by giving a disjunction of 51 elements, then we may certainly gain great procedural and representational advantages. The deduction of readings in semantics does not of course exclude a consideration of syntactic restrictions. They can be imported into the semantics, for example by passing syntactic information with special indices, as * The comparison stems from Christopher Habel. 242 described in Latecki (1991). Nevertheless, in this paper I will abstain from taking syntactic restrictions into consideration. 1. SCOPE-AMBIGUOUS REPRESENTATION AND SCOPE DETERMINATION The aims of the representation presented in this paper are as follows: 1. Assigning an ambiguous semantic representation to an ambiguous sentence (with regard to quantifier scope and distributivity), from which further readings can later be inferred. 2. The connections between the subject and objects of a sentence are explicitly represented by relations. The quantifiers (noun phrases) constitute restrictions on the domains of these relations. 3. 
Natural language sentences have more than one reading with regard to quantifier scope (and distributivity), but these readings are not independent of one another. The target representation makes the logical dependencies of the readings easily discernible. 4. The construction of complex discourse referents for anaphoric processes requires the construction of complex sums of existing discourse referents. In conventional approaches, this can lead to a combinatorical explosion (cf. Eschenbach et al. 1989 and 1990). In the representation which is presented here, the discourse referents are immediately available as domains of the relations. Therefore, we need not construe any complex discourse referents. Sometimes we have to specify a discourse referent in more detail, which in turn can lead to a reduction in the number of possible readings. I now present the formalism. The representational language used here is second-order predicate logic. However, I will mainly use set-theoretical notation (which can be seen as an abbreviation of the corresponding notation of second-order logic). I choose this notation because it points to the semantic content of the formulas and is thus more intuitive. Let R ~ XxY be a relation, that means, a sub-set of the product of the two sets X and Y. The domains of R will be called Dom R and Range R, with Dom R={x~ X: 3y~ Y R(x,y)} and Range R={y~ Y: 3x~ X R(x,y)}. I make the explicit assumption here that all relations are not empty. (This assumption only serves in this paper to make the examples simpler.) In the formalism, a verb is represented by a relation whose domain is defined by the arguments of verbs. Determiners constitute restrictions on the domains of the relation. These restrictions correspond to the role of determiners in Barwise's and Cooper's theory of generalized quantifiers (Barwise and Cooper 1981). This means for the following sentence: (1.1) Every boy saw a movie. that there is a relation of seeing between boys and movies. In the formal notation of second-order logic we can describe this piece of information as follows: (1.1.a) 3X2 (Vxy (X2(x,y) ~ Saw(x,y) & Boy(x) & M0vie(y) )) X2 is a second-order variable over the domain of the binary predicates; and Saw, Boy, and Movie are second-order constants which represent a general relation of seeing, the set of all boys, and the set of all movies, respectively. We will abbreviate the above formula by the following set-theoretical formula: 240 (1.1.b) 3saw (saw ~ Boy x Movie) In this formula, we view saw as a sorted variable of the sort of the binary seeing- relations. The variable saw corresponds to the variable X2 in (1.1.a). (1.1.b) describes an incomplete semantic representation of sentence (1.1). Part of the certain knowledge that does not determine scope in the case of sentence (1.1) is also the information that all boys are involved in the relation, which is easily describable as: Dom saw=Boy. We obtain this information from the denotation of the determiner every. In this way we have arrived at the scope- ambiguous representation of (1.1): (1.1.c) 3saw (saw ~ Boy x Movie & Dom saw=Boy) It may be that the information presented in (1.1.c) is sufficient for the interpretation of sentence (1.1). A precise determination of quantifier scope need not be important at all, since it may be irrelevant whether each boy saw a different movie (which corresponds to the wide scope of the universal quantifier) or whether all boys saw the same movie (which corresponds to the wide scope of the existential quantifier). 
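To make the set-theoretic conditions concrete, here is a small illustrative sketch; it is our own encoding, not part of the paper's formalism, which is stated in second-order logic. A verb relation is modelled as a set of pairs, and the scope-neutral representation (1.1.c) is checked against two situations corresponding to the two scoped readings developed in the discussion that follows. The helper names dom, ran and satisfies_1_1_c are ours.

```python
# Illustrative sketch (our own encoding): a relation is a set of (x, y) pairs;
# determiners contribute conditions on its domains.

def dom(rel):
    return {x for x, _ in rel}

def ran(rel):
    return {y for _, y in rel}

def satisfies_1_1_c(saw, boys, movies):
    """Scope-ambiguous representation (1.1.c):
       saw is a non-empty subset of Boy x Movie, and Dom saw = Boy."""
    return (bool(saw)
            and saw <= {(b, m) for b in boys for m in movies}
            and dom(saw) == boys)

boys = {'b1', 'b2'}
movies = {'m1', 'm2'}

each_boy_different_movie = {('b1', 'm1'), ('b2', 'm2')}   # wide-scope universal
all_boys_same_movie = {('b1', 'm1'), ('b2', 'm1')}        # wide-scope existential

# Both situations satisfy the underdetermined representation (1.1.c) ...
assert satisfies_1_1_c(each_boy_different_movie, boys, movies)
assert satisfies_1_1_c(all_boys_same_movie, boys, movies)

# ... and only an extra cardinality condition on the range (of the kind
# discussed below) separates them.
assert len(ran(all_boys_same_movie)) == 1
assert len(ran(each_boy_different_movie)) != 1
```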
Classic procedures will in this case immediately generate two readings with definite scope relations, whose notations in predicate logic are given below. (1.2.a) Vx(boy(x) --~ 3y(movie(y) & saw(x,y))) (1.2.b) 3y(movie(y) & Vx(boy(x) --~ saw(x,y))) We can also obtain these representations in our formalism by simply adding new conditions to (1.1.c), which force the disambigiuation of (1.1.c) with regard to quantifier scope. To obtain reading (1.2.b), we must come to know that there is only one movie, which can be formaly writen by I Range saw I =1, where I . I denotes the cardinality function. To obtain reading (1.2.a) from (1.1.c), we do not need any new information, since the two formulas are equivalent. This situation is due to the fact that (1.2.b) implies (1.2.a), which means that (1.2.b) is a special case of (1.2.a). This relation can be easly seen by comparing the resulting formulas, which correspond to readings (1.2.a) and (1.2.b): (1.3.a) 3saw (saw c Boy x Movie & Dom saw=Boy) (1.3.b) 3saw (saw ~ Boy x Movie & Dom saw=Boy & I Range saw I =1) So, we have (1.3.b) => (1.3.a). As I have stated above, however, it is not very useful to disambiguate representation (1.1.c) immediately. It makes more sense to leave representation (1.1.c) unchanged for further processing, since it may be that in the development a new condition may appear which determines the scope. For instance, we can obtain the additional condition in (1.3.b), when sentence (1.1) is followed by a sentence containing a pronoun refering to a movie, as in sentence (1.4). (1.4) It was "Gone with the Wind". Since it refers to a movie, the image of the saw-relation (a subset of the set of movies) can contain only one element. Thus, the resolution of the reference results in an extension of representation (1.1.c) by the condition I Range saw I = 1. Therefore, we get in this case only one reading (1.3.b) as a representation of sentence (1.1), which corresponds to wide scope of the existential quantifier. Thus in the context of (1.4) we have disambiguated sentence (1.1) with regard to quantifier scope without having first generated all possible readings (in our case these were (1.2.a) and (1.2.b)). 244 Let us now assume that sentence (1.5) follows (1.1). (1.5) All of them were made by Walt Disney Studios. Syntactic theories alone are of no help here for finding the correct discourse referent for them in sentence (1.1), since there is no number agreement between them and a movie. The plural noun them, however, refers to all movies the boys have seen. This causes great problems for standard anaphora theories and plural theories, since there is no explicit object of reference to which them could refer (cf. Eschenbach et al. 1990; Link 1986). Thus, the usual procedure would be to construe a complex reference object as the sum of all movies the boys have seen. With my representation, we do not need such procedures because the discourse referents are always available, namely as domains of the relations. In the context of (1.1) and (1.5), the pronoun them (just as it in (1.4)) refers to the image of the relation saw, which additionally serves the purpose of determining the quantifier scope. Here, just as in the preceding cases, the representation (1.1.c) has to be seen as the "starting representation" of (1.1). 
The information that them is a plural noun is represented by the condition I Range saw I > 1, which in turn leads to the following representation: (1.6) 3saw (saw ~ BOy x Movie & Dom saw=Boy & I Range saw I >1) The representation (1.6) is not ambiguous with regard to quantifier scope. The universal quantifier has wide scope over the whole sentence, due to the condition I Range saw I > 1. The reading presented in (1.6) is a further specification of (1.3.a), which at the same time excludes reading (1.3.b). Thus (1.6) contains more information that formula (1.2.a), which is equivalent to (1.3.a). A classical scope determining system can only choose one of the readings (1.2.a) and (1.2.b). However, if it chooses (1.2.a), it will not win any new information, since (1.2.b) is a special case of (1.2.a). So, quantifier scope can not be completely determined by such a system. In order to indicate further advantages of this representation formalism, let us take a look at the following sentence (cf. Link 1986): (1.7) Every boy saw a different movie. Its representation is generated in the same way as that of (1.1), the only difference being that the word different carries additional information about the relation saw. different requires that the relation be injective. Therefore, the formula (1.1.c) is extended by the condition 'saw is 1-1'. The formula (1.8) thus represents the only reading of sentence (1.7), in which scope is completely determined; the universal quantifier has wide scope. (1.8) 3saw (saw ~ Boy x Movie & Dom saw=Boy & saw is 1-1) 2. SCOPE-AMBIGUOUS REPRESENTATION FOR SENTENCES WITH NUMERIC QUANTIFIERS So far, I have not stated exactly how the representation of sentence (1.1) was generated. In order to do so, let us take an example sentence with numeric quantifiers: (2.1) Two examiners marked six scripts. It is certainly not a new observation that this sentence has many interpretations with regard to quantifier scope and distributivity, which can be summarized to a few main readings. However, their exact number is controversial. While Kempson and Cormack 245 (1981) assign four readings to this sentence (see also Lakoff 1972), Davies (1989) assigns eight readings to it. I quote here the readings from (Kempson/Cormack 1981): Uniformising: Replace "(Vx~ Xn)(3Y)" by "(3Y)(Vx~ Xn)" 10 There were two examiners, and each of them marked six scripts (subject noun phrase with wide scope). This interpretation could be true in a situation with two examiners and 12 scripts. 20 There were six scripts, and each of these was marked by two examiners (object noun phrase with wide scope). This interpretation could be true in a situation with twelve examiners and six scripts. 30 The incomplete group interpretation: Two examiners as a group marked a group of six scripts between them. 40 The complete group interpretation: Two examiners each marked the same set of six scripts. Kempson and Cormack represent these readings with the help of quantifiers over sets in the following way: 10 (3X2)(Vx~ X2)(3S6)(Vs~ S6)Mxs 20 (3S6)(Vs~ S6)(3X2)(Vx~ X2)Mxs 30 (3X2)(3S6)(Vx~ X2)(Vs~ S6)Mxs 40 (3X2)(3S6)(Vx~ X2)(3s~ S6)Mxs & (Vs~ $6)(3x~ X2)Mxs Here, X 2 is a sorted variable which denotes a two-element set of examiners, and S 6 is a sorted variable that denotes a six-element set of scripts. Kempson and Cormack derive these readings from an initial formula in the conventional way by changing the order and distributivity of quantifiers. 
This fact is discernible from their derivational rules and the following quotation: Generalising: Replace "(3x~ Xn)" by "(Vx~ Xn)" "What we are proposing, then, as an alternative to the conventional ambiguity account is that all sentences of a form corresponding to (42) [here: 2.1] have a single logical form, which is then subject to the procedure of generalising and uniformising to yield the various interpretations of the sentence in use." (Kempson/Cormack (1981), p. 273) Only in reading 40 the relation between examiners and scripts is completely characterized. For the other formulas there are several possible assignments between examiners and scripts which make these formulas valid. At this point I want to make an important observation, namely that these four readings are not totally independent of one another. I am, however, not concerned with logical implications between these readings alone, but rather with the fact that there is a piece of information which is contained in all of these readings and which does not necessitate a determinated quantifier scope. This is the information which - cognitively speaking - can be extracted from the sentence by a listener without determining the quantifier scope. The difficulties which people have when they are forced to disambiguate a sentence containing numeric quantifiers such as (2.1) without a specific context point to the fact that only such a scopeless representation is assigned to the sentence in the first place. On the basis of this representation one can then, within a given context, derive a representation with a definite scope. We can describe the scopeless piece of information of sentence (2.1), which all readings have in common, as follows. We know that we are dealing with a marking- 246 relation between examiners and scripts, and that we are always dealing with two examiners or with six scripts. In the formalism described in this paper this piece of information is represented as: (2.2) 3mark ( mark c Examiner x Script & (IDommarkl=2 v IRangemarkl--6)) It may be that this piece of information is sufficient in order to understand sentence (2.1). If there is no scope-determining information in the given context, people can understand the sentence just as well. If, for example, we hear the following utterance, (2.3) In preparation for our workshop, two examiners corrected six scripts. it may be without any relevance what the relation between examiners and scripts is exactly like. The only important thing may be that the examiners corrected the scripts and that we have an idea about the number of examiners and the number of scripts. Therefore, we have assigned an under- determined scope-ambiguous representation (2.2) to sentence (2.1), which constitutes the maximum scopeless content of information of this sentence. The lower line of (2.2) represents a scope-neutral part of the information which is contained in the meaning of the quantifiers two examiners and six scripts. This fact indicates that the meaning of a quantifier has to be structured internally, since a quantifier contains scope-neutral as well as scope- determining information. Distributivity is an example of scope-determining information. Then what happens in a context which contains scope-determining information? This context just provides restrictions on the domains of the relation. These restrictions in turn contribute to scope determination. 
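The same kind of encoding illustrates the scope-neutral representation (2.2) and how a contextual restriction on the domains eliminates one disjunct; the twelve-scripts case that the text develops immediately below is traced here in code. As before, this is our own illustrative encoding with our own names, not the paper's second-order formalism.

```python
# Sketch (our encoding): scope-neutral condition (2.2) for
# "Two examiners marked six scripts".

def dom(rel):
    return {x for x, _ in rel}

def ran(rel):
    return {y for _, y in rel}

def satisfies_2_2(mark, examiners, scripts):
    """mark is a non-empty subset of Examiner x Script,
       and (|Dom mark| = 2 or |Range mark| = 6)."""
    return (bool(mark)
            and mark <= {(e, s) for e in examiners for s in scripts}
            and (len(dom(mark)) == 2 or len(ran(mark)) == 6))

examiners = {'e1', 'e2'}
scripts = {f's{i}' for i in range(1, 13)}        # context: twelve scripts in all

# Each examiner marked six different scripts (distributive, subject wide scope).
mark = ({('e1', f's{i}') for i in range(1, 7)} |
        {('e2', f's{i}') for i in range(7, 13)})

assert satisfies_2_2(mark, examiners, scripts)
# With twelve scripts marked, |Range mark| = 6 is excluded, so the surviving
# disjunct |Dom mark| = 2 determines the reading.
assert len(ran(mark)) == 12 and len(dom(mark)) == 2
```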
We may, for instance, get to know in a given context that there were twelve scripts in all, which excludes the condition I Range mark I =6 in the disjunction of (2.2). We then know for certain that there were two examiners and that each of them marked six different scripts. Consequently, the quantifier two examiners acquires wide scope, and we are dealing with a distributive reading. Thus, in this context we have completely disambiguated sentence (2.1) with regard to quantifier scope; and that simply on the basis of the scopeless, incomplete representation (2.2). On the other hand, standard procedures (the most important were listed at the beginning) first have to generate all representations of this sentence by considering all combinatorically possible scopes together with distributive and collective readings. 3. CONCLUDING REMARKS A cognitively adequate method for dealing with sentences that are ambiguous with regard to quantifier scope has been described in this paper. An underdetermined scope-ambiguous representation is assigned to a scope-ambiguous sentence and then extended by additional conditions from context and world knowledge, which further specify the meaning of the sentence. Scope determination in this procedure can be seen as a mere by- product. The quantifier scope is completely determined when the representation which was generated in this way corresponds to an interpretation with a fixed scope. Of course, this only works if there is scope-determining information; if not, one continues to work with the scope-ambiguous representation. I use the language of second-order predicate logic here, but not the whole second- order logic, since I need deduction rules for scope derivation, but not deduction rules for second-order predicate logic (which cannot be completely stated). One could even use the formalism for scope determination alone and then translate the obtained readings into a first-order formalism. However, the formalism lends itself very easily to 247 representation and processing of the derived semantic knowledge as well. ACKNOWLEDGMENTS I would like to thank Christopher Habel, Manfred Pinkal and Geoff Simmons. BIBLIOGRAPHY Barwise, Jon / Cooper, Robin (1981): Generalized Quantifiers and Natural Language. Linguistics and Philosophy 4, 159-219. Cooper, Robin (1983): Quantification and Semantic Theory. D. Reidel, Dordrecht: Holland. Davies, Martin (1989)~ "Two examiners marked six scripts." Interpretations of Numeric- ally Quantified Sentences. Linguistics and Philosophy 12, 293-323. Eschenbach, Carola / Habel, Christopher / Herweg, Michael / Rehk/imper, Klaus (1989): Remarks on plural anaphora. Proceedings of the EACL in Manchester, England. Eschenbach, Carola / Habel, Christopher / Herweg, Michael / Rehk/imper, Klaus (1990): Rekonstruktion fiir plurale Diskursanaphern. In S. Felix at al. (eds.): Sprache und Wissen. West- deutscher Verlag, Opladen. Habel, Christopher (1986): Prinzipien der Referentialit/it. Imformatik Fach- berichte 122. Springer-Verlag, Berlin. Habel, Christopher (1986a): Plurals, Cardinalities, and Structures of Deter- mination. Proceedings of COLING-86. Hobbs, Jerry R. / Shieber, Stuart M. (1987): An Algorithm for Generating Quantifier Scopings. Computational Linguistics, Volume 13, Numbers 1-2. Kadmon, Nirit (1987): Asymmetric Quantification. In Groenendijk, J. / Stokhof, M. / Veltman, F. (eds.): Proceedings of the Sixth Amsterdam Colloquium. Keller, William R. 
(1988): Nested Cooper Storage: The Proper Treatment of Quantification in Ordinary Noun Phrases. In U. Reyle and C. Rohrer (eds.), Natural Language Parsing and Linguistic Theories, 432-447, D. Reidel, Dordrecht. Kempson, Ruth M. / Cormack, Annabel (1981): Ambiguity and Quantification. Linguistics and Philosophy 4, 259-309. Lakoff, George (1972): Linguistics and Natural Logic. In Harman, G. and Davidson, D. (eds.): Semantics of Natural Language. Reidel, 545-665. Latecki, Longin (1991): An Indexing Technique for Implementing Command Relations. Proceedings of the EACL in Berlin. Link, Godehard (1983): The logical analysis of plurals and mass terms: A lattice- theoretical approach. In Baeuerle, R. et al. (eds.), Meaning, Use, and Interpretation of Language. de Gruyter, Berlin, 302-323. Link, Godehard (1986): Generalized Quantifiers and Plurals. In G/irdenfors, P. (ed.): Generalized Quantifiers: Studies in Lingusitics and Philosophy. Dordrrecht, The Netherlands, Reidel. May, Robert (1985): Logical form. Its Structure and Derivation. Linguistic Inquiry Monographs. The MIT Press: Cambridge Massachusetts. 248
Estimating Upper and Lower Bounds on the Performance of Word-Sense Disambiguation Programs William Gale Kenneth Ward Church David Yarowsky AT&T Bell Laboratories 600 Mountain Ave. Murray Hill, NJ 07974 [email protected] Abstract We have recently reported on two new word-sense disambiguation systems, one trained on bilingual material (the Canadian Hansards) and the other trained on monolingual material (Roget's Thesaurus and Grolier's Encyclopedia). After using both the monolingual and bilingual classifiers for a few months, we have convinced ourselves that the performance is remarkably good. Nevertheless, we would really like to be able to make a stronger statement, and therefore, we decided to try to develop some more objective evaluation measures. Although there has been a fair amount of literature on sense-disambiguation, the literature does not offer much guidance in how we might establish the success or failure of a proposed solution such as the two systems mentioned in the previous paragraph. Many papers avoid quantitative evaluations altogether, because it is so difficult to come up with credible estimates of performance. This paper will attempt to establish upper and lower bounds on the level of performance that can be expected in an evaluation. An estimate of the lower bound of 75% (averaged over ambiguous types) is obtained by measuring the performance produced by a baseline system that ignores context and simply assigns the most likely sense in all cases. An estimate of the upper bound is obtained by assuming that our ability to measure performance is largely limited by our ability obtain reliable judgments from human informants. Not surprisingly, the upper bound is very dependent on the instructions given to the judges. Jorgensen, for example, suspected that lexicographers tend to depend too much on judgments by a single informant and found considerable variation over judgments (only 68% agreement), as she had suspected. In our own experiments, we have set out to find word-sense disambiguation tasks where the judges can agree often enough so that we could show that they were outperforming the baseline system. Under quite different conditions, we have found 96.8% agreement over judges. 1. Introduction: Using Massive Lexicographic Resources Word-sense disambiguation is a long-standing problem in computational linguistics (e.g., Kaplan (1950), Yngve (1955), Bar-I-Iillel (1960), Masterson (1967)), with important implications for a number of practical applications including text-to-speech (TI'S), machine translation (MT), information retrieval (IR), and many others. The recent interest in computational lexicography has fueled a large body of recent work on this 40-year-old problem, e.g., Black (1988), Brown et al. (1991), Choueka and Lusignan (1985), Clear (1989), Dagan et al. (1991), Gale et al. (to appear), Hearst (1991), Lesk (1986), Smadja and McKeown (1990), Walker (1987), Veronis and Ide (1990), Yarowsky (1992), Zemik (1990, 1991). Much of this work offers the prospect that a disambiguation system might be able to input unrestricted text and tag each word with the most likely sense with fairly reasonable accuracy and efficiency, just as part of speech taggers (e.g., Church (1988)) can now input unrestricted text and assign each word with the most likely part of speech with fairly reasonable accuracy and efficiency. The availability of massive lexicographic databases offers a promising route to overcoming the knowledge acquisition bottleneck. 
More than thirty years ago, Bar- I-Iillel (1960) predicted that it would be "futile" to write expert-system-like rules by-hand (as they had been doing at Georgetown at the time) because there would be no way to scale up such rules to cope with unrestricted input. Indeed, it is now well-known that expert-system- like rules can be notoriously difficult to scale up, as Small and Reiger (1982) and many others have observed: "The expert for THROW is currently six pages long.., but it should be 10 times that size." Bar-Hillel was very early in realizing the scope of the problem; he observed that people have a large set of facts at their disposal, and it is not obvious how a computer could ever hope to gain access to this wealth of knowledge. 249 " 'But why not envisage a system which will put this knowledge at the disposal of the translation machine?' Understandable as this reaction is, it is very easy to show its futility. What such a suggestion amounts to, if taken seriously, is the requirement that a translation machine should not only be supplied with a dictionary but also with a universal encyclopedia. This is surely utterly chimerical and hardly deserves any further discussion. Since, however, the idea of a machine with encyclopedic knowledge has popped up also on other occasions, let me add a few words on this topic. The number of facts we human beings know is, in a ceaain very pregnant sense, infinite." (Bar-Hillel, 1960) Ironically, much of the research cited above is taking exactly the approach that Bar-Hillel ridiculed as utterly chimerical and hardly deserving of any further discussion. Back in 1960, it may have been hard to imagine how it would be possible to supply a machine with both a dictionary and an encyclopedia. But much of the recent work cited above goes much further; not only does it supply a machine with a dictionary and an encyclopedia, but many other extensive references works as well, including Roget's Thesaurus and numerous large corpora. Of course, we are using these reference works in a very superficial way; we are certainly not suggesting that the machine should attempt to solve the "AI Complete" problem of "understanding" these reference works. 2. A Brief Summary of Our Previous Work Our own work has made use of many of these lexical resources. In particular, (Gale et al., to appear) achies'ed considerable progress by using well-understood statistical methods and very large datasets of tens of millions of words of parallel English and French text (e.g., the Canadian Hansards). By aligning the text as we have, we were able to collect a large set of examples of polysemous words (e.g., sentence) in each sense (e.g., judicial sentence vs. syntactic sentence), by extracting instances from the corpus that were translated one way or the other (e.g, peine or phrase). These data sets were then analyzed using well-understood Bayesian discrimination methods, which have been used very successfully in many other applications, especially author identification (Mosteller and Wallace, 1964, section 3.1) and information retrieval (IR) (van Rijsbergen, 1979, chapter 6; Salton, 1989, section 10.3), though their application to word-sense disambiguation is novel. In author identification and information retrieval, it is customary to split the discrimination process up into a testing phase and a training phase. 
During the training phase, we are given two (or more) sets of documents and are asked to construct a discriminator which can distinguish between the two (or more) classes of 250 documents. These discriminators are then applied to new documents during the testing phase. In the author identification task, for example, the training set consists of several documents written by each of the two (or more) authors. The resulting discriminator is then tested on documents whose authorship is disputed. In the information retrieval application, the training set consists of a set of one or more relevant documents and a set of zero or more irrelevant documents. The resulting discriminator is then applied to all documents in the library in order to separate the more relevant ones from the less relevant ones. There is an embarrassing wealth of information in the collection of documents that could be used as the basis for discrimination. It is common practice to treat documents as "merely" a bag of words, and to ignore much of the linguistic structure, especially dependencies on word order and correlations between pairs of words. In other words, one assumes that there are two (or more) sources of word probabilities, rel and irrel, in the IR application, and author t and author 2 in the author identification application. During the training phase, we attempt to estimate Pr(wlsource) for all words w in the vocabulary and all sources. Then during the testing phase, we score all documents as follows and select high scoring documents as being relatively likely to have been generated by the source of interest. Pr(wl rel) Information Retreival (IR) w ~ Pr(wl irrel) Pr( w l author l ) w Eoe Pr(wlauthor2) Author Identification In the sense disambiguation application, the 100-word context surrounding instances of a polysemous word (e.g., sentence) are treated very much like a document. 1 Pr( w l sense t ) w in el~Iontext Pr(wlsensez) sense Disambiguation That is, during the testing phase, we are given a new instance of a polysemous word, e.g., sentence, and asked to assign it to one or more senses. We score the words in the 100-word context using the formula given above, and assign the instance to sense t if the score is large. I. It is common to use very small contexts (e.g., 5-words) based on the observation that people seem to be able to disambiguate word- senses based on very little context. We have taken a different approach. Since we have been able to find useful information out to 100 words (and measurable information out to 10,000 words), we feel we might as well make use of the the larger contexts. This task is very difficult for the machine; it needs all the help it can get. The conditional probabilities, Pr(wlsense), are determined during the training phase by counting the number of times that each word in the vocabulary was found near each sense of the polysemous word (and then smoothing these estimates in order to deal with the sparse-data problems). See Gale et al. (to appear) for further details. At first, we thought that the method was completely dependent on the availability of parallel corpora for training. This has been a problem since parallel text remains somewhat difficult to obtain in large quantity, and what little is available is often fairly unbalanced and unrepresentative of general language. Moreover, the assumption that differences in translation correspond to differences in word-sense has always been somewhat suspect. 
Recently, Yarowsky (1992) has found a way to extend our use of the Bayesian techniques by training on Roget's Thesaurus (Chapman, 1977)2 and Grolier's Encyclopedia (1991) instead of the Canadian Hansards, thus circumventing many of the objections to our use of the Hansards. Yarowsky (1992) inputs a 100-word context surrounding a polysemous word and scores each of the 1042 Roget Categories by:

\prod_{w \in context} Pr(w | Roget Category_i)

The program can also be run in a mode where it takes unrestricted text as input and tags each word with its most likely Roget Category. Some results for the word crane are presented below, showing that the program can be used to sort a concordance by sense.

Input                                                           Output
Treadmills attached to cranes were used to lift heavy           TOOLS
for supplying power for cranes, hoists, and lifts               TOOLS
Above this height, a tower crane is often used .SB This         TOOLS
elaborate courtship rituals cranes build a nest of vegetation   ANIMAL
are more closely related to cranes and rails .SB They range     ANIMAL
low trees .PP At least five crane species are in danger of      ANIMAL

After using both the monolingual and bilingual classifiers for a few months, we have convinced ourselves that the performance is remarkably good. Nevertheless, we would really like to be able to make a stronger statement, and therefore, we decided to try to develop some more objective evaluation measures.

2. Note that this edition of Roget's Thesaurus is much more extensive than the 1911 version, though somewhat more difficult to obtain in electronic form.

3. The Literature on Evaluation

Although there has been a fair amount of literature on sense disambiguation, the literature does not offer much guidance in how we might establish the success or failure of a proposed solution such as the two described above. Most papers tend to avoid quantitative evaluations. Lesk (1986), an extremely innovative and commonly cited reference on the subject, provides a short discussion of evaluation, but fails to offer any very satisfying solutions that we might adopt to quantify the performance of our two disambiguation algorithms.3

Perhaps the most common evaluation technique is to select a small sample of words and compare the results of the machine with those of a human judge. This method has been used very effectively by Kelly and Stone (1975), Black (1988), Hearst (1991), and many others. Nevertheless, this technique is not without its problems, perhaps the worst of which is that the sample may not be very representative of the general vocabulary. Zernik (1990, p. 27), for example, reports 70% performance for the word interest, and then acknowledges that this level of performance may not generalize very well to other words.4 Although we agree with Zernik's prediction that interest is not very representative of other words, we suspect that interest is actually more difficult than most other words, not less difficult. Table 1 shows the performance of Yarowsky (1992) on twelve words which have been previously discussed in the literature. Note that interest is at the bottom of the list.

The reader should exercise some caution in interpreting the numbers in Table 1. It is natural to try to use these numbers to predict performance on new words, but the study was not designed for that purpose. The test words were selected from the literature in order to make comparisons over systems.
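For completeness, the Roget-category variant described at the beginning of this section can be sketched the same way: all categories compete, and the winner is used to tag or sort concordance lines. The two-category model and the small default probability below are invented stand-ins, not Yarowsky's trained probabilities.

import math
from collections import defaultdict

def best_category(context, category_models, default=1e-6):
    # Score each Roget category by the product over the context of
    # Pr(w | category), computed as a sum of logs, and return the argmax.
    best, best_score = None, float("-inf")
    for category, probs in category_models.items():
        s = sum(math.log(probs.get(w, default)) for w in context)
        if s > best_score:
            best, best_score = category, s
    return best

def sort_concordance(lines, category_models):
    # Group concordance lines (token lists) under their predicted category,
    # mimicking the crane example above.
    groups = defaultdict(list)
    for tokens in lines:
        groups[best_category(tokens, category_models)].append(" ".join(tokens))
    return dict(groups)

# Invented stand-in probabilities for two of the 1042 categories.
models = {
    "TOOLS":  {"power": 0.05, "lift": 0.05, "tower": 0.04, "hoists": 0.04},
    "ANIMAL": {"nest": 0.05, "species": 0.05, "rails": 0.04, "courtship": 0.04},
}
lines = [["a", "tower", "crane", "is", "often", "used"],
         ["five", "crane", "species", "are", "in", "danger"]]
print(sort_concordance(lines, models))

To return to the sampling caveat raised above: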
If the study had been intended to support predictions on new words, then the study should have used a random sample of such words, rather than a sample of words from the literature. 3. "What is the current performance of this program? Some very brief experimentation with my program has yielded accuracies of 50-70% on short samples of Pride and Prejudice and an Associated Press news story. Considerably more work is needed both to improve the program and to do more thorough evaluation... There is too much subjectivity in these measurements." (Lesk, 1986, p. 6) 4. "For all 4 senses of INTEREST, both recall and precision are over 70%... However, not for all words are the obtained results that positive... The fact is that almost any English word possesses multiple senses. (Zernik, 1990, p. 27) 251 Table 1: Comparison over Systems Word Yarowsky (1992) Previous Systems bow 91% < 67% (Clear, 1989) bass 99% 100% (Hearst, 1991) galley 99% 50-70% (Lesk, 1986) mole 99% N/A (Hirst, 1987) sentence 98% 90% (Gale et al.) slug 97% N/A (Hirst, 1987) star 96% N/A (Hirst, 1987) duty 96% 96% (Gale et al.) issue 94% < 70% (Zernik, 1990) taste 93% < 65% (Clear, 1989) cone 77% 50-70% (Lesk, 1986) interest 72% 72% (Black, 1988); 70% (Zernik, 1990) AVERAGE 92% N/A In addition to the sampling questions, one feels uncomfortable about comparing results across experiments, since there are many potentially important differences including different corpora, different words, different judges, differences in treatment of precision and recall, and differences in the use of tools such as parsers and part of speech taggers, etc. In short, there seem to be a number of serious questions regarding the commonly used technique of reporting percent correct on a few words chosen by hand. Apparently, the literature on evaluation of word-sense disambiguation algorithms fails to offer a clear role model that we might follow in order to quantify the performance of our disambiguation algorithms. 4. What is the State-of-the-Art, and How Good Does It Need To Be? Moreover, there doesn't seem to be a very clear sense of what is possible. Is interest a relatively easy word or is it a relatively hard word? Zernik says it is relatively easy; we say it is relatively hard. 5 Should we expect the next word to be easier than interest or harder than interest? One might ask if 70% is good or bad. In fact, both Black (1988) and Yarowsky (1992) report 72% performance on this very same word. Although it is dangerous to compare such results since there are many potentially important differences (e.g., corpora, judges, 5. As evidence that interest is relatively difficult, we note that both the Oxford Advanced Learner's Dictionary (OALD) (Crowie et al., 1989, p. 654) and COBUILD (Sinclair et al., 1987), for example, devote more than a full column to this word, indicating that it is an extremely complex word, at least by their standards. etc.), it appears that Zernik's 70% figure is fairly representative of the state of the art. 6 Should we be happy with 70% performance? In fact, 70% really isn't very good. Recall that Bar-Hillel (1960, p. 159) abandoned the machine translation field when he couldn't see how a machine could possibly do a decent job in translating text if it couldn't do better than this in disambiguating word senses. Bar-Hillel's real objection was an empirical one. Using his numbers, 7 it appears that programs, at the time, could disambiguate only about 75% of the words in a sentence (e.g., 15 out of 20). 
If interest is a relatively easy word, as Zernik (1990) suggests, then it would seem that Bar-Hillel's argument remains as true today as it was in 1960, and we ought to follow his lead and find something more productive to do with our time. On the other hand, if we are correct and interest is a relatively difficult word, then it is possible that we have made some progress over the past thirty years... 5. Upper and Lower Bounds 5.1 Lower Bounds We could be in a better position to address the question of the relative difficulty of interest if we could establish a rough estimate of the upper and lower bounds on the level of performance that can be expected. We will estimate the lower bound by evaluating the performance of a straw man system, which ignores context and simply assigns the most likely sense in all cases. One might hope that reasonable systems should generally 7. In fact, Zemik's 70% figure is probably significantly inferior to the 72% reported by Black and Yarowsky, because Zernik reports precision and recall separately, whereas the others report a single figure of merit which combines both Type I (false rejection) and Type II (false acceptance) errors by reporting precision at 100% recall. Gale et al. show that error rates for 70% recall were half of those for 100% recall, on their test sample. "Let me state rather dogmatically that there exists at this moment no method of reducing the polysemy of the, say, twenty words of an average Russian sentence in a scientific article below a remainder of, I would estimate, at least five or six words with multiple English renderings, which would not seriously endanger the quality of the machine output. Many tend to believe that by reducing the number of initially possible renderings of a twenty word Russian sentence from a few tens of thousands (which is the approximate number resulting from the assumption that each of the twenty Russian words has two renderings on the average, while seven or eight of them have only one rendering) to some eighty (which would be the number of renderings on the assumption that sixteen words are uniquely rendered and four have three renderings apiece, forgetting now about all the other aspects such as change of word order, etc.) the main bulk of this kind of work has been achieved, the remainder requiring only some slight additional effort." (Bar-Hillel, 1960, p. 163) 252 outperform this baseline system, though not all such systems actually do. In fact, Yarowsky (1992) falls below the baseline for one of the twelve words (issue), although perhaps, we needn't be too concerned about this one deviation. 8 There are, of course, a number of problems with this estimate of the baseline. First, the baseline system is not operational, at least as we have defined it. Ideally, the baseline system ought to try to estimate the most likely sense for each word in the vocabulary and then assign that sense to each instance of the word in the test set. Unfortunately, since it isn't clear just how this estimation should be accomplished, we decided to "cheat" and let the baseline system peek at the test set and "estimate" the most likely sense for each word as the more frequent sense in the test set. Consequently, the performance of the baseline cannot fall below chance (100/k% for a particular word with k senses). 9 In addition, the baseline system assumes that Type I (false rejection) errors are just as bad as Type II (false acceptance) errors. 
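A minimal sketch of this baseline, with the same "cheat", follows: the most frequent sense is read off the hand-tagged test instances themselves, so the resulting accuracy can never fall below chance (100/k% for a word with k senses). The toy data are invented.

from collections import Counter

def baseline_accuracy(test_senses):
    # test_senses: the hand-assigned sense of each test instance of one word.
    # The baseline ignores context and assigns every instance the single most
    # frequent sense in this list, so its accuracy is the majority proportion.
    counts = Counter(test_senses)
    _, majority_count = counts.most_common(1)[0]
    return majority_count / len(test_senses)

# A word with two senses, 7 of 10 test instances in the majority sense: 70%.
print(baseline_accuracy(["s1"] * 7 + ["s2"] * 3))
# A word with 4 equally frequent senses sits exactly at the 100/k = 25% floor.
print(baseline_accuracy(["a", "b", "c", "d"] * 5))

One caveat about this baseline concerns its equal weighting of the two error types.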
If one desires extremely high recall and is willing to sacrifice precision in order to obtain this level of recall, then it might be sensible to tune a system to produce behavior which might appear to fall below the baseline. We have run into such situations when we have attempted to help lexicographers find extremely unusual events. In such a case, a lexicographer might be quite happy receiving a long list of potential candidates, only a small fraction of which are actually the case of interest. One can come up with quite a number of other scenarios where the baseline performance could be somewhat misleading, especially when there is an unusual trade-off between the cost of a Type I error and the cost of a Type II error. Nevertheless, the proposed baseline does seem to provide a usable rough estimate of the lower bound on performance. Table 2 shows the baseline performance for each of the twelve words in Table 1. Note that performance is generally above the baseline as we would 8. Many of the systems mentioned in Table 2 including Yarowsky (1992) do not currently take advantage of the prior probabilities of the senses, so they would be at a disadvantage relative to the baseline if one of the senses had a very high prior, as is the case for the test word issue. 9. In addition, the baseline doesn't deal as well as it could with skewed distributions. One could almost certainly improve the model of the baseline by making use of a notion like entropy that could deal more effectively with skewed distributions. Nevertheless, we will stick with our simpler notion of the baseline for expository convenience. hope. Table 2: The Baseline Word Baseline Yarowsky (1992) issue 96% 94% duty 87% 96% galley 83% 99% star 83% 96% taste 74% 93% bass 70% 99% slug 62% 97% sentence 62% 98% interest 60% 72% mole 59% 99% cone 51% 77% bow 48% 91% AVERAGE 70% 92% As mentioned previously, the test words in Tables 1 and 2 were selected from the literature on polysemy, and therefore, tend to focus on the more difficult cases. In another experiment, we selected a random sample of 97 words; 67 of them were unambiguous and therefore had a baseline performance of 100%) 0 The remaining thirty words are listed along with the number of senses and baseline performance: virus (2, 98%), device (3, 97%), direction (2, 96%), reader (2, 96%), core (3, 94%), hull (2, 94%), right (5, 94%), proposition (2, 89%), deposit (2, 88%), hour (4, 87%), path (2, 86%), view (3, 86%), pyramid (3, 82%), antenna (2, 81%), trough (3, 77%), tyranny (2, 75%), figure (6, 73%), institution (4, 71%), crown (4, 64%), drum (2, 63%), pipe (4, 60%), processing (2, 59%), coverage (2, 58%), execution (2, 57%), rain (2, 57%), interior (4, 56%), campaign (2, 51%), output (2, 51%), gin (3, 50%), drive (3, 49%). In studying these 97 words, we found that the average baseline performance is much higher than we might have guessed (93% averaged over tokens, 92% averaged over types). In particular, note that this baseline is well above the 75% figure that we associated with Bar-Hillel above. Of course, the large number of unambiguous words contributes greatly to the baseline. If we exclude the unambiguous words, then the average baseline 10. 
The 67 unambiguous words were: acid, annexation, benzene, berry, capacity, cereal clock, coke, colon, commander, consort, contract, cruise, cultivation, delegate, designation, dialogue, disaster, equation, esophagus, fact, fear;, fertility, flesh, fox, gold, interface, interruption, intrigue, journey, knife, label landscape, laurel Ib, liberty, lily, locomotion, lynx, marine, memorial menstruation, miracle, monasticism, mountain, nitrate, orthodoxy, pest, planning, possibility, pottery, projector, regiment, relaxation, reunification, shore, sodium, specialty, stretch, summer, testing, tungsten, universe, variant, vigor, wire, worship. 253 performance falls to 81% averaged over tokens and 75% averaged over types. 5.2 Upper Bounds We will attempt to estimate an upper bound on performance by estimating the ability for human judges to agree with one another (or themselves). We will find, not surprisingly, that the estimate varies widely depending on a number of factors, especially the definition of the task. Jorgensen (1990) has collected some interesting data that may be relevant for estimating the agreement among judges. As part of her dissertation under George Miller at Princeton, she was interested in assessing "the extent of psychologically real polysemy in the mental lexicon for nouns." Her experiment was designed to study one of the more commonly employed methods in lexicography for writing dictionary definitions, namely the use of citation indexes. She was concerned that lexicographers and computational linguists have tended to depend too much on the intuitions of a single informant. Not surprisingly, she found considerable variation across judgements, just as she had suspected. This finding could have serious implications for evaluation. How do we measure performance if we can't depend on the judges? Jorgensen selected twelve high frequency nouns at random from the Brown Corpus, six were highly polysemous (head, life, world, way, side, hand) and six were less so (fact, group, night, development, something, war). Sentences containing each of these words were drawn from the Brown Corpus and typed on filing cards. Nine subjects where then asked to cluster a packet of these filing cards by sense. A week or two later, the same nine subjects were asked to repeat the experiment, but this time they were given access to the dictionary definitions. Jorgensen reported performance in terms of the "Agreement-Disagreement" (A-D) ratio (Shipstone, 1960) for each subject and each of the twelve test words. We have found it convenient to transform the A-D ratio into a quantity which we call the percent agreement, the number of observed agreements over the total number of possible agreements. The grand mean percent agreement over all subjects and words is only 68%. In other words, at least under these conditions, there is considerable variation across judgements, perhaps so much so that it would be hard to show that a proposed system was outperforming the baseline system (75%, averaged over ambiguous types). Moreover, if we accept Bar-Hillel's argument that 75% is not-good- enough, then it would be hard to show that a system was doing well-enough. 254 6. A Discrimination Experiment For evaluation purposes, it is important to find a task that is somewhat easier for the judges. If the task is too hard (as Jorgensen's classification task may he), then there will be almost no room between the limits of the measurement and the baseline. 
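The percent-agreement figure that these comparisons rest on can be made concrete. The paper defines it only informally (observed agreements over possible agreements), so the pairwise reading below is our assumption for illustration, not necessarily Shipstone's exact A-D computation: for every pair of citation cards, two judges agree if they either group the pair together or keep it apart.

from itertools import combinations

def percent_agreement(clustering_a, clustering_b):
    # clustering_a, clustering_b: dicts mapping each citation card to the
    # cluster (sense pile) a judge put it in.  For every pair of cards the two
    # judges agree if both group the pair together or both keep it apart.
    cards = sorted(clustering_a)
    agreements = possible = 0
    for x, y in combinations(cards, 2):
        same_a = clustering_a[x] == clustering_a[y]
        same_b = clustering_b[x] == clustering_b[y]
        agreements += (same_a == same_b)
        possible += 1
    return 100.0 * agreements / possible

# Toy example: two judges sorting four citations of "head" into sense piles.
judge1 = {"c1": "body-part", "c2": "body-part", "c3": "leader", "c4": "leader"}
judge2 = {"c1": "body-part", "c2": "leader",    "c3": "leader", "c4": "leader"}
print(percent_agreement(judge1, judge2))   # 3 of 6 pairs agree: 50.0

To return to the point about experimental design: a task that is too hard leaves almost no room between the limit of the measurement and the baseline.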
In other words, there won't be enough dynamic range to measure differences between better systems and worse systems. In contrast, if we focus on easier tasks, then we might have enough dynamic range to show some interesting differences. Therefore, unlike Jorgensen who was interested in highlighting differences among judgments, we are much more interested in highlighting agreements. Fortunately, we have found in (Gale et al., 1992) that the agreement rate can be very high (96.8%), which is well above the baseline, under very different experimental conditions. Of course, it is a fairly major step to redefine the problem from a classification task to a discrimination one, as we are proposing. One might have preferred not to do so, but we simply don't know how one could establish enough dynamic range in that case to show any interesting differences. It has been our experience that it is very hard to design an experiment of any kind which will produce the desired agreement among judges. We are very happy with the 96.8% agreement that we were able to show, even if it is limited to a much easier task than the one that Jorgensen was interested in. We originally designed the experiment in Gale et al. (1992) to test the hypothesis that multiple uses of a polysemous word tend to have the same sense within a common discourse. A simple (but non-blind) pilot experiment provided some suggestive evidence confirming the hypothesis. A random sample of 108 nouns (which included the 97 words previously mentioned) was extracted for further study. A panel of three judges (the three authors of this paper) were given 100 sets of concordance lines containing one of the test words selected from a single article in Grolier's. The judges were asked to indicate if the set of concordance lines used the same sense or not. Only 6 of 300 article- judgements were judged to contain multiple senses of one of the test words. All three judges were convinced after grading 100 articles that there was considerable validity to the hypothesis. With this promising preliminary verification, the following blind test was devised. Five subjects (the three authors and two of their colleagues) were given a questionnaire starting with a set of definitions selected from OALD (Crowie et al., 1989) and followed by a number of pairs of concordance lines, randomly selected from Grolier's Encyclopedia (1991). The subjects were asked to decide for each pair, whether the two concordance lines corresponded to the same sense or not. antenna 1. jointed organ found in pairs on the heads of insects and crustaceans, used for feeling, etc. ---> the illus at insect. 2. radio or TV aerial. lack eyes, legs, wings, antennae, and distinct mouthparts and The Brachycera have short antennae and include the more evolved silk moths passes over the antennae .SB Only males that detect relatively simple form of antenna is the dipole, or doublet The questionnaire contained a total of 82 pairs of concordance lines for 9 polysemous words: antenna, campaign, deposit, drum, hull, interior, knife, landscape, and marine. The results of the experiment are shown below in Table 3. With the exception of judge 2, all of the judges agreed with the majority opinion in all but one or two of the 82 cases. The agreement rate was 96.8%, averaged over all judges, or 99.1%, averaged over the four best judges. In either case, the agreement rate is well above the previously described ceiling. 
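Table 3 below reports, for each judge, how many of the 82 same-or-different judgements matched the majority opinion. A minimal sketch of that tabulation follows; the votes are made up, since the real questionnaire data are not reproduced here.

def agreement_with_majority(votes):
    # votes: one list per pair of concordance lines, containing each judge's
    # boolean answer to "same sense?".  Returns, per judge, the number and
    # percentage of pairs on which that judge matched the majority opinion.
    n_judges = len(votes[0])
    matches = [0] * n_judges
    for pair_votes in votes:
        majority = sum(pair_votes) > len(pair_votes) / 2
        for j, v in enumerate(pair_votes):
            matches[j] += (v == majority)
    n_pairs = len(votes)
    return [(m, 100.0 * m / n_pairs) for m in matches]

# Toy example: four pairs judged by three judges.
votes = [
    [True, True, True],
    [True, False, True],
    [False, False, False],
    [True, True, True],
]
for judge, (n, pct) in enumerate(agreement_with_majority(votes), start=1):
    print("Judge", judge, ":", n, "of", len(votes), "pairs,", round(pct, 1), "%")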
Table 3

Judge                        n     %
1                            82    100.0%
2                            72    87.8%
3                            81    98.7%
4                            82    100.0%
5                            80    97.6%
Average                            96.8%
Average (without Judge 2)          99.1%

Incidentally, the experiment did, in fact, confirm the hypothesis that multiple uses of a polysemous word will generally take on the same sense within a discourse. Of the 82 judgments, 54 were selected from the same discourse and were judged to have the same sense by the majority in 96.9% of the cases. (The remaining 28 of the 82 judgments were used as a control to force the judges to say that some pairs were different.) Note that the tendency for multiple uses of a polysemous word to have the same sense is extremely strong; 96.9% is much greater than the baseline, and indeed, it is considerably above the level of performance that might be expected from state-of-the-art word-sense disambiguation systems. Since it is so reliable and so easy to compute, it might be used as a quick-and-dirty measure for testing such systems. Unfortunately, we also need a complementary measure that would penalize a system like the baseline system that simply assigned all instances of a polysemous word to the same sense. At present, we have yet to identify a quick-and-dirty measure that accomplishes this control, and consequently, we are forced to continue to depend on the relatively expensive panel of judges. But, at least, we have been able to establish that it is possible to design a discrimination experiment such that the panel of judges can agree with themselves often enough to be useful. In addition, we have established that the discourse constraint on polysemy is extremely strong, much stronger than our ability to tag word-senses automatically. Consequently, it ought to be possible to use this constraint in our next word-sense tagging algorithm to produce even better performance.

7. Conclusions

We began this discussion with a review of our recent work on word-sense disambiguation, which extends the approach of using massive lexicographic resources (e.g., parallel corpora, dictionaries, thesauruses, and encyclopedias) in order to attack the knowledge-acquisition bottleneck that Bar-Hillel identified over thirty years ago. After using both the monolingual and bilingual classifiers for a few months, we have convinced ourselves that the performance is remarkably good. Nevertheless, we would really like to be able to make a stronger statement, and therefore, we decided to try to develop some more objective evaluation measures.

A survey of the literature on evaluation failed to identify an attractive role model. In addition, we found it particularly difficult to obtain a clear estimate of the state-of-the-art. In order to address this state of affairs, we decided to try to establish upper and lower bounds on the level of performance that we could expect to obtain. We estimated the lower bound by positing a simple baseline system which ignored context and simply assigned the most likely sense in all cases. Hopefully, most reasonable systems would outperform this system. The upper bound was approximated by trying to estimate the limit of our ability to measure performance. We assumed that this limit was largely dominated by the ability for the human judges to agree with one another. The estimate depends very much, not surprisingly, on the particular experimental design. Jorgensen, who was interested in highlighting differences among informants, found a very low estimate (68%), well below the baseline (75%), and also well below the level that Bar-Hillel asserted as not-good-enough.
In our own work, we have attempted to highlight agreements, so that there would more dynamic range between the baseline and the limit of our ability to measure performance. In so doing, we were able to obtain a much more usable estimate of (96.8%) by redefining the task from a classification task tO a discrimination task. In addition, we also made use of the constraint that multiple instances of a polysemous word in the same discourse have a very strong tendency to take on the same sense. This constraint will probably prove useful for improving the performance of future word-sense disambiguation algorithms. Similar attempts to establish upper and lower bounds on performance have been made in other areas of computational linguistics, specifically part of speech tagging. For that application, it is generally accepted that the baseline part-of-speech tagging performance is about 90% (as estimated by a similar baseline system that ignores context and simply assigns the most likely part of speech to all instances of a word) and that the upper bound (imposed by the limit for judges to agree with one another) is about 95%. Incidentally, most part of speech algorithms are currently performing at or near the limit of our ability to measure performance, indicating that there may be room for refining the experimental conditions along similar lines to what we have done here, in order to improve the dynamic range of the evaluation. References Bar-Hillel (1960), "Automatic Translation of Languages," in Advances in Computers, Donald Booth and R. E. Meagher, eds., Academic, NY. Black, Ezra (1988), "An Experiment in Computational Discrimination of English Word Senses," IBM Journal of Research and Development, v 32, pp 185-194. Brown, Peter, Stephen Della Pietra, Vincent Delia Pietra, and Robert Mercer (1991), "Word Sense Disambiguation using Statistical Methods," ACL, pp. 264-270. Chapman, Robert (1977). Roger's International Thesaurus (Fourth Edition), Harper and Row, NY. Choueka, Yaacov, and Serge Lusignan (1985), "Disambiguation by Short Contexts," Computers and the Humanities, v 19. pp. 147-158. Church, Kenneth (1988), "A Stochastic Parts Program an Noun Phrase Parser for Unrestricted Text," Applied ACL Conference, Austin, Texas. Clear, Jeremy (1989). "An Experiment in Automatic Word Sense Identification," Internal Document, Oxford University Press, Oxford. Crowie, Anthony et al. (eds.) (1989), "Oxford Advanced Learner's Dictionary," Fourth Edition, Oxford University Press. Dagan, Ido, Alon Itai, and Ulrike Schwall (1991), "Two Languages are more Informative than One," ACL, pp. 130-137. Gale, William, Kenneth Church, and David Yarowsky (to appear) "A Method for Disambiguating Word Senses in a Large Corpus," Computers and Humanities. Gale, William, Kenneth Church, and David Yarowsky (1992) "One Sense Per Discourse," Darpa Speech and Natural Language Workshop. Gove, Philip et al. (eds.) (1975) "Webster's Seventh New Collegiate Dictionary," G. & C. Merriam Company, Springfield, MA. Grolier's Inc. ( 1991 ) New Grolier's Electronic Encyclopedia. Hanks, Patrick (ed.) (1979), Collins English Dictionary, Collins, London and Glasgow. 256 Hearst, Marti (1991), "Noun Homograph Disambiguation Using Local Context in Large Text Corpora," Using Corpora, University of Waterloo, Waterloo, Ontario. Hirst, Graerae. (1987), Semantic Interpretation and the Resolution of Ambiguity, Cambridge University Press, Cambridge. 
Jorgensen, Julia (1990) "The Psychological Reality of Word Senses," Journal of Psychalinguistic Research, v. 19, pp 167-190. Kaplan, Abraham (1950), "An Experimental Study of Ambiguity in Context," cited in Mechanical Translation, v. I, nos. I-3. Kelly, Edward, and Phillip Stone (1975), Computer Recognition of English Word Senses, North-Holland, Amsterdam. Lesk, Michael (1986), "Automatic Sense Disambiguation: How to tell a Pine Cone from an Ice Cream Cone," Proceeding of the 1986 SIGDOC Conference, ACM, NY. Masterson, Margaret (1967), "Mechanical Pidgin Translation," in Machine Translation, Donald Booth, ed., Wiley, 1967. Mosteller, Fredrick, and David Wallace (1964) Inference and Disputed Authorship: The Federalist, Addison-Wesley, Reading, Massachusetts. Procter, P., R. Ilson, J. Ayto, et al. (1978), Longman Dictionary of Contemporary English, Longman, Harlow and London. Salton, G. (1989) Automatic Text Processing, Addison-Wesley. Shipstone, E. (1960) "Some Variables Affecting Pattern Conception," Psychological Monographs, General and Applied, v. 74, pp. 1-4 I. Sinclair, I., Hanks, P., Fox, G., Moon, R., Stock, P. et al. (eds.) (1987) Collins Cobuild English Language Dictionary, Collins, London and Glasgow. Smadja, F. and K. McKoown (1990), "Automatically Extracting and Representing Collocations for Language Generation," ACL, pp. 252- 259. Small, S. and C. Rieger (1982), "Parsing and Comprehending with Word Experts (A Theory and its Realization)," in Strategies for Natural Language Processing, W. Lehnert and M. Ringle, eds., Lawrence Erlbanm Associates, Hillsdale, NJ. van Rijsbergen, C. (1979) Information Retrieval, Second Editional, Butterworths, London. Veronis, Jean and Nancy Ide (1990), "Word Sense Disambiguation with Very Large Neural Networks Extracted from Machine Readable Dictionaries," in Proceedings COLING-90, pp 389-394. Walker, Donald (1987), "Knowledge Resource Tools for Accessing Large Text Files," in Machine Translation: Theoretical and Methodological Issues, Sergei Nirenberg, ed., Cambridge University Press, Cambridge, England. Weiss, Stephen (1973), "Learning to Disambiguate," Information Storage and Retrieval, v. 9, pp 33-41. Yarowsky, David (1992), "Word-Sense Disambiguation Using Statistical Models of Roget's Categories Trained on Large-Corpora," Proceedings COLING-92. Yngve, Victor (1955), "Syntax and the Problem of Multiple Meaning," in Machine Translation of languages, William Locke and Donald Booth, eds., Wiley, NY. Zernik, Uri (1990) "Tagging Word Senses in Corpus: The Needle in the Haystack Revisited," in Text-Based Intelligent Systems: Current Research in Text Analysis, Information Extraction, and Retrieval, P.S. Jacobs, ed., GE Research & Development Center, Schenectady, NY. Zernik, Uri (1991) "Trainl vs. Train2: Tagging Word Senses in Corpus," in Zemik (ed.) Lexical Acquisition: Exploiting On-Line Resources to Build a Lexicon, Lawrence Erlbaum, Hillsdale, NJ.
A PARAMETERIZED APPROACH TO INTEGRATING ASPECT WITH LEXICAL-SEMANTICS FOR MACHINE TRANSLATION Bonnie J. Dorr* Institute for Advanced Computer Studies A.V. Williams Building University of Maryland College Park, MD 20742 [email protected] ABSTRACT This paper discusses how a two-level knowledge rep- resentation model for machine translation integrates as- pectual information with lexical-semantic information by means of parameterization. The integration of aspect with lexical-semantics is especially critical in machine translation because of the lexical selection and aspec- tual realization processes that operate during the pro- duction of the target-language sentence: there are of- ten a large number of lexical and aspectual possibili- ties to choose from in the production of a sentence from a lexical semantic representation. Aspectual informa- tion from the source-language sentence constrains the choice of target-language terms. In turn, the target- language terms limit the possibilities for generation of aspect. Thus, there is a two-way communication chan- nel between the two processes. This paper will show that the selection/realization processes may be parame- terized so that they operate uniformly across more than one language and it will describe how the parameter- based approach is currently being used as the basis for extraction of aspectual information from corpora. INTRODUCTION This paper discusses how the two-level knowledge representation model for machine translation presented by Dorr (1991) integrates aspectual information with lexical-semantic information by means of parameteriza- tion. The parameter-based approach borrows certain ideas from previous work such as the lexical-semantic model of Jackendoff (1983, 1990) and models of as- pectual representation including Bach (1986), Comrie (1976), Dowty (1979), Mourelatos (1981), Passonneau (1988), Pustejovsky (1988, 1989, 1991), and Vendler (1967). However, unlike previous work, the current approach examines aspectual considerations within the context of machine translation. More recently, Bennett *This paper describes research done in the Institute for Advanced Computer Studies at the University of Maryland. A special thanks goes to Terry Gaasterland and Ki Lee for helping to close the gap between properties of aspectual in- formation and properties of lexical-semantic structure. In addition, useful guidance and commentary during this re- search were provided by Bruce Dawson, Michael Herweg, Jorge Lobo, Paola Merlo, Norbert Hornstein, Patrick Saint- Dizier, Clare Voss, and Amy Weinberg. (1) Syntactic: (a) Null Subject divergence: E: I have seen Mary 4. S: He vlsto a Marls (Have seen (to) Mary) (b) Constituent Order divergence, E: I have seen Mary 4. G: Ich habe Marie gesehen (I have Mar~" seen) (2) Lexicel-Semantic: (a) Thematic divergence: E: I like Mary 4. $: Marls me gusts a mf (Mary pleases me) (b) Structural divergence: E: John entered the house 4. S: Juan entr6 en la cas& (John entered in the house) (c) Cat esorlal divergence: E: Yo ten~o hambre 4* S: Ich habe Hun~er (I have hun~er) (3) Aepectuah (a) lterative Divergence: E: John stabbed Mary 4. S: Juan le dio una puflaJada a Marls (John gave a knife-wound to Mary) S: Juan le dio pufialadas a Marls (John gave knife-wounds to Mary) (b) Duratlve Divergence, E: John met/knew Mary 4* S: Juan coaoci6 a Marls (John met Mary) S: Juan conoci£ a M&rfa (John knew Merit) Figure 1: Three Levels of MT Divergences et el. 
(1990) have examined aspect and verb semantics within the context of machine translation in the spirit of Moens and Steedman (1988). This paper borrows from, and extends, these ideas by demonstrating how this theoretical framework might be adapted for cross- linguistic applicability. The framework has been tested within the context of an interlingual machine transla- tion system and is currently being used as the basis for extraction of aspectual information from corpora. The integration of aspect with lexical-semantics is es- pecially critical in machine translation because of the lexical selection and aspectual realization processes that operate during the production of the target-language sentence: there are often a large number of lexical and aspectual possibilities to choose from in the production of a sentence from a lexical semantic representation. As- pectual information from the source-language sentence constrains the choice of target-language terms. In turn, the target-language terms limit the possibilities for gen- eration of aspect. Thus, there is a two-way communica- tion channel between the two processes. Figure 1 shows some of the types of parametric diver- 9ences (Dorr, 1990a) that can arise cross-linguistically. 257 We will focus primarily on the third type, aspectual dis- tinctions, and show how these may be discovered through the extraction of information in a monolingual corpus. We adopt the viewpoint that the algorithms for extrac- tion of syntactic, lexical-semantic, and aspectual infor- mation must be well-grounded in linguistic theory. Once the information is extracted, it may then be used as the basis of parameterized machine translation. Note that we reject the commonly held assumption that the use of corpora necessarily suggests that statistical or example- based techniques be used as the basis for a machine translation system. The following section discusses how the two levels of knowledge, aspectual and lexical-semantic, are used in an interlingual model of machine translation. We then describe how this information may be parameterized. Fi- nally, we discuss how the automatic acquisition of new lexical entries from corpora is achieved within this frame- work. TWO-LEVEL Kit MODEL: ASPECTUAL AND LEXICAL-SEMANTIC KNOWLEDGE The hypothesis proposed by Tenny (1987, 1989) is that the mapping between cognitive structure and syn- tactic structure is governed by aspectual properties. The implication is that lexical-semantic knowledge ex- ists at a level that does not include aspectual infor- mation (though these two types of knowledge may de- pend on each other in some way). This hypothesis is consistent with the view adopted here: we assume that lexical semantic knowledge consists of such notions as predicate-argument structure, well-formedness condi- tions on predicate-argument structures, and procedures for lexical selection of surface-sentence tokens; all other types of knowledge must be represented at some other level. Figure 2 shows the overall design of the UNITRAN machine translation system (Dorr, 1990a, 1990b). The system includes a two-level model of knowledge represen- tation (KR) (see figure 2(a)) in the spirit of Dorr (1991). The translation example shown here illustrates the fact that the English sentence John went to the store when Mary arrived can be translated in two ways in Spanish. This example will be revisited later. The lexical-semantic representation that is used as the interlingua for this system is an extended version of lexi. 
cal conceptual structure (henceforth, LCS) (see Jackend- off (1983, 1990)). This representation is the basis for the lexical-semantic level that is included in the KR compo- nent. The second level that is included in this component is the aspectual structure. The KR component is parameterized by means of se- lection charts and coercion functions. The notion of se- lection charts is described in detail in Dorr and Gaaster- land (submitted) and will be discussed in the context of machine translation in the section on the Selection of Temporal Connectives. The notion of coercion func- tions was introduced for English verbs by Bennett et al. (1990). We extend this work by parameterizing the coer- cion functions and setting the parameters to cover Span- ish; this will be discussed in the section on Selection and (~) (b) I Lexical- Semantic Structure I Aspectual Structure I Syntactic Structure Selection ~nd Coercion P&r&meters for English Selection and Coercion P~r~meters for Spanish John went to the store when Mary •rrived Juan fue 8 Is tiend• ..~ cu•ndo M•rf• lleg6 -4~ Ju•n fue • 18 fiend& 81 llegar Marf• Figure 2: Overall Design of UNITRAN Aspectual Realization of Verbs. An example of the type of coercion that will be con- sidered in this paper is the use of durative adverbials: { foranhour. } (4) (i) John ransacked the house until Jack arrived. { foranhour. } (ii) John destroyed the house until Jack arrived. (iii), John obliterated the house{ for an hour.until Jack arrived. } Durative adverbials (e.g., for an hour and until...) are viewed as anti-cuiminators (following Bennett et al. (1990)) in that they change the main verb from an ac- tion that has a definite moment of completion to an ac- tion that has been stopped but not necessarily finished. For example, the verb ransack is allowed to be modified by a durative adverbial since it is inherently durative; thus, no coercion is necessary in order to use this verb in the durative sense. In contrast, the verb destroy is inherently non-durative, but it is coerced into a durative action by means of adverbial modification; this accounts for the acceptability of sentence (4)(ii). 1 The verb oblit- erate must necessarily be non-durative (i.e., it is inher- ently non-durative and non-coercible), thus accounting for the ill-formedness of sentence (4)(iii). In addition to the KR component, there is also a syn- tactic representation (SR) component (see figure 2(b)) that is used for manipulating the syntactic structure of a sentence. We will omit the discussion of the SR compo- nent of UNITRAN (see, for example, Dorr (1987)) and will concern ourselves only with the KR component for the purposes of this paper. The remainder of this section defines the dividing line between lexical knowledge (i.e., properties of predicates 1 Some native speakers consider sentence (4)(ii) to be odd, at best. This is additional evidence for the existence of in- herent features and suggests that, in some cases (i.e., for some native speakers), the inherent features are considered to be absolute overrides, even in the presence of modifiers that might potentially change the aspectual features. 258 and their arguments) and non-lexical knowledge (i.e., aspect), and discusses how these two types of knowledge are combined in the Kit component. Lexlcal-Semantic Structure. 
Lexical-semantic struct- ure exists at a level of knowledge representation that is distinct from that of aspect in that it encodes infor- mation about predicates and their arguments, plus the potential realization possibilities in a given language. In terms of the representation proposed by Jackendoff (1983, 1990), the lexical-semantic structures for the two events of figure 2 would be the following: (5) (i) [Event GOLoc ([Thing John], [Position TOboc ([Thing John], [Location Storel)l)] (ii) [Event GOLoc ([Thin s Mary], [Position TOLoc ([Thing Mary], [Location el)])] 2 Although temporal connectives are not included in Jack- endoff's theory, it is assumed that these two structures would be related by means of a lexical-semantic token corresponding to the temporal relation between the two events. The lexical-semantic representation provided by Jack- endoff distinguishes between events and states; however, this distinction alone is not sufficient for choosing among similar predicates that occur in different aspectual cat- egories. In particular, events can be further subdivided into more specific types so that non-cnlminative events (i.e., events that do not have a definite moment of com- pletion) such as ransack can be distinguished from cul- minative events (i.e., events that have a definite moment of completion) such as obliterate. This is a crucial dis- tinction given that these two similar words cannot be used interchangeably in all contexts. Such distinctions are handled by augmenting the lexical-semantic frame- work so that it includes aspectual information, which we will describe in the next section. Aspectual Structure. Aspect is taken to have two components, one comprised of inherent features (i.e., those features that distinguish between states and events) and another comprised of non-inherent features (i. e., those features that define the perspective, e.g., sim- ple, progressive, and perfective). This paper will focus primarily on inherent features, z Previous representational frameworks have omitted aspectual distinctions among verbs, and have typically merged events under the single heading of dynamic (see, e.g., Yip (1985)). However, a number of aspectually oriented lexical-semantic representations have been pro- posed that more readily accommodate the types of as- pectual distinctions discussed here. The current work borrows extends these ideas for the development of an interlingual representation. For example, Dowty (1979) and Vendler (1967) have proposed a four-way aspectual classification system for verbs: states, activities, achieve- ments, and accomplishments, each of which has a dif- ferent degree of telicity (i.e., culminated vs. nonculmi- 2The empty location denoted by e corresponds to an un- realized argument of the predicate arrive. aSee Dorr and Gaasterland (submitted) for a discussion about non-inherent aspectua] features. nated), and/or atomicity (i.e., point vs. extended). 4 A similar scheme has been suggested by Bach (1986) and Pustejovsky (1989) (following Mourelatos (1981) and Comrie (1976)) in which actions are classified into states, processes, and events. The lexical-semantic structure adopted for UNITRAN is an augmented form of Jackendoff's representation in which events are distinguished from states (as be- fore), but events are further subdivided into activities, achievements, and accomplishments. The subdivision is achieved by means of three features proposed by Ben- nett etal. 
(1990) following the framework of Moens and Steedman (1988): -t-dynamic (i.e., events vs. states, as in the Jackendoff framework), +telic (i.e., culmina- tive events (transitions) vs. noneulminative events (ac- tivities)), and -I-atomic (i.e., point events vs. extended events). We impose this system of features on top of the current lexical-semantic framework. For example, the lexical entry for all three verbs, ransack, obliterate, and destroy, would contain the following lexical-semantic representation: (6) [Event CAUSE ([Thing X], [Event GOLoc ([Thing X], [Position TOLoc ([X John], [Property DESTROYED])])])] The three verbs would then be distinguished by annotat- ing this representation with the aspectual features [+d,- t,-a] for the verb ransack, [+d,+t,-a] for the verb destroy, and [+d,+t,+a] for the verb obliterate, thus providing the appropriate distinction for cases such as (4). 5 In the next section, we will see how the lexical- semantic representation and the aspeetual structure are combined parametrically to provide the framework for generating a target-language surface form. CROSS-LINGUISTIC APPLICABILITY: PARAMETERIZATION OF THE TWO-LEVEL MODEL Although issues concerning lexical-semantics and as- pect have been studied extensively, they have not been examined sufficiently in the context of machine trans- lation. Machine translation provides an appropriate testbed for trying out theories of lexical semantics and aspect. The problem of lexical selection during genera- tion of the target language is the most crucial issue in this regard. The current framework facilitates the se- lection of temporal connectives and the aspectual real- ization of verbs. We will discuss each of these, in turn, 4Dowty's version of this classification collapses achieve- ments and accomplishments into a single event type called a transition, which covers both the point and extended ver- sions of the event type. The rationale for this move is that all events have some duration, even in the case of so-called punctual events, depending on the granulaxity of time in- volved. (See Passonneau (1988) for an adaptation of this scheme as implemented in the PUNDIT system.) For the purposes of this discussion, we will maintain the distinction between achievements and accomplishments. 5This system identifies five distinct categories of predi- State: i-d] (llke, know) Activity (point): i-t-d, -t, -I-a] (tap, wink) cates: Activity (extended): i-I-d, -t, -a I (ransack, swim) Achievement: [+d, +t, h-a] (obliterate, kill) Accomplishment: i-I-d, -I-t, -a] (destroy, 8rrlve) 259 Matrix Adjunct Selected Features Perspective Type Perspective Word [4-d,-t,4-a pelf [+d,+t,4- a/ simp, perf When [4-d,-t,:l: a 1 perfeetive l+d,+t,-I-a I strop, perf Cuando [4-d,-t-t,4- ~ perf [+d,+t,+a] romp, perf AI Figure 3: Selection Charts for When, Cuando, and Al showing how selection charts and coercion functions are used as a means of parameterization for these processes. Selection of Temporal Connectives: Selection Charts. In order to ensure that the framework pre- sented here is cross-linguistically applicable, we must provide a mechanism for handling temporal connective selection in languages other than English. For the pur- poses of this discussion, we will examine distinctions be- tween English and Spanish only. Consider the following example: (7) (i) John went to the store when Mary arrived. (it) John had gone to the store when Mary arrived. 
In Dorr (1991), we discussed the selection of the lexical connective when on the basis of the temporal relation between the main or matrix clause and the subordinate or adjunct clause. 6 For the purposes of this paper, we will ignore the temporal component of word selection and will focus instead on how the process of word selec- tion may be parameterized using the aspectual features described in the last section. To translate (7)0) and (it) into Spanish, we must choose between the lexical tokens cuando and al in or- der to generate the equivalent temporal connective for the word when. In the case of (7)(i), there are two pos- sible translations, one that uses the connective cuando, and one that uses the connective ai: (S) (i) Juan fue a la tienda euando Maria lleg6. (it) Juan fue a la tienda al llegar Maria. Either one of these sentences is an acceptable translation for (7)0). However, the same is not true of (7)(it): 7 (9) (i) Juan habfa ido a la tienda euando Maria lleg6. (it) Juan habia ido a la tienda al Ilegar Maria. Sentence (9)(i) is an acceptable translation of (7)(it), but (9)(it) does not mean the same thing as (7)(it). This second sentence implies that John has already gone to the store and come back, which is not the preferred read- ing. In order to establish an association between these con- nectives and the aspectual interpretation for the two events (i.e., the matrix and adjunct clause), we com- pile a table, called a selection chart, for each language that specifies the contexts in which each connective may be used. Figure 3 shows the charts for when, cuando, and al. s The selection charts can be viewed as inverted dic- tionary entries in that they map features to words, not SThis work was based on theories of tense/time by Horn- stein (1990) and Allen (1983, 1984). rI am indebted to Jorge Lobo (personal communication, 1991) for pointing this out to me. aThe perfective and simple aspects are denoted as per] and strop, respectively. words to features. 9 The charts serve as a means of pa- rameterization for the program that generates sentences from the interlingual representation in that they are al- lowed to vary from language to language while the pro- cedure for choosing temporal connectives applies cross- linguistically, l° The key point to note is that the chart for the Spanish connective al is similar to that for the English connective when except that the word al requires the matrix event to have the +telic feature (i.e., the ma- trix action must reach a culmination). This accounts for the distinction between cuando and al in sentences (9)(i) and (9)(it) above. 11,1~ These tables are used for the selection of temporal connectives during the generation process (for which the relevant index into the tables would be the aspectual features associated with the interlingual representation). The selection of a temporal connective, then, is simply a table look-up procedure based on the aspectual features associated with the events. Selection and Aspectual Realization of Verbs: Coercion Functions. Above, we considered the se- lection of temporal connectives without regard to the selection and aspectual realization of the lexical items that were being connected. Again, to ensure that the framework presented here is cross-linguistically applica- ble, we must provide a mechanism for handling lexical se- lection and aspectual realization in languages other than English. Consider the English sentence I stabbed Mary. 
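Before turning to the realization of the verbs themselves, here is a minimal sketch of the table look-up just described. The feature encoding and the two chart entries are simplified stand-ins for the full charts of figure 3 (temporal relations are ignored, as in the abbreviated charts), so the entries and feature values below are assumptions made for illustration.

# Aspectual features: d = dynamic, t = telic, a = atomic; values +1, -1, or 0 (unspecified).
def satisfies(constraint, event):
    # A chart constraint is satisfied when every feature it specifies (non-zero)
    # agrees with the corresponding feature of the event.
    return all(event.get(f) == v for f, v in constraint.items() if v != 0)

# Simplified selection chart: 'al' additionally requires the matrix event to be
# telic, which is what distinguishes it from 'cuando' in examples (8) and (9).
CHART = [
    {"word": "cuando", "matrix": {"d": +1},          "adjunct": {"d": +1, "t": +1}},
    {"word": "al",     "matrix": {"d": +1, "t": +1}, "adjunct": {"d": +1, "t": +1}},
]

def select_connectives(matrix_event, adjunct_event):
    return [entry["word"] for entry in CHART
            if satisfies(entry["matrix"], matrix_event)
            and satisfies(entry["adjunct"], adjunct_event)]

# (8): simple past matrix 'fue a la tienda', taken here as [+d,+t,-a]: both connectives work.
print(select_connectives({"d": +1, "t": +1, "a": -1}, {"d": +1, "t": +1, "a": -1}))
# (9): the perfective 'habia ido' reading is modelled here as a non-telic matrix,
# so only 'cuando' survives, matching the judgement on (9)(ii).
print(select_connectives({"d": +1, "t": -1, "a": -1}, {"d": +1, "t": +1, "a": -1}))

With connective selection in place, return to the sentence I stabbed Mary.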
This may be realized in at least two ways in Spanish: 13 (10) (i) Juan le dio pufialadaa a Maria (it) Juan le dio una pufialada a Maria 9 Note, however, that the features correspond to the events connected by the words, not to the words themselves. 1°Because we are not discussing the realization of temporal information (i.e., the time relations between the matrix and adjunct events), an abbreviated form of the actual chart is being used. Specifically, the chart shown in figure 3 assumes that the matrix event occurs before the adjunct event. See Dorr (1991) and Dorr and Gaasterland (submitted) for more details about the relationship between temporal information and aspectual information and the actual procedures that are used for the selection of temporal connectives. 11 It has recently been pointed out by Michael Herweg (per- sonal communication, 1991b) that the telic feature is not traditionally used to indicate a revoked consequence state (e.g., the consequence state that results after returning from the "going to the store" event), but it is generally intended to indicate an irrevocable, culminative, consequence state. Thus, it has been suggested that al acts more as a com- plementizer than as a "pure" adverbial connective such as cuando; this would explain the realization of the adjunct not as a tensed adverbial clause, but as an infinitival subordinate clause. This possibility is currently under investigation. 12Space limitations do not permit the enumeration of the other selection charts for temporal connectives, but see Dorr and Gaasterland (submitted) for additional examples. Some of the connectives that have been compiled into tables are: after, as soon as, at the moment that, before, between, during, since, so long as, until, while, etc. 13Many other possibilities are available that are not listed here (e.g., Juan le acuchill6 a Maria). 260 Both of these sentences translate literally to "John gave stab wound(s) to Mary." However, the first sentence is the repetitive version of the action (i.e., there were multiple stab wounds), whereas the second sentence is the non-repetitive version of the action (i.e., there was only one stab wound). This distinction is character- ized by means of the atomicity feature. In (10)(i), the event is associated with the features [+d,+t,-a], whereas, in (10)(it) the event is associated with the features [+d,+t,+a]. According to Bennett et al. (1990), predicates are al- lowed to undergo an atomicity "coercion" in which an inherently non-atomic predicate (such as dio) may be- come atomic under certain conditions. These conditions are language-specific in nature, i.e., they depend on the lexical-semantic structure of the predicate in question. Given the current featural scheme that is imposed on top of the lexical-semantic framework, it is easy to spec- ify coercion functions for each language. We have devised a set of coercion functions for Spanish analogous to those proposed for English by Bennett et al. The feature coercion parameters for Spanish differ from those for English. For example, the atomicity function does not have the same applicability in Spanish as it does for English. We saw this earlier in sentence (10), in which a singular NP verbal object maps a [-a] predicate into a [+a] predicate, i.e., a non-atomic event becomes atomic if it is associated with a singular NP object. The parameterized mappings that we have constructed for Spanish are shown in figure 4(a). 
For the purposes of comparison, the analogous English functions proposed by Bennett et al. (1990) are shown in figure 4(b). 14 Using the functions, we are able to apply the notion of feature-based coercion cross-linguistically, while still accounting for parametric distinctions. Thus, feature coercion provides a useful foundation for a model of in- terlingual machine translation. A key point about the aspectual features and coercion functions is that they allow for a two-way communica- tion channel between the two processes of lexical selec- tion and aspectual realization, is To clarify this point, we return to our example that compares the three English verbs, ransack, destroy, and obliterate (see example (4) above). Recall that the primary distinguishing feature among these three verbs was the notion of telicity (i.e., culminated vs. nonculminated). The lexical-semantic representation for all three verbs is identical, but the telicity feature differs in each case. The verb ransack is +telic, obliterate is -telic, and destroy is inherently -telic, although it may be coerced to +telic through the use of a durative adverbial phrase. Because destroy is a "co- 14Figure 4(b) contains a subset of the English functions. The reader is referred to Bennett et al. (1990) for additional functions. The abbreviations C and AC stand for culminator, and anti-culminator, respectively. lSBecause the focus of this paper is on the lexical-semantic representation and associated aspectual parameters, the de- tails of the algorithms behind the implementation of the two- way communication channel are not presented here; these are presented in Dorr and Gaasterland (submitted). We will il- lustrate the intuition here by means of example. (a) (b) Mapping Telicity (C) f(-t)-.+t Telicity (AC) f(+t)-*-t Atomicity f(+a)--.*-a Parameters singular NP complements ' preterit past progressive morpheme imperfect past progressive morpheme plural NP complements Spanish Examples Juan le dio una pufialada a Marts 'John stabbed Mary (once)' Juan conoci6 a Marts 'John met Mary (once)' Lee estaba pintando un cuadro 'Lee was painting a picture (~r some time)' Lee conocfa a Maria 'Lee knew Mary (for some time)' Chris est£ estornudan¢lo 'Chris is sneezing (repeatedly)' Juan le dio pufialadas a Maria 'John stabbed Mary (repeatedly)' Mapping Telicity (C) f(-t)--*+t Telicity (AC) f(+t)-*-t Atomicity f(+a)--*-a Enl$1ish Parameters singular NP complements eulminative duratives progressive morpheme non-culminative duratives progressive morpheme frequency adverbials Examples John ran a mile John ran until 6pro Lee was painting a picture Lee painted the pict'ure for an hour Chris is sneezing Chris ate a sandwich everyday Figure 4: Parameterization of Coercion Functions for English and Spanish ercible" verb, it is stored in the lexicon as +telic with a flag that forces -telic to be the inherent (i. e., default) set- ting. Thus, if we are generating a surface sentence from an interlingual form that matches these three verbs but we know the value of the telic feature from the context of the source-language sentence (i.e., we are able to de- termine whether the activity reached a definite point of completion), then we will choose ransack, if the setting is +telic, or obliterate or destroy, if the setting is -telic. In this latter case, only the word destroy will be selected if the interlingua includes a component that will be re- alized as a durative adverbial phrase. 
Once the aspectual features have guided the lexical selection of the verbs, we are able to use these selections to guide the aspectual realizations that will be used in the surface form. For example, if we have chosen the word obliterate we would want to realize the verb in the simple past or present (e.g., obliterated or obliterate) rather than in the progressive (e.g., was obliterating or is obliterating). Thus, the aspectual features (and coercion functions) are used to choose lexical items, and the choice of lexical items is used to realize aspectual features.

The coercion functions are crucial for this two-way channel to operate properly. In particular, we must take care not to blindly forbid non-atomic verbs from being realized in the progressive since point activities, which are atomic (e.g., tap), are frequently realized in the progressive (e.g., he was tapping the table). In such cases the progressive morpheme is being used as an iterator of several identical atomic events as defined in the functions shown in figure 4. Thus, we allow "coercible" verbs (i.e., those that have a +<feature> specification) to be selected and realized with the non-inherent feature setting if coercion is necessary for the aspectual realization of the verb.

ACQUISITION OF NOVEL LEXICAL ENTRIES: DISCOVERING THE LINK BETWEEN LCS AND ASPECT

In evaluating the parameterization framework proposed here, we will focus on one evaluation metric, namely the ease with which lexical entries may be automatically acquired from on-line resources. While testing the framework against this metric, a number of results have been obtained, including the discovery of a fundamental relationship between aspectual information and lexical-semantic information that provides a link between the primitives of Jackendoff's LCS representations and the features of the aspectual scheme described here.

Approach. A program has been developed for the automatic acquisition of novel lexical entries for machine translation.16 We are in the process of building an English dictionary, and intend to use the same approach for building dictionaries in other languages (e.g., Spanish, German, Korean, and Arabic). The program automatically acquires aspectual representations from corpora (currently the Lancaster/Oslo-Bergen17 (LOB) corpus) by examining the context in which all verbs occur and then dividing them into four groups: state, activity, accomplishment, and achievement. As we noted earlier, these four groups correspond to different combinations of aspectual features (i.e., telic, atomic, and dynamic) that have been imposed on top of the lexical-semantic framework. Thus, if we are able to isolate these components of verb meaning, we will have made significant progress toward our ultimate goal of automatically acquiring full lexical-semantic representations of verb meaning.

The division of verbs into these four groups is based on several syntactic tests that are well-defined in the linguistic literature such as those by Dowty (1979) shown in figure 5.18 Some tests of verb aspect shown here could not be implemented in the acquisition program because they require human interpretations. These tests are marked by asterisks (*). For example, Test 2 requires human interpretation to determine whether or not a verb has habitual interpretation in simple present tense. The algorithm for determining the aspectual category of verbs is shown in figure 6.
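As a rough illustration of how the implemented tests drive the classification, the sketch below (ours, not the actual acquisition program) mirrors the elimination step of that algorithm; applies(test, sentence) is a hypothetical stand-in for the syntactic pattern matching.

    # Sketch of the category-elimination step (cf. figure 6, step 3).
    # Only the implementable (non-asterisked) tests of figure 5 are listed;
    # each test maps to the categories for which its answer is YES.
    CATEGORIES = {"STA", "ACT", "ACC", "ACH"}
    TEST_TABLE = {
        "X-ing is grammatical":                {"ACT", "ACC", "ACH"},   # Test 1
        "spend an hour X-ing / X for an hour": {"STA", "ACT", "ACC"},   # Test 3
        "take an hour X-ing / X in an hour":   {"ACC", "ACH"},          # Test 4
        "complement of stop":                  {"STA", "ACT", "ACC"},   # Test 7
        "complement of finish":                {"ACC"},                 # Test 8
        "occurs with studiously, carefully":   {"ACT", "ACC"},          # Test 11
    }

    def classify_verb(sentences, applies):
        """Intersect away categories until one remains or sentences run out."""
        candidates = set(CATEGORIES)
        for sentence in sentences:
            for test, yes_categories in TEST_TABLE.items():
                if applies(test, sentence):
                    candidates &= yes_categories   # drop categories marked NO
                if len(candidates) <= 1:
                    return candidates
        return candidates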
Note that step 3 applies Dowty's tests to a set of sentences corresponding to a particular verb until a unique category has been iden- tified. In order for this step to succeed, we must en- sure that Dowty's tests allow the four categories to be uniquely identified. However, a complication arises for the state category: out of the six tests that have been implemented from Dowty's table, only Test 1 uniquely 16The implementation details of this program are reported in Dorr and Lee (1992). lrICAME -- Norwegian Computing Center for the Human- ities (tagged version). lSThis table is presented in Bennett et al. (1990), p. 250, based on Dowry (1979). Test STA ACT ACC ACH 1. X-ing Is grammatical no yes yes yes * 2. has habitual interpretation no yes yes yes in simple present tense 3. spend an hour X-ing, yes yes yes no X for an hour 4. take an hour X-ing, no no yes yes X in an hour * 5. X for an hour entails yes yes no no X at all times in the hour * 6. Y is X-ing entails no yes no no Y has X-ed 7. complement of stop yes yes yes no 8. complement of finish no no yes no * 9. ambiguity with almost no no yes no *10. Y X-ed in an hour entails no no yes no Y was X-ing during that hour 11. occurs with no yes yes no studiously, carefully, etc. Figure 5: Dowty's Eleven Tests of Verb Aspect 1. Pick out main verbs from all sentences in the corpus and store them in a list called VERBS. 2. For each verb v in VERBS, find all sentences containing v and store them in an array SENTENCES[i] (where i is the indexical position of v in VERBS). 3. For each sentence set Sj in SENTENCE[j], loop through each sentence s in Sj: (a) Loop through each test t in figure 5. (b) See if t applies to s; if so, eliminate all aspectual categories with a NO in the row of figure 5 corresponding to test t. (c) Eliminate possibilities until a unique aspectual category is identified or until all sentences in SENTENCES have been exhausted. Figure 6: Algorithm for Determining Aspectual Cate- gories sets states apart from the other three aspectual cate- gories. That is, Test 1 is the only implemented test that has a value in the first column that is different from the other three columns. Note, however, that the value in this column is NO, which poses a problem for the above algorithm. Herein lies one of the major stumbling blocks for the extraction of information from corpora: it is only possible to derive new information in cases where there is a YES value in a given column. By definition, a cor- pus only provides positive evidence; it does not provide negative evidence. We cannot say anything about sen- tences that do not appear in the corpus. Just because a given sentence does not occur in a particular sample of English text does not mean that it can never show up in English. This means we are relying solely on the information that does appear in the corpus, i.e., we are only able to learn something new about a verb when it corresponds to a YES in one of the rows of figure 5.19 Given that the identification of stative verbs could not be achieved by Dowty's tests alone, a number of hypothe- ses were made in order to identify states by other means. A preliminary analysis of the sentences in the corpus re- veals that progressive verbs are generally preceded by verbs such as be, like, hate, go, stop, start, etc. These 19 Note that this is consistent with principles of recent mod- els of language acquisition. For example, the Subset Principle proposed by Berwick (1985, p. 
37) states that "the learner should hypothesize languages in such a way that positive ev- idence can refute an incorrect guess." 262 Verbs Jackendoff Primitive be BE like BE hate BE go GO stop GO start GO finish GO avoid STAY continue STAY keep STAY Aspectual Category state ~STA) state (STA) state (STA) non-state q ACH) non-state ~ ACH) non-state q ACH) non-state q ACH) non-state ACT) non-state ACT) non-state ACT) Aspectual Features [-d l +d, +t, +a] +d, +t, +a l +d, +t, +a] +d, +t, -t-a] l+d, -t l [+d, -t] [+d, -t] Figure 7: Circumstantial Verbs Categorized By Jackend- off's Primitives Test to see if X appears in the progressive. 1. If YES, then apply one of the tests that distinguishes ac- tivities from achievements (i.e., Test 3, Test 4, or Test 7). 2. If NO, apply Test 3 to rule out achievement or Test 4 to uniquely identify as an achievement. 3. Finally, if the aspectual category is not yet uniquely iden- tified, either apply Test 11 to rule out activity or assume state. Figure 8: Algorithm for Identifying Stative Verbs verbs fall under a lexical-semantic category identified by Jackendoff (1983, 1990) as the circumstantial category. Based on this observation, the following hypothesis has been made: Hypothesis 1: The only types of verbs that are allowed to precede progressive verbs are circumstantial verbs. Circumstantial verbs subsume stative verbs, but they also include verbs in other categories. In terms of the lexical-semantic primitives proposed by Jackendoff (1983, 1990), the circumstantial verbs found in a sub- set of the corpus are categorized as shown in figure 7. An intriguing result of this categorization is that the circumstantial verbs provide a systematic partitioning of Dowty's aspectual categories (i.e., states, activities, and achievements) into primitives of Jackendoff's system (i.e., BE, STAY, and GO). Thus, the analysis of the cor- pora has provided a crucial link between the primitives of Jackendoff's LCS representation and the features of the aspectual scheme described earlier. If this is the case, then the framework has proven to be well-suited to the task of automatic construction of conceptual structures from corpora. Assuming this partitioning is correct and complete, Hypothesis 1 can be refined as follows: Hypothesis 1'~ The only types of verbs that are allowed to precede progressive verbs are states, achievements, and activi- ties. If this hypothesis is valid, the program is in a better posi- tion to identify stative verbs because it corresponds to a test that requires positive evidence rather than negative evidence. The hypothesis can be described by adding the following line to figure 5: Verbs Aspectual Category(s) doing (ACC) facing (ACC ACT) asking (ACC ACT) made (ACC) drove ~ACC ACT) welcome (STA ACC ACT ACH) emphasized (STA ACC ACT ACH) thanked (ACC ACT STA) staged (ACC) make (ACC) continue ~ACC ACT) writes ~ACC) building ~ACC) running (ACC ACT) paint { ACC) finds ( ACC ACT) arrives { ACC ACT) jailed {ACC ACT STA) nominating (ACH ACT ACC read ( ACC ACT) ) ensure (STA ACC ACT ACH) act ( ACT ACC) carry (ACC) exercise (ACC) impose (STA ACC ACT ACH) contain ~STA ACC ACT ACH) infuriate (ACC ACT) Figure 9: Aspectual Classification Results whether X is stative. 2° Another hypothesis that has been adopted pertains to the distribution of progressives with respect to the verb go: Hypothesis ~z The only types of progressive verbs that are allowed to follow the verb go are activities. 
This hypothesis was adopted after it was discovered that constructions such as go running, go skiing, go swimming, etc. appeared in the corpus, but not construc- tions such as go eating, go writing, etc. The hypothesis can be described by adding the following line to figure 5: [ Test [ STA [ ACT [ ACC ] ACH [ 13. go X-ing is grammatical no yes no no The combination of Dowty's tests and these hypoth- esized tests allows the four aspectual categories to be more specifically identified. Results and Future Work. Preliminary results have been obtained from running the program on 219 sen- tences of the LOB corpus (see figure 9). 21 Note that the program was not able to pare down the aspectual cate- gory to one in every case. We expect to have a significant improvement in the classification results once the sample size is increased. Presumably more tests would be needed for additional improvements in results. For example, we have not pro- posed any tests that would guarantee the unique identi- fication of accomplishments. Such tests are the subject of future research. I Te., i I I Ace i AC. I 12. X <verb>-in~ is ~rammatical yes yes no yes Because there is a YES in the column headed by STA, verbs satisfying this test are potentially stative. Thus, once a verb X is found that satisfies this test, we apply the (heuristic) algorithm shown in figure 8 to determine 2°Note that this algorithm does not guarantee that states will be correctly identified in all cases given that step 3 is a heuristic assumption. However, if Test 12 has applied, and state is still an active possibility, it is considerably safer to assume the verb is a state than it would be otherwise because we have eliminated accomplishments. 21 For brevity, only a subset of the verbs are shown here. 263 In addition, research is currently underway to deter- mine the restrictions (analogous to those shown in fig- ure 5) that exist for other languages (e.g., Spanish, Ger- man, Korean, and Arabic). Because the program is para- metrically designed, it is expected to operate uniformly on corpora in other languages as well. Another future area of research is the automatic ac- quisition of parameter settings for the construction of selection charts and aspectual coercion mappings on a per-language basis. SUMMARY This paper has examined a two-level knowledge repre- sentation model for machine translation that integrates aspectual information based on theories by Bach (1986), Comrie (1976), Dowty (1979), mourelatos (1981), Pas- sonneau (1988), Pustejovsky (1988, 1989, 1991), and Vendler (1967), and more recently by Bennett et al. (1990) and Moens and Steedman (1988), with lexical- semantic information based on Jackendoff (1983, 1990). We have examined the question of cross-linguistic ap- plicability showing that the integration of aspect with lexical-semantics is especially critical in machine transla- tion when there are a large number of temporal connec- tives and verbal selection/realization possibilities that may be generated from a lexical semantic representa- tion. Furthermore, we have illustrated that the se- lection/realization processes may be parameterized, by means of selection charts and coercion functions, so that the processes may operate uniformly across more than one language. 
Finally, we have discussed the application of the theoretical foundations to the automatic acquisi- tion of aspectual representations from corpora in order to augment the lexical-semantic representations that have already been created for a large number of verbs. REFERENCES Allen, James. F. (1983) "Maintaining Knowledge about Temporal In- tervals," Communications ol the ACM 26:11,832-843. Allen, James. F. (1984) "Towards a General Theory of Action and Time," Artificial Intelligence 23:2, 123-160. Bach, Emmon (1986) "The Algebra of Events," Linguistics and Phi- losophy 9, 5-16. Bennett, Winfield S., Tangs Herlick, Katherine Hoyt, Joseph Liro and Ana Santistebem (1990) "A Computational Model of Aspect and Verb Semantics," Machine Translation 4:4, 247-280. Berwick, Robert C. (1985) The Acquisition of Syntactic Knowledge, MIT Press, Cambridge, MA. Cowrie, Bernard (1976) Aspect, Cambridge University Press, Cam- bridge, England. Dorr, Bonnie J. (1987) "UNITRAN: A Principle-Ba~ed Approach to Machine Translation," AI Technical Report 1000, Master of Science thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology. Dorr, Bonnie J. (1990a) "Solving Thematic Divergences in Machine Translation," Proceedings of the ~Sth Annual Conference of the Association for Computational Linguistics, University of Pitts- burgh, Pittsburgh, PA, 127-134. Dorr, Bonnie J. (1990b) "A Cross-Linguistic Approach to Machine Translation," Proceedings of the Third International Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages, Linguistics Research Center, The University of Texas, Austin, TX, 13-32. Dorr, Bonnie J. (1991) "A Two-Level Knowledge Representation for Machine Translation: Lexical Semantics and Tense/Aspect," Pro- ceedings of the Lexical Semantics and Knowledge Representation Workshop, ACL-91, University of California, Berkeley, CA, 250- 263. 264 Dorr, Bonnie J. and Ki Lee (1992) "Building a Lexicon for Machine Translation: Use of Corpora for Aspectual Classification of Verbs," Institute for Advanced Computer Studies, University of Maryland, UMIACS TR 92-41, CS TR 2876. Dorr, Bonnie J., and Terry Gaasterland (submitted) "Using Temporal and Aspectual Knowledge to Generate Event Combinations from a Temporal Database," Third International Conference on Prin- ciples of Knowledge Representation and Reasoning, Cambridge, MA, 1992. Dowty, David (1979) Word Meaning and Montague Grammar, Reidel, Dordrecht, Netherlands. Herweg, Michael (1991a) "Aspectual Requirements of Temporal Con- nectives: Evidence for a Two-level Approach to Semantics," Pro- ceedings of the Lexical Semantics and Knowledge Representation Workshop, ACL-91, University of California, Berkeley, CA, 152- 164. Hornstein, Norbert (1990) As Time Goes By, MIT Press, Cambridge, MA. ICAME -- Norwegian Computing Center for the Humanities (tagged version) Laneaster/Oslo-Bergen Corpus, Bergen University, Nor- way. Jackendoff, Hay S. (1983) Semantics and Cognition, MIT Press, Cam- bridge, MA. Jackendoff, Ray S. (1990) Semantic Structures, MIT Press, Cam- bridge, MA. Lobs, Jorge (1991) personal communication. Moens, Marc and Mark Steedman (1988) "Temporal Ontology and Temporal Reference," Computational Linguistics 14:2, 15-28. Mourelatos, Alexander (1981) "Events, Processes and States," in Tense and Aspect, P. J. Tedeschi and A. Zaenen (eds.), Academic Press, New York, NY. Passonneau, Rebecca J. 
(1988) "A Computational Model of the Seman- tics of Tense and Aspect," Computational Linguistics 14:2, 44-60. Pustejovsky, James (1988) "The Geometry of Events," Center for Cog- nitive Science, Massachusetts Institute of Technology, Cambridge, MA, Lexicon Project Working Papers #24. Pustejovsky, James (1989) "The Semantic Representation of Lexicai Knowledge," Proceedings of the First Annual Workshop on Lexieal Acquisition, IJCAI.89, Detroit, Michigan. Pustejovsky, James (1991) "The Syntax of Event Structure," Cogni- tion. Tenny, Carol (1987) "Grammatiealizing Aspect and Affectedness," Ph.D. thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology. Tenny, Carol (1989) "The Aspectual Interface Hypothesis," Center for Cognitive Science, Massachusetts Institute of Technology, Cam- bridge, MA, Lexicon Project Working Papers #31. Vendler, Zeno (1967) "Verbs and Times," Linguistics in Philosophy, 97-121. Yip, Kenneth M. (1985) "Tense, Aspect and the Cognitive Represen- tation of Time," Proceedings of the 23rd Annual Conference of the Association for Computational Linguistics, Chicago, IL, 18-26.
USING CLASSIFICATION TO GENERATE TEXT Ehud Reiter* and Chris Mellish t Department of Artificial Intelligence University of Edinburgh 80 South Bridge Edinburgh EH1 1HN BRITAIN ABSTRACT The IDAS natural-language generation system uses a KL-ONE type classifier to perform content determination, surface realisation, and part of text planning. Generation-by-classification allows IDAS to use a single representation and reasoning com- ponent for both domain and linguistic knowledge, which is difficult for systems based on unification or systemic generation techniques. Introduction Classification is the name for the procedure of automatically inserting new classes into the cor- rect position in a KL-ONE type class taxonomy [Brachman and Schmolze, 1985]. When combined with an attribute inheritance system, classifica- tion provides a general pattern-matching and uni- fication capability that can be used to do much of the processing needed by NL generation sys- tems, including content-determination, surface- realisation, and portions of text planning. Classi- fication and inheritance are used in this manner by the IDAS natural language generation system [Re- iter et al., 1992], and their use has allowed IDAS to use a single knowledge representation system for both linguistic and domain knowledge. IDAS and I1 IDAS IDAS is a natural-language generation system that generates on-line documentation and help mes- sages for users of complex equipment. It supports user-tailoring and has a hypertext-like interface that allows users to pose follow-up questions. The input to IDAS is a point in question space, which specifies a basic question type (e.g., What-is-it), a component the question is being asked about (e.g., Computer23), the user's task (e.g. Replace-Part), the user's expertise-level *E-mail address is E. ReiterQed. ac .uk rE-mail address is C.NellishQed.ac.uk 265 (e.g., Skilled), and the discourse in-focus list. The generation process in IDAS uses the three stages described in [Grosz et al., 1986]: • Content Determination: A content-determin- ation rule is chosen based on the inputs; this rule specifies what information from the KB should be communicated to the user, and what overall format the response should use. • Text Planning: An expression in the ISI Sentence Planning Language (SPL) [Kasper, 1989] is formed from the information speci- fied in the content-determination rule. • Surface Realisation: The SPL is converted into a surface form, i.e., actual words interspersed with text-formatting commands. I1 I1 is the knowledge representation system used in IDAS to represent domain knowledge, grammar rules, lexicons, user tasks, user-expertise models, and content-determination rules. The I1 system includes: • an automatic classifier; • a default-inheritance system that inherits properties from superclass to subclass, us- ing Touretsky's [1986] minimal inferential dis- tance principle to resolve conflicts; • various support tools, such as a graphical browser and editor. An I1 knowledge base (KB) consists of classes, roles, and user-expertise models. User-expertise models are represented as KB overlays, in a simi- lar fashion to the FN system [Reiter, 1990]. Roles are either definitional or assertional; only defini- tional roles are used in the classification process. Roles can be defined as having one filler or an arbi- trary number of fillers, i.e., as having an inherent 'number restriction' of one or infinity. 
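Classification and default inheritance together behave like a pattern-matcher over role restrictions. The following is a rough Python rendering of the two operations; it is ours, not the I1 implementation, and dictionary equality stands in for KL-ONE subsumption of role restrictions.

    # Rough sketch of classification plus default inheritance (illustrative only).

    class Klass:
        def __init__(self, name, parent=None, restrictions=None, props=None):
            self.name = name
            self.parent = parent                    # single explicit parent
            self.restrictions = restrictions or {}  # definitional roles
            self.props = props or {}                # assertional/attributive roles

        def subsumes(self, other):
            """True if every restriction of self is satisfied by other
            (equality standing in for subsumption)."""
            return all(other.restrictions.get(role) == value
                       for role, value in self.restrictions.items())

    def classify(instance, taxonomy):
        """Return the most specific class in the taxonomy that subsumes instance."""
        subsumers = [k for k in taxonomy if k.subsumes(instance)]
        return max(subsumers, key=lambda k: len(k.restrictions), default=None)

    def inherit(klass, role):
        """Walk up the parent chain and return the closest value for role
        (a crude stand-in for minimal-inferential-distance inheritance)."""
        while klass is not None:
            if role in klass.props:
                return klass.props[role]
            klass = klass.parent
        return None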
An I1 class definition consists of at least one ex- plicitly specified parent class, primitive? and in- dividual? flags, value restrictions for definitional roles, and value specifications for assertional roles. I1 does not support the more complex definitional constructs of KL-ONE, such as structural descrip- tions. The language for specifying assertional role values is richer than that for specifying definitional role value restrictions, and allows, for example: measurements that specify a quantity and a unit; references that specify the value of a role in terms of a KL-ONE type role chain; and templates that specify a parametrized class definition as a role value. The general design goal of I1 is to use a very simple definitional language, so that classification is computationally fast, but a rich assertional lan- guage, so that complex things can be stated about entities in the knowledge base. An example I1 class definition is: (define-class open-door : parent open : type defined : prop ((actor animate-object) (actee door) (decomposition ( (*template* grasp (actor = actor *self*) (actee = (handle part) actee *self*)) (*template* turn (actor = actor *self*) (actee = (handle part) actee *self*)) (*template* pull (actor ffi actor *self*) (actee = (handle part) actee *self*)) )))) This defines the class Open-door to be a defined (non-primitive and non-individual) child of the class Open. Actor and Actee are defini- tional roles, so the values given for them in the above definition are treated as definitional value restrictions; i.e., an Open-Door entity is any Open entity whose Actor role has a filler sub- sumed by Animate-Object, and whose Actee role has a filler subsumed by Door. Decomposition is an assertional role, whose value is a list of three templates. Each tem- plate defines a class whose ancestor is an action (Grasp, Turn, Pull) that has the same Actor as the Open-Door action and that has an Actee that is the filler of the Part role of the Actee of the Open-Door action which is subsumed by Handle (i.e., (handle part) is a differentiation of Part onto Handle). For example, if Open-12 was defined as an Open action with role fillers Actor:Sam and Actee:Door-6, then Open-12 would be classified beneath Open-Door by the classifier on the basis of its Actor and Actee values. If an inquiry was issued for the value of Decomposition for Open- 12, the above definition from Open-Door would be inherited, and, if Door-6 had Handle-6 as one of its fillers for Part, the templates would be expanded into a list of three actions, (Grasp-12 Turn-12 Pull-12), each of which had an Actor of Sam and an Actee of Handle-6. Using Classification in Generation Content Determination The input to IDAS is a point in question space, which specifies a basic question, component, user- task, user-expertise model, and discourse in-focus list. The first three members of this tuple are used to pick a content-determination rule, which specifies the information the generated response should communicate. This is done by forming a rule-instance with fillers that specify the basic- question, component, and user-task; classifying this rule-instance into a taxonomy of content-rule classes, and reading off inherited values for vari- ous attributive roles. 
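The content-determination step can thus be pictured as: build a rule-instance from the query, classify it into the content-rule taxonomy, and read off the inherited attributes. The sketch below is ours, not the I1 code; equality tests stand in for subsumption, restriction counting stands in for taxonomy depth, and the entries paraphrase the What-Operations-Rule example used in this section.

    # Illustrative sketch of content-determination by classification.
    CONTENT_RULES = [
        {"name": "what-rule",
         "restrictions": {"question": "what"},
         "rolegroup": ["type"],
         "text_plan_fn": "identify-schema"},
        {"name": "what-operations-rule",
         "restrictions": {"question": "what", "task": "operations"},
         "rolegroup": ["manufacturer", "model-number", "colour"],
         "text_plan_fn": "identify-schema"},
    ]

    def pick_content_rule(query, rules):
        """Return the most specific rule whose restrictions subsume the query."""
        applicable = [r for r in rules
                      if all(query.get(k) == v
                             for k, v in r["restrictions"].items())]
        return max(applicable, key=lambda r: len(r["restrictions"]), default=None)

    query = {"question": "what", "task": "operations", "component": "Computer23"}
    rule = pick_content_rule(query, CONTENT_RULES)
    print(rule["name"], rule["rolegroup"])
    # -> what-operations-rule ['manufacturer', 'model-number', 'colour']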
A (simplified) example of a content-rule class definition is: (define-class what-operat ions-rule :parent content-rule :type defined : prop ( (rule-question .hat) (rule-task operations) (rule-rolegroup (manufacturer model-number colour) ) (rule-funct ion ' (identify-schema :bullet? nil)))) Rule-question and Rule-Task are definitional roles that specify which queries a content rule applies to; What-Operations-Rule is used for "What" questions issued under an Operations task (for any component). Rule-Rolegroup specifies the role fillers of the target component that the response should communicate to the user; What- Operatlons-Rule specifies that the manufac- turer, model-number, and colour of the target component should be communicated to the user. Rule-Functlon specifies a Lisp text-planning func- tion that is called with these role fillers in or- der to generate SPL. Content-rule class defini- tions can also contain attributive roles that spec- ify a human-readable title for the query; followup queries that will be presented as hypertext click- able buttons in the response window; objects to be added to the discourse in-focus list; and a testing function that determines if a query is answerable. Content-determination in IDAS is therefore done entirely by classification and feature inheritance; 266 once the rule-instance has been formed from the input query, the classifier is used to find the most specific content-rule which applies to the rule- instance, and the inheritance mechanism is then used to obtain a specification for the KB informa~ tion that the response should communicate, the text-planning function to be used, and other rele- vant information. IDAS's content-determination system is primar- ily designed to allow human domain experts to rel- atively easily specify the desired contents of short (paragraph or smaller) responses. As such, it is quite different from systems that depend on deeper plan-based reasoning (e.g. [Wahlster et al., 1991; Moore and Paris, 1989]). Authorability is stressed in IDAS because we believe this is the best way to achieve IDAS'S goal of fairly broad, but not neces- sarily deep, domain coverage; short responses are stressed because IDAS's hypertext interface should allow users to dynamically choose the paragraphs they wish to read, i.e., perform their own high- level text-planning [Reiter et al., 1992]. Text Planning Text planning is the only part of the generation process that is not entirely done by classification in IDAS, The job of IDAS'S text-planning system is to produce an SPL expression that communi- cates the information specified by the content- determination system. This involves, in partic- ular: • Determining how many sentences to use, and what information each sentence should com- municate (text structuring). • Generating referring expressions that identify domain entities to the user. • Choosing lexical units (words) to express do- main concepts to the user. Classification is currently used only in the lexical- choice portion of the text-planning process, and even there it only performs part of this task. Text structuring in IDAS is currently done in a fairly trivial way; this could perhaps be im- plemented with classification, but this would not demonstrate anything interesting about the capa- bilities of classification by generation. More so- phisticated text-structuring techniques have been discussed by, among others, Mann and Moore [1981], who used a hill-climbing algorithm based on an explicit preference function. 
We have not to date investigated whether classification could be used to implement this or other such text- structuring algorithms. Referring expressions in IDAS are generated by the algorithm described in [Reiter and Dale, 1992]. This algorithm is most naturally stated iteratively in a conventional programming language; there does not seem to be much point in attempting to re-express it in terms of classification. Lexical choice in IDAS is based on the ideas pre- sented in [Reiter, 1991]. When an entity needs to he lexicalized, it is classified into the main domain taxonomy, and all ancestors of the class that have lexical realisations in the current user-expertise model are retrieved. Classes that are too general to fulfill the system's communicative goal are re- jected, and preference criteria (largely based on lexical preferences recorded in the user-expertise model) are then used to choose between the re- maining lexicalizable ancestors. For example, to lexicalize the action (Activate with role fillers Actor:Sam and Actee:Toggle- Switch-23) under the Skilled user-expertise model, the classifier is called to place this action in the taxonomy. In the current IDAS knowledge base, this action would have have two realisable ancestors that are sufficiently informative to meet an instructional communicative goal, 1 Activate (realisation "activate") and (Activate with role filler Actee:Switch) (realisation "flip"). Prefer- ence criteria would pick the second ancestor, be- cause it is marked as basic-level [Rosch, 1978] in the Skilled user-expertise model. Hence, if "the switch" is a valid referring expression for Toggle- Swltch-23, the entire action will be realised as "Flip the switch". In short, lexical-choice in IDAS use8 classification to produce a set of possible lexicMizations, but other considerations are used to choose the most appropriate member of this set. The lexical-choice system could be made entirely classification-based if it was acceptable to always use the most spe- cific realisable class that subsumed an entity, but ignoring communicative goals and the user's pref- erences in this way can cause inappropriate text to be generated [Reiter, 1991]. In general, it may be the case that an entirely classification-based approach is not appropriate for tasks which require taking into consideration complex pragmatic criteria, such as the user's lex- ical preferences or the current discourse context (classification may still be usefully used to per- form part of these tasks, however, as is the case in IVAS's lexical-choice module). It is not clear to the authors how the user's lexical preferences or the discourse context could even be encoded in a manner that would make them easily accessi- ble to a classifier-based generation algorithm, al- though perhaps this simply means that more re- search needs to be done on this issue. 1The general class Action is an example of an an- cestor class that is too general to meet the communica- tive goal; if the user is simply told "Perform an action on the switch", he will not know that he is supposed to activate the switch. 267 Surface Realisation Surface realisation is performed entirely by clas- sification in IDAS. The SPL input to the surface realisation system is interpreted as an I1 class def- inition, and is classified beneath an ,pper model [Bateman et al., 1990]. The upper model dis- tinguishes, for example, between Relational and Nonrelational propositions, and Animate and Inanimate objects. 
2 A new class is then created whose parent is the desired grammatical unit (typ- ically Complete-Phrase), and which has the SPL class as a filler for the definitional Semantics role. This class is classified, and the realisation of the sentence is obtained by requesting the value of its Realisatlon role (an attributive role). A simplified example of an I1 class that defines a grammatical unit is: (define-class sentence :parent complete-phrase :type defined : prop ((semantics predication) (realisation ( (*reference* realisation subject •self•) (*reference• realisation predicate •self*))) (number (•reference• number subject •self•)) (subject (•template• noun-phrase (semantics = actor semantics •self*))) (predicate ...) ...)) Semantics is a definitional role, so the above definition is for children of Complete-Phrase whose Semantics role is filled by something clas- sifted beneath Predication in the upper model. It states that • the Realisatlon of the class is formed by con- catenating the realisation of the Subject of the class with the realisation of the Predicate of the class; • the Number of the class is the Number of the Subject of the class; • the Subject of the class is obtained by creat- ing a new class beneath Noun-Phrase whose semantics is the Actor of the Semantics of the class; this in essence is a recursive call to realise a semantic constituent. If some specialized types of Sentence need dif- ferent values for Reallsatlon, Number, Subject, 2The IDAS upper model is similar to a subset of the PENMAN upper model. 268 or another attributive role value, this can be spec- ified by creating a child of Sentence that uses II's default inheritance mechanism to selectively override the relevant role fillers. For example, (define-class imperative :parent sentence :type defined :prop ((semantics command) (realisation (•refer ence• real~sation predicate •self•)))) This defines a new class Imperative that ap- plies to Sentences whose Semantics filler is clas- sifted beneath Command in the upper model (Command is a child of Predication). This class inherits the values of the Number and Sub- ject fillers from Sentence, but specifies a new filler for Realisation, which is just the Realisation of the Predicate of the class. In other words, the above class informs the generation system of the grammatical fact that imperative sentences do not contain surface subjects. The classification system places classes beneath their most specific parent in the taxonomy, so to-be-realised classes always in- herit realisation information from the most specific grammatical-unit class that applies to them. The Role of Conflict Resolution In general terms, a classification system can be thought of as supporting a pattern-matching pro- cess, in which the definitional role fillers of a class represent the pattern (e.g. (semantics command) in Imperative), and the attributive roles (e.g., R.ealisation) specify some sort of action. In other words, a classification system is in essence a way of encoding pattern-action rules of the form: ~1 -'+~1 ~2 ---~ ~2 If several classes subsume an input, then clas- sification systems use the attributive roles speci- fied (or inherited by) the most specific subsuming class; in production rule terminology, this means that if several c~i's match an input, only the ~i as- sociated with the most specific matching crl is trig- gered. In other words, classification systems use the conflict resolution principle of always choosing the most specific matching pattern-action rule. 
This conflict-resolution principle is used in dif- ferent ways by different parts of ]DAS. The content-determination system uses it as a prefer- ence mechanism; if several content-determination rules subsume an input query, any of these rules can be used to generate a response, but presum- ably the most appropriate response will be gener- ated by the most specific subsuming rule. The lexical-choice system, in contrast, effectively ig- nores the 'prefer most specific' principle, and in- stead uses its own preference criteria to choose among the lexemes that subsume an entity. The surface-generation system is different yet again, in that it uses the conflict-resolution mechanism to exclude inapplicable grammar rules. If a partic- ular term is classified beneath Imperative, for example, it also must be subsumed by Sentence, but using the Realisation specified in Sentence to realise this term would result in text that is incorrect, not just stylistically inferior. The 'use most specific matching rule' conflict- resolution principle is thus just a tool that can he used by the system designer. In some cases it can be used to implement preferences (as in IDAS's content-determination system); in some cases it can be used to exclude incorrect rules which would cause an error if they were used (as in IDAS's surface-generation system); and in some cases it needs to be overridden by a more appropriate choice mechanism (as in IDAS's lexical choice sys- tem). Classification vs. Other Approaches Perhaps the most popular alternative approaches to generation are unification (especially functional unification) and systemic grammars. As with clas- sification, the unification and systemic approaches can be applied to all phases of the generation pro- cess [McKeown et al., 1990; Patten, 1988]. 3 How- ever, most of the published work on unification and systemic systems deals with surface realisa- tion, so it is easiest to focus on this task when making a comparison with classification systems. Like classification, unification and systemic sys- tems can be thought of as supporting a recursive pattern-matching process. All three frameworks allow grammar rules to be written declaratively. They also all support unrestricted recursion, i.e., they all allow a grammar rule to specify that a constituent of the input should be recursively pro- cessed by the grammar (IDAS does this with II's template mechanism). In particular, this means that all three approaches are Turing-equivalent. There are differences in how patterns and actions are specified in the three formalisms, but it is prob- ably fair to say that all three approaches are suf- ficiently flexible to be able to encode most desir- able grammars. The choice between them must therefore be made on the basis of which is easiest to incorporate into a real NL generation system. 3Although it is unclear whether unification or sys- temic systems can do any better at the text-planning tasks that are difficult for classification systems, such as generating referring expressions. We believe that classification has a significant ad- vantage here because many generation systems al- ready include a classifier to support reasoning on a domain knowledge base; hence, using classifi- cation for generation means the same knowledge representation (KR) system can be used to sup- port both domain and linguistic knowledge. 
Thus, IDAS uses only one KR system -- I1 -- whereas systems such as COMET (unification) [McKeown et al., 1990] and PENMAN (systemic) [Penman Natural Language Group, 1989] use two different KR systems: a classifier-based system for domain knowledge, and a unification or systemic system for grammatical knowledge. Unification Systems The most popular unification formalism for gener- ation up to now has probably been functional uni- fication (FUG) [Kay, 1979]. FUG systems work by searching for patterns (alternations) in the gram- mar that unify with the system's input (i.e., uni- fication is used for pattern-matching); inheriting syntactic (output) feature values from the gram- mar patterns (the actions); and recursively pro- cessing members of the constituent set (the recur- sion). That is, pattern-action rules of the above kind are encoded as something like: v v ... If a unification system is based on a typed feature logic, then its grammar can include classification- like subsumption tests [Elhadad, 1990], and thus be as expressive in specifying patterns as a classi- fication system. An initial formal comparison of unification with classification is given in the Appendix. Perhaps the most important practical differences are: • Classification grammars cannot be used bidi- rectionally, while unification grammars can [Sheiber, 1988]. • Unification systems produce (at least in prin- ciple) all surface forms that agree (unify) with the semantic input; classification systems pro- duce a single surface form output. These differences are in a sense a result of the fact that unification grammars represent general map- pings between semantic and surface forms (and hence can be used bidirectionally, and produce all compatible surface forms), while classification systems generate a single surface form from a se- mantic input. In McDonald's [1983] terminology, classification-based generation systems determin- istically and indelibly make choices about alter- nate surface-form constructs as the choices arise, with no backtracking; 4 unification-based systems, 4McDonald claims, incidentally, that indelible decision-making is more plausible than backtracking from a psycholinguistic perspective. 269 in contrast, produce the set of all syntactically cor- rect surface-forms that are compatible with the semantic input. 5 In practice, all generation systems must possess a 'preference filter' of some kind that chooses a single output surface-form from the set of possi- bilities. In unification approaches, choosing a par- ticular surface form to output tends to be regarded (at least theoretically) as a separate task from gen- erating the set of syntactically and semantically correct surface forms; in classification approaches, in contrast, the process of making choices between possible surface forms is interwoven with the main generation algorithm. Systemic approaches Systemic grammars [Halliday, 1985] are another popular formalism for generation systems. Sys- temic systems vary substantially in the input lan- guage they accept; we will here focus on the NIGEL system [Mann, 1983], since it uses the same in- put language (SPL) as IDAS'S surface realisation system, s Other systemic systems (e.g., [Patten, 1988]) tend to use systemic features as their in- put language (i.e., they don't have an equivalent of NIGEL'S chooser mechanism), which makes com- parisons more difficult. NIGEL works by traversing a network of systems, each with an associated chooser. 
The choosers de- termine features, by performing tests on the se- mantic input. Choosers can be arbitrary Lisp code, which means that NIGEL can in principle use more general 'patterns' in its rules than IDAS can; in practice it is not clear to what extent this ex- tra expressive power is used in NIGEL, since many choosers seem to be based on subsumption tests between semantic components and the system's upper model. In any case, once a set of features has been chosen, these features trigger gates and their associated realisation rules; these rules as- sert information about the output text. From the pattern-matching perspective, choosers and gates provide the patterns ai of rules, while realisation rules specify the actions 13i to be performed on the output text. Like classification systems (but unlike unifica- tion systems), systemic generation systems are, in McDonald's terminology, deterministic and in- delible choice-makers; NmEL makes choices about 50f course these differences are in a sense more theoretical than practical, since one can design a uni- fication system to only return a single surface form instead of a set of surface forms, and one can include backtracking-like mechanisms in a classification-based system. SStrictly speaking, SPL is an input language to PEN- MAN, not NIGEL; we will here ignore the difference be- tween PENMAN and NIGEL. alternative surface-form constructs as they arise during the generation process, and does not back- track. Systemic generation systems are thus prob- ably closer to classification systems than unifica- tion systems are; indeed, in a sense the biggest difference between systemic and classification sys- tems is that systemic systems use a notation and inference system that was developed by the lin- guistic community, while classification systems use a notation and inference system that was devel- oped by the AI community. Other Related Work RSsner [1986] describes a generation system that uses object-oriented techniques. SPL-like input specifications are converted into objects, and then realised by activating their To-Realise methods. RSsner does not use a declarative grammar; his grammar rules are implicitly encoded in his Lisp methods. He also does not use classification as an inference technique (his taxonomy is hand-built). DATR [Evans and Gazdar, 1989] is a system that declaratively represents morphological rules, using a representation that in some ways is similar to I1. In particular, DATR allows default inheritance and supports role-chain-like constructs. DATR does not include a classifier, and also has no equivalent of II's template mechanism for specifying recursion. PSI-KLONE [Brachman and Schmolze, 1985, appendix] is an NL understanding system that makes some use of classification, in particular to map surface cases onto semantic cases. Syntactic forms are classified into an appropriate taxonomy, and by virtue of their position inherit semantic rules that state which semantic cases (e.g., Actee) correspond to which surface cases (e.g., Object). Conclusion In summary, classification can be used to perform much of the necessary processing in natural-language generation, including content- determination, surface-realisation, and part of text-planning. 
Classification-based generation al- lows a single knowledge representation system to be used for both domain and linguistic knowledge; this means that a classification-based generation system can have a significantly simpler overall ar- chitecture than a unification or systemic genera- tion system, and thus be easier to build and main- tain. Acknowledgements The IDAS project is partially funded by UK SERC grant GR/F/36750 and UK DTI grant IED 4/1/1072, and we are grateful to SERC and DTI for their support of this work. We would also like 270 to thank the IDAS industrial collaborators -- Infer- ence Europe, Ltd.; lgacal Instruments, Ltd.; and Racal Researdh Ltd. -- for all the help they have given us in performing this research. Appendix: A Comparison of Classification and Unification FUG is only one of a number of grammar for- malisms based on feature logics. The logic under- lying FUG is relatively simple, but much more ex- pressive logics are now being implemented [Emele and Zajac, 1990; D6rre and Seiffert, 1991; D/Srre and Eisele, 1991]. Here we provide an initial for- mal characterisation of the relation between classi- fication and unification, but abstracting away from the differences between the different unification systems. Crucial to all approaches in unification-based generation (or parsing) is the idea that at every level an input description (i.e. logical form or sim- ilar) 7 is combined with a set of axioms (type spec- ifications, grammar functional descriptions, rules) and the resulting logical expression is then reduced to a normal form that can be used straightfor- wardly to construct the set of models for the com- bined axioms and description. Classification is an appropriate operation to use in normal form construction when the axioms take the form oq ~ fit, with ~ interpreted as logical implication, and where each ai and/~i is a term in a feature logic. If the input description is 'com- plete' with respect to the conditions of these ax- ioms (that is, if 7 ^ ai ~ J- exactly when 7 _C ~i, where _ is subsumption), then it follows that for every model A4: u iff M I= _c u {v} (the relationship is more complex if the gram- mar is reeursive, though the same basic principle holds). The first step of the computation of the models of 7 and the axioms then just needs quick access to {fli17 _Coti}. The classification approach is to have the different ai ordered in a subsump- tion taxonomy. An input description 7 is placed in this taxonomy and the fll corresponding to its ancestors are collected. Input descriptions are 'complete' if every input description is fully specified as regards the condi- tions that will be tested on it. This implies a rigid distinction between 'input' and 'output' informa- tion which, in particular, means that classification will not be able to implement bidirectional gram- mars. If all the axioms are of the above form, input descriptions are complete and conjunctive, and the fli's are conjunctive (as is the case in IDAS) then there will always only be a single model. The above assumption about the form of ax- ioms is clearly very restrictive compared to what is allowed in many modern unification formalisms. In IDAS, the notation is restricted even further by requiring the c~i and /~i to be purely con- junctive. In spite of these restrictions, the sys- tem is still in some respects more expressive than the simpler unification formalisms. 
In Definite Clause Grammars (DCGs) [Pereira and Warren, 1980], for instance, it is not possible to specify al --"/~1 and also c~z --*/~, whilst allowing that (al AO¢2) ~ (~1A~2) (unless aland as are related by subsumption) [Mellish, 19911. The comparison between unification and clas- sification is, unfortunately, made more complex when default inheritance is allowed in the classifi- cation system (as it is in IDAS). Partly, the use of defaults may be viewed formally as simply a mech- anism to make it easier to specify 'complete' in- put descriptions. The extent to which defaults are used in an essential way in IDAS still remains to be investigated. Certainly for the grammar writer the ability to specify defaults is very valuable, and this has been widely acknowledged in grammar frame- works and implementations. References [Bateman et al., 1990] John Bateman, Robert Kasper, Johanna Moore, and Richard Whitney. A general organization of knowledge for nat- ural language processing: the Penman upper model. Technical report, Information Sciences Institute, Marina del Rey, CA 90292, 1990. [Brachman and Schmolze, 1985] Ronald Brachman and James Schmolze. An overview of the KL-ONE knowledge representa- tion system. Cognitive Science, 9:171-216, 1985. [DSrre and Eisele, 1991] Jochen D6rre and An- reas Eisele. A comprehensive unification for- malism, 1991. Deliverable R3.1.B, DYANA - ESPRIT Basic Research Action BR3175. [D6rre and Seiffert, 1991] Jochen D6rre and Roland Seiffert. Sorted feature terms and re- lational dependencies. IWBS Report 153, IBM Deutschland, 1991. [Elhadad, 1990] Michael Elhadad. Types in func- tional unification grammars. In Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics (,4 CL-1990), pages 157-164, 1990. [Emele and Zajac, 1990] Martin Emele and R~mi Zajac. Typed unification grammars. In Pro- ceedings of the 13th International Conference on Computational Linguistics (COLING-1990), volume 3, pages 293-298, 1990. 271 [Evans and Gazdar, 1989] Roger Evans and Ger- ald Gazdar. Inference in DATR. In Proceedings of Fourth Meeting of the European Chapter of the Association for Computational Linguistics (EACL-1989), pages 66-71, 1989. [Grosz el al., 1986] Barbara Grosz, Karen Sparck Jones, and Bonnie Webber, editors. Readings in Natural Language Processing. Morgan Kauf- mann, Los Altos, California, 1986. [Halliday, 1985] M. A. K. Halliday. An Introduc- tion to Functional Grammar. Edward Arnold, London, 1985. [Kasper, 1989] Robert Kasper. A flexible interface for linking applications to Penman's sentence generator. In Proceedings of the 1989 DARPA Speech and Natural Language Workshop, pages 153-158, Philadelphia, 1989. [Kay, 1979] Martin Kay. Functional grammar. In Proceedings of the Fifth Meeting of the Berke- ley Linguistics Society, pages 142-158, Berkeley, CA, 17-19 Febuary 1979. [Mann, 1983] William Mann. An overview of the NIGEL text generation grammar. In Proceed- ings of the ~Ist Annual Meeting of the As- sociation for Computational Linguistics (ACL- 1983), pages 79-84, 1983. [Mann and Moore, 1981] William Mann and James Moore. Computer generation of multi- paragraph English text. American Journal of Computational Linguistics, 7:17-29, 1981. [McDonald, 1983] David McDonald. Description directed control. Computers and Mathematics, 9:111-130, 1983. [McKeown et ai., 1990] Kathleen McKeown, Michael Elhadad, Yumiko Fukumoto, Jong Lim, Christine Lombardi, Jacques Robin, and Frank Smadja. Natural language generation in COMET. 
In Robert Dale, Chris Mellish, and Michael Zock, editors, Current Research in Nat- ural Language Generation, pages 103-139. Aca~ demic Press, London, 1990. [Mellish, 1991] Chris Mellish. Approaches to re- alisation in natural language generation. In E. Klein and F. Veltman, editors, Natural Lan- guage and Speech. Springer-Verlag, 1991. [Moore and Paris, 1989] Johanna Moore and Ce- cile Paris. Planning text for advisory dialogues. In Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics (ACL-1989), pages 203-211, 1989. [Patten, 1988] Terry Patten. Systemic Text Gen- eration as Problem Solving. Cambridge Univer- sity Press, 1988. 272 [Penman Natural Language Group, 1989] Penman Natural Language Group. The Pen- man user guide. Technical report, Information Sciences Institute, Marina del Rey, CA 90292, 1989. [Pereira and Warren, 1980] Fernando Pereira and David Warren. Definite clause grammars for language analysis. Artificial Intelligence, 13:231-278, 1980. [Reiter, 1990] Ehud Reiter. Generating descrip- tions that exploit a user's domain knowledge. In Robert Dale, Chris Mellish, and Michael Zock, editors, Current Research in Natural Language Generation, pages 257-285. Academic Press, London, 1990. [Reiter, 1991] Ehud Reiter. A new model oflexical choice for nouns. Computational Intelligence, 7(4), 1991. [Reiter and Dale, 1992] Ehud Reiter and Robert Dale. A fast algorithm for the generation of re- ferring expressions. In Proceedings of the Four- teenth International Conference on Computa- tional Linguistics (COLING-199~), 1992. [Reiter et al., 1992] Ehud Reiter, Chris Mellish, and John Levine. Automatic generation of on-line documentation in the IDAS project. In Proceedings of the Third Conference on Applied Natural Language Processing (ANLP- 1992), pages 64-71, 1992. [Rosch, 1978] Eleanor Rosch. Principles of cat- egorization. In E. Rosch and B. Lloyd, edi- tors, Cognition and Categorization, pages 27- 48. Lawrence Erlbaum, Hillsdale, N J, 1978. [RSsner, 1986] Dietmar RSsner. FAn System zur Generierung yon deutschen Texten aus seman- tischen Repr~sentationen. PhD thesis, Institut fiir Informatik, University of Stuttgart, 1986. [Sheiber, 1988] Stuart Sheiber. A uniform archi- tecture for parsing and generation. In Pro- ceedings of the 12th International Conference on Computational Linguistics (COLING-88), pages 614-619, 1988. [Touretzky, 1986] David Touretzky. The Mathe- matics of Inheritance Systems. Morgan Kauf- mann, Los Altos, California, 1986. [Wahlster et al., 1991] Wolfgang Wahlster, Elis- abeth Andre, Sore Bandyopadhyay, Winfried Graf, and Thomas Rist. WIP: The coordinated generation of multimodal presentations from a common representation. In Oliverio Stock, John Slack, and Andrew Ortony, editors, Compu- tational Theories of Communication and their Applications. Springer-Verlag, 1991.
CORRECTING ILLEGAL NP OMISSIONS USING LOCAL FOCUS Linda Z. Suri 1 Department of Computer and Information Sciences University of Delaware Newark DE 19716 Internet: [email protected] 1 INTRODUCTION The work described here is in the context of de- veloping a system that will correct the written En- liSh of native users of American Sign Language SL) who are learning English as a second lan- guage. In this paper we focus on one error class that we have found to be particularly prevalent: the illegal omission of NP's. Our previous analysis of the written English of ASL natives has led us to conclude that language transfer (LT) can explain many errors, and should thus be taken advantage of by an instructional sys- tem (Suri, 1991; Suri and McCoy, 1991). We be- lieve that many of the omission errors we have found are among the errors explainable by LT. Lillo-Martin (1991) investigates null argument structures in ASL. She identifies two classes of ASL verbs that allow different types of null argument structures. Plain verbs do not carry morphological markings for subject or object agreement and yet allow null argument structures in some contexts. These structures, she claims, are analogous to the null argument structures found in languages (like Chinese) that allow a null argument if the argument co-specifies the topic of a previous sentence (ttuang, 1984). Such languages are said to be discourse- oriented languages. As it turns out, our writing samples collected from deaf writers contain many instances of omit- ted NP's where those NP's are the topic of a pre- vious sentence and where the verb involved would be a plain verb in ASL. We believe these errors can be explained as a result of the ASL native carry- ing over conventions of (discourse-oriented) ASL to (sentence-oriented) English. If this is the case, then these omissions can be corrected if we track the topic, or, in computa- tional linguistics terms, the local focus, and the actor focus. 2 We propose to do this by develop- ing a modified version of Sidner's focus tracking algorithm (1979, 1983) that includes mechanisms for handling complex sentence types and illegally omitted NP's. 1Thls research was supported in part by NSF Grant ~IRI-9010112. Support was also provided by the Nemours Fotuldation. We thank Gallaudet U~fiversity, the National Technical Institute for the Deaf, the Pennsylvalfia School for the Deaf, the Margaret S. Sterck School, and the Bicultural Center for providing us with writing samples. 2 Grosz, Joshi had Weinstein (1983) use the notion of cen- tering to track something similar to local focus and argue against the use of a separate actor focus. However, we think that the example they use does not argue against a separate actor focus, but illustrates the need for extensions to Sial- her's algorithm to specify how complex sentences should be processed. 273 2 FOCUS TRACKING Our focusing algorithm is based on Sidner's fo- cusing algorithm for tracking local and actor foci (Sidner 1979; Sidner 1983). 3 In each sentence, the actor focus (AF) is identified with the (thematic) agent of the sentence. The Potential Actor Focus List (PAFL) contains all NP's that specify an ani- mate element of the database but are not the agent of the sentence. Tracking local focus is more complex. The first sentence in a text can be said to be about some- thing. 
That something is called the current focus (.CF) of the sentence and can generally be identified via syntactic means, taking into consideration the thematic roles of the elements in the sentence. In addition to the CF, an initial sentence introduces a number of other items (any of which can become the focus of the next sentence). Thus, these items are recorded in a potential focus list (PFL). At any given point in a well-formed text, after the first sentence, the writer has a number of op- tions: • Continue talking about the same thing; in this case, the CF doesn't change. • Talk about something just introduced; in this case, the CF is selected from the previous sen- tence's PFL. • Return to a topic of previous discussion; in this case, that topic must have been the CF of a previous sentence. • Discuss an item previously introduced, but which was not the topic of previous discussion; in this case, that item must have been on the PFL of a previous sentence. The decision (by the reader/hearer/algorithm) as to which of these alternatives was chosen by the speaker is based on the thematic roles (with par- ticular attention to the agent role) held by the anaphora of the current sentence, and whether their co-specification is the CF, a previous CF, or a member of the current PFL or a previous PFL. Confirmation of co-specifications requires inferenc- ing based on general knowledge and semantics. At each sentence in the discourse, the CF and PFL of the previous sentence are stacked for the possibility of subsequent return. 4 When one of these items is returned to, the stacked CF's and PFL's above it are popped, and are thus no longer available for return. 3 Carter.(1987) extended Sichler s work to haaldle in- trasententlal anaphora, but for space reasons we do not dis- cuss these extensions. 4Sidner did not stack PFL's. Our reasons for stacking PFL's are discussed in section 4. 2.1 FILLING IN A MISSING NP We propose extending this algorithm to iden- tify an illegally omitted NP. To do this, we treat the omitted NP as an anaphor which, like Sidner's treatment of full definite NP's and personal pro- nouns, co-specifies an element recorded by the fo- cusing algorithm. This approach is based on the belief that an omitted NP is likely to be the topic of a previous sentence. We define preferences among the focus data structures which are similar to Sid- ner's preferences. More specifically, when we encounter an omit- ted NP that is not the agent, we first try to fill the deleted NP with the CF of the immediately preceding sentence. If syntax, semantics or infer- encing based on general knowledge cause this co- specification to be rejected, we then consider mem- bers of the PFL of the previous sentence as fillers for the deleted NP. If these too are rejected, we con- sider stacked CF's and elements of stacked PFL's, taking into account preferences (yet to be deter- mined) among these elements. When we encounter an omitted agent NP, in a simple sentence or a sentence-initial clause, we first test the AF of the previous sentence as co-specifier, then members of the PAFL, the previous CF, and finally stacked AF's, CF's and PAFL's. To iden- tify a missing agent NP in a non-sentence-initial clause, our algorithm will first test the AF of the previous clause, and then follow the same prefer- ences just given. Further preferences are yet to be determined, including those between the stacked AF, stacked PAFL, and stacked CF. 
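The preference orderings just described can be summarised procedurally. The sketch below is not Suri's implementation; it is a minimal illustration in which the previous sentence's foci and the focus stacks are held in a simple record, and a single predicate, acceptable, stands in for the syntactic, semantic and general-knowledge filtering of candidate co-specifiers.

def resolve_omitted_np(role, foci, acceptable):
    # Sketch of the preference order for finding the co-specifier of an
    # illegally omitted NP.  The 'foci' record and the 'acceptable' filter
    # are illustrative assumptions, not part of Suri's system.
    #   foci['CF'], foci['PFL']   -- current focus and potential focus list
    #                                of the immediately preceding sentence
    #   foci['AF'], foci['PAFL']  -- actor focus and potential actor focus list
    #   foci['stacked_CFs'], foci['stacked_PFLs'],
    #   foci['stacked_AFs'], foci['stacked_PAFLs'] -- the focus stacks
    if role == 'non-agent':
        candidates = ([foci['CF']]                 # 1. CF of the previous sentence
                      + foci['PFL']                # 2. members of the previous PFL
                      + foci['stacked_CFs']        # 3. stacked CFs ...
                      + foci['stacked_PFLs'])      #    ... and stacked PFL members
    else:  # omitted agent in a simple sentence or sentence-initial clause
        candidates = ([foci['AF']]                 # 1. previous AF
                      + foci['PAFL']               # 2. PAFL members
                      + [foci['CF']]               # 3. previous CF
                      + foci['stacked_AFs']        # 4. stacked AFs, CFs and PAFLs
                      + foci['stacked_CFs']
                      + foci['stacked_PAFLs'])
    for candidate in candidates:
        if acceptable(candidate):                  # syntax/semantics/inference test
            return candidate
    return None                                    # no co-specifier found

Preferences among the stacked elements are deliberately left flat here, since the paper states that those finer preferences are still to be determined; a fuller version would also handle an omitted agent in a non-sentence-initial clause by testing the AF of the preceding clause first.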
2.2 COMPUTING THE CF To compute the CF of a sentence without any illegally omitted NP's, we prefer the CF of the last sentence over members of the PFL, and PFL mem- bers over members of the focus stacks. Exceptions to these preferences involve picking a non-agent anaphor co-specifying a PFL member over an agent co-specifying the CF, and preferring a PFL member co-specified by a pronoun to the CF co-specified by a full definite description. To compute the CF of a sentence with an illegally omitted NP, our algorithm treats illegally omitted NP's as anaphora since they (implicitly) co-specify something in the preceding discourse. However, it is important to remember that discourse-oriented languages allow deletions of NP's that are the topic of the discourse. Thus, we prefer a deleted non- agent as the focus, as long as it closely ties to the previous sentence. Therefore, we prefer the co- specifier of the omitted non-agent NP as the (new) CF if it co-specifies either the last CF or a member of the last PFL. If the omitted NP is the thematic agent, we prefer for the new CF to be a pronomi- nal (or, as a second choice, full definite description) non-agent anaphor co-specifying either the last CF or a member of the last PFL (allowing the deleted agent NP to be the AF and keeping the AF and CF different). 5 If no anaphor meets these criteria, then 5As future work, we will explore how to resolve more than one non-agent anaphor in a sentence co-specifying PFL elements. 274 the members of the CF and PFL focus stacks will be considered, testing a co-specifier of the omitted NP before co-specifiers of pronouns and definite de- scriptions at each stack level. 3 EXAMPLE Below, we describe the behavior of the extended algorithm on an example from our collected texts containing both a deleted non-agent and agent. Example: "($1) First, in summer I live at home with my parenls. ($2) I can budget money easily. ($3) I did not spend lot of money at home because al home we have lot of good foods, I ate lot of foods. (S4) While living at college I spend lot of money because_ go out to eat almost everyday. ($5) At home, sometimes my parents gave me some money right away when I need_. " After S1, the AF is I, the CF is I, and the PFL contains SUMMER, HOME, and the LIVE VP. For $2, I is the only anaphor, so it becomes the CF, the PFL contains HONEY and the BUDGET VP, and the focus stack contains I and the previous PFL. $3 is a complex sentence using the conjunction "because." Such sentences are not explicitly han- dled by Sidner's algorithm. Our analysis so far suggests that we should not split this sentence into two 6, and should prefer elements of the main clause as focus candidates. Thus, we take the CF from the first clause, and rank other elements in that clause before elements in the second clause on the PFL. 7 In this case, we have several anaphora: I, money, at home .... The AF remains I. The CF be- comes MONEY since it co-specifies a member of the PFL and since the co-specifier of the last CF is the agent. Ordering the elements of the first clause be- fore the elements in the second results in the PFL containing HOME, the NOT SPEND VP, GOOD FOOD, and the HAVE VP. We stack the CF and the PFL of $2. Note that $4 has a missing agent in the sec- ond clause. To identify the missing agent in a non-sentence-initiM clause, our algorithm will first test the AF of the preceding clause for possible co- specification. 
Because this co-specification would cause no contradiction, the omitted NP is filled with 'T', which is eventually taken as the AF of $4. The CF is computed by first considering the first clause of $4, since the X clause is the pre- ferred clause of an X BECAUSE Y construct. Since "money" co-specifies the CF of $3, and nothing else in the preferred clause co-specifies a member of the PFL, MONEY remains the CF. The PFL contains COLLEGE, the SPEND VP, EVER.Y DAY, the TO EAT VP, and the GO OUT TO EAT VP. We stack the CF and PFL of $3. $5 contains a subordinate clause with a miss- ing non-agent. Our algorithm first considers the 6If we were to split the sentence up, then tile focus would shift away from MONEY when we process the second clause (which contradicts our intuition of what the focus is in this paragraph). 7The appropriateness of placing elements from both clauses in one PFL and ranking them according to clause menlbership will be further investigated. This construct ("X BECAUSE Y") is further discussed in section 4. CF, MONEY, as the co-specifier of the omitted NP; syntax, semantics and general knowledge inferenc- ing do not prevent this co-specification, so it is adopted. MONEY is also chosen as the CF since it is the co-specifier of the omitted NP occurring in the verb complement clause which is the preferred clause in this type of construct. 4 DISCUSSION OF EXTENSIONS One of the major extensions needed in Sidner's algorithm is a mechanism for handling complex sen- tences. Based on a limited analysis of sample texts, we propose computing the CF and PFL of a com- plex sentence based on a classification of sentence types. For instance, for a sentence of the form "X BECAUSE Y" or "BECAUSE Y, X", we prefer the expected focus of the effect clause as CF, and or- der elements of the X clause on the PFL before el- ements of the Y clause. Analogous PFL orderings apply to other sentence types described here. For a sentence of the form "X CONJ Y", where X and Y are sentences, and CONJ is "and", "or", or "but", we prefer the expected focus of the Y clause. For a sentence of the form "IF X (THEN) Y", we prefer the expected focus of the THEN clause, while for "X, IF Y", we prefer the expected focus of the X clause. Further study is needed to determine other preferences and actions (including how to further order elements on the PFL) for these and other sentence types. These preferences will likely de- pend on thematic roles and syntactic criteria (e.g., whether an element occurs in the clause containing the expected CF). The decisions about how these and other exten- sions should proceed have been or will be based on analysis of both standard written English and the written English of deaf students. The algorithm will be developed to match the intuitions of native English speakers as to how focus shifts. A second difference between our algorithm and Sidner's is that we stack the PFL's as well as the CF's. We think that stacking the PFL's may be needed for processing standard English (and not just for our purposes) since focus sometimes re- volves around the theme of one of the clauses of a complex sentence, and later returns to revolve around items of another clause. Further investiga- tion may indicate that we need to add new data structures or enhance existing ones to handle focus shifts related to these and other complex discourse patterns. 
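The clause preferences for complex sentence types amount to a small lookup table. The fragment below is only a schematic restatement of the preferences listed above; the template strings used as keys are an expository device, not a claim about how sentence types would actually be recognised.

# Preferred clause for the expected CF of a complex sentence, as listed above.
# X and Y name the clauses in each template.
PREFERRED_CLAUSE = {
    'X BECAUSE Y':   'X',   # prefer the expected focus of the effect clause
    'BECAUSE Y, X':  'X',
    'X CONJ Y':      'Y',   # CONJ is "and", "or" or "but"
    'IF X (THEN) Y': 'Y',   # prefer the THEN clause
    'X, IF Y':       'X',
}

def order_pfl(form, x_elements, y_elements):
    # Analogous PFL ordering: elements of the preferred clause come first.
    if PREFERRED_CLAUSE.get(form) == 'Y':
        return y_elements + x_elements
    return x_elements + y_elements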
We should note that while we prefer the CF as the co-specifier of an omitted NP, Sidner's recency rule suggests that perhaps we should prefer a mem- ber of the PFL if it is the last constituent of the previous sentence (since a null argument seems sim- ilar to pronominal reference). However, our studies show that a rule analogous to the recency rule does not seem to be needed for resolving the co-specifier of an omitted NP. In addition, Carter (1987) feels the recency rule leads to unreliable predictions for co-specifiers of pronouns. Thus, we do not expect to change our algorithm to reflect the recency rule. (We also believe we will abandon the recency rule for resolving pronouns.) 275 Another task is to specify focus preferences among stacked PFL's and stacked CF's, perhaps using thematic and syntactic information. An important question raised by our analy- sis is how to handle a paragraph-initial, but not discourse-initial, sentence. Do we want to treat it as discourse-initial, or as any other non-discourse- initial sentence? We suggest (based on analysis of samples) that we should treat the sentence as any non-discourse-initial sentence, unless its sentence type matches one of a set of sentence types (which often mark focus movement from one element to a new one). In this latter case, we will treat the sen- tence as discourse-initial by calculating the CF and PFL in the same manner as a discourse-initial sen- tence, but we will retain the focus stacks. We have identified a number of sentence types that should be included in the set of types which trigger the latter treatment; we will explore whether other sen- tence types should be included in this set. 5 CONCLUSIONS We have discussed proposed extensions to Sid- ner's algorithm to track local focus in the pres- ence of illegally omitted NP's, and to use the ex- tended focusing algorithm to identify the intended co-specifiers of omitted NP's. This strategy is rea- sonable since LT may lead a native signer of ASL to use discourse-oriented strategies that allow the omission of an NP that is the topic of a preceding sentence when writing English. REFERENCES David Carter (1987). Interpreting Anaphors in Natural Language Texts. John Wiley and Sons, New York. Barbara J. Grosz, Aravind K. Joshi and Scott We- instein (1983). Providing a unified account of definite noun phrases in discourse. In Proceed- ings of the 21st Annual Meeting of the Associa- tion for Computational Linguistics, 44-50. C.-T. James Huang (1984). On the distribution and reference of empty pronouns. Linguistic In- quiry, 15(4):531-574. Diane C. Lillo-Martin (1991). Universal Grammar and American Sign Language. Kluwer Academic Publishers, Boston. Candace L. Sidner (1979). Towards a Computa- tional Theory of Definite Anaphora Comprehen- sion in English Discourse. Ph.D. thesis, M.I.T., Cambridge, MA. Candace L. Sidner (1983). Focusing in the com- prehension of definite anaphora. In Robert C. Berwick and Michael Brady, eds., Computational Models of Discourse, chapter 5,267-330. M.I.T. Press, Cambridge, MA. Linda Z. Suri and Kathleen F. McCoy (1991). Language transfer in deaf writing: A correction methodology for an instructional system. TR- 91-20, Dept. of CIS, University of Delaware. Linda Z. Suri (1991). Language transfer: A foun- dation for correcting the written English of ASL signers. TR-91-19, Dept. of CIS, University of Delaware.
SOME PROBLEMATIC CASES OF VP ELLIPSIS Daniel Hardt Department of Computer and Information Science University of Pennsylvania Philadelphia, PA 19104 Internet: hardt~linc.cis.upenn.edu INTRODUCTION It has been widely assumed that VP ellipsis is gov- erned by an identity condition: the elided VP is in- terpreted as an identical copy of another expression in surrounding discourse. For example, Sag (76) imposes an identity condition on Logical Form rep- resentations of VP's. A basic feature of this ac- count is the requirement that a syntactic VP be available as the antecedent. This requirement is re- flected in most subsequent accounts as well. In this paper I examine three cases of VP ellipsis in which the antecedent cannot be identified with any VP. These cases, which are illustrated using naturally- occurring examples, present a fundamental problem for any of the standard approaches. I will argue that they receive a natural treatment in the system I have developed, in which VP ellipsis is treated by storing VP meanings in a discourse model. I will address the following three problems: • Combined Antecedents: The antecedent may be a combination of more than one previous property. • Passive Antecedents: the antecedent in a passive clause may not be associated with any VP, but, rather, the property associated with the active counterpart of that clause. • NP Antecedents: the antecedent may be a prop- erty associated with an NP. In what follows, I sketch my general approach to VP ellipsis, after which I show how each of the above phenomena can be treated in this approach. BACKGROUND VP ellipsis, I suggest, is to be explained along the lines of familiar accounts of pronominal anaphora (e.g., Kamp 80, Heim 81). A discourse model is posited, containing various semantic objects, includ- ing (among other things) entities and properties that have been evoked in preceding discourse. Typ- ically, entities are evoked by NP's, and properties by VP's. The interpretation of a pronoun involves a selection among the entities stored in the discourse model. Similarly, the interpretation of an elliptical VP involves a selection among the properties stored 276 in the discourse model. 1 I have described an imple- mentation along these lines in Hardt 91, based on some extensions to the Incremental Interpretation System (Pereira and Pollack 91). There are two rules governing VP ellipsis: one allowing the introduction of properties into the dis- course model, and another allowing the recovery of properties from the discourse model. These two rules are given below. In general, I assume the form of grammar in Pereira and Pollack 91, in which all semantic rules take the input and output discourse models as arguments. That is, all semantic rules define relations on discourse models, or "file change potentials", in Heim's terms. The (simplified) rule for recovering a property from the discourse model is: AUX =~ P where P e DMi,,. That is, an auxiliary verb is replaced by some property P stored in the input discourse model. Secondly, properties are introduced into the dis- course model by the following rule: Upon encountering a property-denoting seman- tic object of the form: P(-, al) that is, a predicate with the first argument slot empty, we have: DMout = DMin U {P(-, at)} This indicates that the property is added to the output discourse model. Typically, the property- denoting expression is associated with a VP, al- though other types of expressions can also introduce properties into the discourse model. 
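To make the two rules concrete, the sketch below stores properties in a discourse model as they are encountered and interprets a bare auxiliary by selecting one of them. The property encoding and the default selection heuristic are assumptions made for this illustration; the actual implementation described in Hardt (91) is built on extensions to the Incremental Interpretation system of Pereira and Pollack (91), in which every rule also takes input and output discourse models as arguments.

class DiscourseModel:
    # Minimal sketch of the two rules above.  Properties of the form P(_, a1)
    # are recorded as they are encountered; a bare auxiliary (elided VP) is
    # interpreted by selecting a stored property.  The tuple encoding of
    # properties is an assumption made for this sketch.

    def __init__(self):
        self.properties = []   # properties evoked so far, typically by VPs
        self.entities = []     # entities evoked so far, typically by NPs

    def introduce_property(self, prop):
        # DM_out = DM_in U {P(_, a1)}
        self.properties.append(prop)

    def resolve_aux(self, select=None):
        # AUX => P, where P is some property in the input discourse model.
        if not self.properties:
            return None
        choose = select or (lambda props: props[-1])   # e.g. most recent property
        return choose(self.properties)

# Toy example (invented): "John read the paper.  Bill did too."
dm = DiscourseModel()
dm.introduce_property(('read', '_', 'the paper'))   # P(_, a1) from the first VP
interpretation = dm.resolve_aux()                   # the property picked up by "did"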
I have argued elsewhere (Hardt 91, 91a) that such a system has certain important advantages over alternative approaches, such as those of Sag (76) and Williams (77). 2 1To be precise, it is not properties that are stored as VPE antecedents, but relations involving an input and output discourse context as well as a property. 2The DRT-based account of Klein (87) essentially du- In what follows, I will briefly examine the phe- nomena listed above, which present fundamental problems for all accounts of VP ellipsis of which I am aware a. For each problem, I will suggest that the current approach provides a solution. COMBINED ANTECEDENTS There are cases of VP ellipsis in which the an- tecedent is combined from two or more separate VP's. This presents a problem for most accounts of VP ellipsis, since there is no syntactic object con- sisting of the combination of two separate VP's. If antecedents are stored in the discourse model, as I am suggesting, this is not surprising. For example, it is well known that combinations of entities can be- come the antecedent for a plural pronoun. Consider the following example: After the symmetry between left-handed particles and right-handed anti- particles was broken by the kaons in the 1960s, a new symme- try was introduced which everybody swears is unbreakable. This is between left-handed par- ticles moving forwards in time, and right- handed anti-particles moving backwards in time (none do, in any practical sense, but that does not worry theorists too much). From: The Economist, ~ August 1990, p.69. Bonnie Webber, p.c. The meaning of the elided VP ("none do") is, I take it, "none do move forwards or move back- . wards in time". So the antecedent must consists of a combination of properties associated with two VP's: "moving forwards in time" and "moving backwards in time". Such an example indicates the necessity for a rule allowing the set of properties in the discourse model to be expanded, as follows: {P...Q...} :~ {P...Q...[P OP Q]} That is, if the discourse model contains two properties P and Q, it may also contain the property resulting from a combination of P and Q by some operator (I assume that the operators include AND and OR). Another example is the following: So I say to the conspiracy fans: leave him alone. Leave us alone. But they won't. From: The Welcomat, 5 Feb 92, p.25 Here the meaning of the elliptical VP is: "they won't leave him alone or leave us alone". plicates the Sag/Williams approach in DRT. Of partic- ulax relevance here is Klein's requirement that the an- tecedent be a DRT-representation of a syntactic VP. 3The recent account of Dadrymple, Shieber and Pereira (91) does treat the "Passive Antecedent" prob- lem. However, no treatment of the "Combined An- tecedent" or "NP Antecedent" problems is given. 277 This phenomenon has been noted in the liter- ature, in particular by Webber (?8), in which the following examples were given: I can walk, and I can chew gum. Gerry can too, but not at the same time. Wendy is eager to sail around the world and Bruce is eager to climb KiHmanjaro, but neither of them can because money is too tight. By the rule given above, this example could be given the interpretation "neither of them can sail around the world or climb Kilimanjaro". It is clear that the combining operation is highly constrained. In all the examples mentioned, either P and Q have the same subject, or the subject of the elliptical VP refers to the two subjects of P and Q. 
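The expansion rule can be sketched as closing the stored property set under pairwise combination. The encoding below is again an expository assumption, and the function is deliberately unconstrained, whereas the text immediately above notes that the combining operation is in fact highly constrained (in the attested examples P and Q share a subject, or the subject of the elliptical VP refers to both subjects).

from itertools import combinations

def expand_properties(properties, operators=('AND', 'OR')):
    # {P ... Q ...} => {P ... Q ... [P OP Q]}: add combined properties built
    # from each pair already in the discourse model (one level of combination).
    expanded = list(properties)
    for p, q in combinations(properties, 2):
        for op in operators:
            expanded.append((op, p, q))   # the combined property [P OP Q]
    return expanded

# "moving forwards in time" and "moving backwards in time": the elided
# "none do" can then pick up the OR-combination of the two properties.
props = ['move-forwards-in-time', 'move-backwards-in-time']
combined = expand_properties(props)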
In future work, I will attempt to formulate con- straints on this operation. PASSIVE ANTECEDENTS The next problem is illustrated by the following example, cited by Dalrymple, Shieber and Pereira (91): A lot of this material can be presented in a fairly informal and accessible fashion, and often I do. From: Noam Chow_sky on the Generative En- terprise, Foris Publications, Dordrecht. 1982. The antecedent for the elliptical VP is "present a lot of this material in a fairly informal and acces- sible fashion". This is not associated with a VP, al- though the active counterpart of the sentence would contain such a VP. This is not surprising from a se- mantic point of view, since it is traditionally held that a 'passive transformation' preserves semantic equivalence. Another example of this is following: Business has to be developed and de- fended differently than we have in the past. From: NPR interview, 24 May 91 The most straightforward treatment of such phenomena in the current framework is to assume that the syntactic derivation of a passive antecedent such as "this material can be presented" corre- sponds to a semantic object present(_, this material) More generally, for a syntactic expression SUBJ be VP+en the corresponding semantic object is VP'(-, SUB:V) That is, the denotation of the "surface subject" becomes the second argument of the VP-denotation. This semantic object, then, satisfies the condition on the rule for introducing properties given above. Thus, under such a treatment of the passive, these examples are accommodated in the present system without further stipulations. NP ANTECEDENTS In many casgs~ the antecedent property is intro- duced by a NP rather than a VP. This would be difficult to explain for a syntactic or logical form theory. From a semantic point of view, it is not sur- prising, since many NP's contain a common noun which is standardly analyzed semantically as denot- ing a property. Consider the following (naturally occurring) example: We should suggest to her that she officially appoint us as a committee and invite fac- ulty participation/input. They won't, of course,... From: email message. (Bonnie Webber, p.c.) In this example, the meaning of the elided VP is '%hey won't participate". The source is the NP "faculty participation". Another example is the following: [Many Chicago-area cabdrivers] say their business is foundering because the riders they depend on - business people, downtown work- ers and the elderly - are opting for the bus and the elevated train, or are on the unemployment line. Meanwhile, they sense a drop in visitors to the city. Those who do, they say, are not taking cabs. From: Chicago Tribune front page, ~/6/92. Gregory Ward, p.c. Here, the meaning of the elided VP is %hose who do visit", where the source is the NP "visitors". In the current framework, such examples could be treated as follows. Assume, following Chierchia (84), that there is a class of nouns that are semanti- cally correlated with properties. For any such noun, the associated property can be added to the dis- course model, just as is done for verbs. CONCLUSIONS The cases investigated constitute strong evidence that VP ellipsis must be explained at a seman- tic/discourse level. I have argued that the examples can be dealt with in the system I have developed. In future work, I will formulate constraints on the operations described here. ACKNOWLEDGEMENTS Thanks to Aravind Joshi, Shalom Lappin, Gregory Ward, and Bonnie Webber. 
This work was sup- ported by the following grants: ARO DAAL 03- 89-C-0031, DARPA N00014-90-J-1863, NSF IRI 90- 16592, and Ben Franklin 91S.3078C-1. 278 REFERENCES Gennaro Chierchia. Formal Semantics and the Grammar of Predication. Linguistic Inquiry, Vol. 16, no. 3. Summer 1984. Mary Dalrymple, Stuart Shieber and Fernando Pereira. Ellipsis and Higher-Order Unification. Lin- guistics and Philosophy. Vol. 14, no. 4, August 1991. Daniel Hardt. A Discourse Model Account of VP Ellipsis. Proceedings AAAI Symposium on Dis- course Structure in Natural Language Understand- ing and Generation. Asilomar, CA., November 1991. Daniel Hardt. Towards a Discourse Model Ac- count of VP Ellipsis. Proceedings ESCOL 1991. Baltimore, MD. Irene Heim. The Semantics of Definite and In- definite Noun Phrases. Ph.D. thesis, University of Massachusetts-Amherst. 1981. Hans Kamp. A Theory of Truth and Semantic Representation. In Groenendijk, J, Janssen, T.M.V. and Stokhof, M. (eds.) Formal Methods in the Study of Language, Volume 136, pp. 277-322. 1980. Ewan Klein. VP Ellipsis in DR Theory. In J. Groenendijk, D. de Jongh and M. Stokhof, eds. Studies in Discourse Representation Theory and the Theory of Generalized Quantifiers, Foris Publica- tions. Dordrecht, The Netherlands. 1987. Fernando Pereira and Martha Pollack. Incre- mental Interpretation. Artificial Intelligence. Vol. 50. no. 1, pp. 37-82. June 1991. Ivan A. Sag. Deletion and Logical Form. Ph.D. thesis, MIT. 1976. Bonnie Lynn Webber. A Formal Approach to Discourse Anaphora. Ph.D. thesis, Harvard Univer- sity. 1978. Edwin Williams. Discourse and Logical Form. Linguistic Inquiry, 8(1):101-139. 1977.
UNDERSTANDING REPETITION IN NATURAL LANGUAGE INSTRUCTIONS - THE SEMANTICS OF EXTENT Sheila Rock Department of Artificial Intelligence,Edinburgh University* 80 South Bridge, Edinburgh EH1 1HN, Scotland, United Kingdom sheilaraisb.ed.ac.uk Introduction Natural language instructions, though prevalent in many spheres of communication, have only recently begun to receive attention within computational linguistics[5]. Instructions are often accompanied by language intended to signal repetition of the ac- tion that they instruct. In order to develop a sys- tem that is able to understand instructions, with the goal of executing them, it is necessary to inves- tigate what is meant by various types of repetition, and the different ways in which repetition can be expressed. We focus on sentences that are instructing that some action is to be performed and that this action is to be performed more than once 1. There are two aspects to consider - scope (what part of the action that is instructed in the dialogue is to be repeated) and extent (how much repetition is to be done). This is illustrated by examples (1) and (2). Place a chunk of rhubarb into each tart. (1) Continue to layer in this way until all the (2) fruit is used. The repetition in (1) has scope on place a chunk of rhubarb (into a tart) and extent across all tarts. (2) has scope over layer in this way and extent until all the fruit used. Within this framework of scope and extent that I have described only informally, I discuss the issue of extent in more detail s . Karlin [3], presents a semantic analysis of verbal modifiers in the domain of cooking tasks. Much of this is pertinent to an examination of extent, in par- tieular the relation of different modifiers to the as- peetual category of an event (according to Moens & Steedman [4]). This has formed an important start- ing point for my work in understanding instructions for repetition. However, there are aspects where a different approach to Karlin's is required, and some of these are discussed in the rest of this paper. Semantics of verbal modifiers In analysing the semantics of verbal modifiers, Karlin[3] identifies three kinds of modifiers, which are themselves divided further. The primary cate- gorisations are *Thanks to Chris Mellish, Robert Dale and Graeme Ritchie for discussion about the ideas in this paper. 1 This paper deals only with instructions, and uses the words sentence and instruction interchangeably. 2A central theme of my thesis is that both scope and extent must be accounted for in a full treatment of repetition, but a discussion of scope is outwith the scope of this paper. 279 1 The number of repetitions of an action. 2 The duration of an action. 3 The speed of an action. It is clear that Karlin's first two primary cate- gories describe modifiers that are concerned with the repetition of an action 3, and these are exam- ined in detail in the next sections. First, though, it is useful to consider that with any action, we have a time interval, during which the action is to be performed - once or more than once. We can then characterise the extent of repetition in terms of this time interval. Modifiers of Karlin's category 2 tell us how long the time interval is, while modifiers of category 1 tell us how to carve up the time interval. One instruction may give information for both cat- egories, but this usually is for two different actions, such as Roast for 45 minutes, basting twice. 
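One way to keep the two kinds of information apart is to record them as separate fields of an extent description, as in the sketch below. The field names and the decision to give each action its own extent are assumptions made for this illustration, not a representation proposed in the paper.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Extent:
    # How long the time interval is (Karlin's category 2) and how the
    # interval is carved up (category 1).  Values are plain strings here.
    duration: Optional[str] = None   # e.g. "45 minutes", "until the water boils"
    carving: Optional[str] = None    # e.g. "twice", "occasionally", "every 5 minutes"

# "Roast for 45 minutes, basting twice": one instruction, two actions,
# each with its own extent.
roast = ('roast', Extent(duration='45 minutes'))
baste = ('baste', Extent(carving='twice'))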
(3) Number of repetitions - carving the interval In this category, Karlin enumerates classes of mod- ifier as follows: • cardinal count adverbials - turn the fish twice • frequency adverbials - turn the fish occasionally • plural objects - turn the pieces offish In the discussion of frequency adverbials, Karlin describes frequency as a continuous scale with grad- able terms, such as occasionally, often. This class should include explicit frequency in time units, as in every 5 minutes. Duration of an action - delimiting the interval Here, Karlin enumerates the following kinds of modifier: • explicit duration in time intervals - fry the fish for 10 minutes • duration given by gradable terms - fry the fish brie/Ty • duration co-extensive with the duration of an- other action - continue to fry the millet, stirring, until the water boils • duration characterized by a st'~te change - fry the fish until it is opaque • disjuncts of explicit durations and state changes - fry the fish for 5 minutes or until it becomes opaque 3I will not consider the third, which contributes to "quality" of execution of an action, and does not pertain to extent of repetition. In this category, Karlin does distinguish be- tween "explicit duration" and "duration in grad- able terms", whereas in the 'Trequency adverbials" classification, there are not seperate classes for vague and explicit frequency (say turn the fish ev- ery 5 minutes and turn the fish often). To be more consistent, there should be one class within the cat- egory "number of repetitions of an action" that contains frequency adverbials 4, and only one class within the cat~gory "duration of an action" that contains duration in terms of time 5. In both classes there should be the possibility of being explicit or vague. It is also preferable to call Karlin's second category "duration of repetition of an action". The name "duration of an action" conflates the concept of the basic action and its repetition. The sepa- ration is pertinent to the view that repetition has scope and extent. Karlin analyses the remaining three classes in cat- egory 2 explicitly in the context of cooking tasks. In particular, the analysis is related to the view that all processes in the cooking domain must have culminations. The validity of this approach is dis- cussed in the next section. However, before doing that we examine Karlin's final class, "disjuncts of explicit durations and state changes". This is a class of instructions found mainly in the cooking domain. The example used by Karlin is (4). Steam for g minutes or until the mussels (4) open. Karlin asserts that 'the meaning of sentences in this category is not the same as that of logical disjunction'[3, pg 64], and claims that the mean- ing of the disjunction is that 'the state change (the mussels are open) is to be used to determine the duration of the action (2 minutes)' [ibid] (my parentheses) s . I agree that the meaning is not simply that of log- ical disjunction, but we need to examine the issue further. Data that I have collected gives evidence that the use of the or is not significant. There are many examples where a recipe book will give the same instruction, both with and without it. For example, ... 
at least 10 minutes or until the flour is well browned [2, pg 120] (5)

Bake for about 2 hours, until the rabbit and lentils are tender [2, pg 119] (6)

Bake for 45 minutes or until the rabbit is tender [2, pg 118] (7)

In all of these, we have an instruction describing one of the following scenarios7:

Do some action until an expected state change occurs. This should take the duration specified. (8)

Do some action for a specified duration. If the expected state change does not occur during this time, then it is likely that something has gone wrong. (9)

What is really being given is a way to decide when to stop the action, and the use of two clauses provides a way of deciding whether the stop state is successful or a failure. For success, if the state change has occurred, then we will expect that the duration has also passed8. If the duration has passed but the state change has not occurred, or if the state change has occurred but the duration has not passed, we still reach the stop state, but in the failed mode. We then have disjunction for stopping (we stop if either the duration or the state change is true) but conjunction for success (stop and a normal outcome is only true if both the clauses are true). We note that often domain knowledge will allow the hearer to determine whether the duration is given as a minimum or maximum time, and what the effect of failure is. The analysis presented here does not take the use of domain knowledge into account, to give a more general analysis.

From the point of view of repetition, what we are given is a stopping condition that is coded in terms of two alternatives. Using an informal notation, what is being expressed with and without or respectively are the following, which are equivalent:

should-stop(action, t) ← (difference(start, t, x), x ≥ duration) ∨ (state(q, t))

should-stop(action, t) ← (difference(start, t, x), x ≥ duration)
∧
should-stop(action, t) ← (state(q, t))

Thus (7) and (6) can be represented as

should-stop(bake, t) ← (difference(start, t, x), x ≥ 45-minutes) ∨ (tender(rabbit, t))

should-stop(bake, t) ← (difference(start, t, x), x ≥ about-2-hours)
∧
should-stop(bake, t) ← (tender(rabbit-and-lentils, t))

Sometimes, the order of the two modifiers is different9, indicating that the positioning of the clauses is not important.

... until the meat is tender, about 45 minutes [2, pg 119] (10)

... until the meat is meltingly tender - about 30 minutes [2, pg 119] (11)

Karlin proposes that the duration modifier is only an approximation, and that it is the state change modifier that determines the truth of the sentence10. Most durations, however, in the domain of cooking tasks, are approximations. Deciding whether a state change has been reached is also approximate. In a domain where durations and evidence of state change are less approximate (say in chemistry), it is not clear that it is always one of the clauses that is performing the role of establishing the truth of the sentence.

4 This is as Karlin's classification.
5 This is different from Karlin's classification.
6 Karlin sees these as metalinguistic disjunction, which I believe is similar to part of my view.
7 I make no claims about exactly which of these scenarios is being described.
8 This in fact seems closer to logical conjunction than logical disjunction.
9 The exchanged order is usually used without the or.
10 The terms left disjunct and right disjunct are used by Karlin, but in sentences like (10) and (11) these are not helpful indicators.
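Read procedurally, the two equivalent formulations above say: stop as soon as either clause holds, and count the stop as normal only when both hold. The function below is a minimal executable rendering of that reading, with invented names; it ignores the domain knowledge that would tell a hearer whether the duration is a minimum or a maximum.

def check_stop(elapsed, duration, state_reached):
    # Stopping condition for "do A for DURATION or until STATE":
    # disjunction for stopping, conjunction for a successful outcome.
    duration_passed = elapsed >= duration
    should_stop = duration_passed or state_reached
    if not should_stop:
        return False, None
    outcome = 'success' if (duration_passed and state_reached) else 'failure'
    return True, outcome

# "Bake for 45 minutes or until the rabbit is tender"
check_stop(elapsed=45, duration=45, state_reached=True)    # (True, 'success')
check_stop(elapsed=45, duration=45, state_reached=False)   # (True, 'failure')
check_stop(elapsed=20, duration=45, state_reached=True)    # (True, 'failure')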
Aspectual category and verbal modifiers Karlin's discussion is given in the context of the aspectual category of an event (according to Moens & Steedman [4]). This is useful as it gives a way of extracting semantic information. Karlin claims that points, culminations and cul- minated processes (but not process type events) can have a number of repetitions associated with them (category 1). An expression whose aspectual type is a process or culminated process can co-occur with a duration modifier (category 2). This second claim requires closer examination. First, Moens & Steedman say that 'culminated processes ... (do not combine readily) with a for- adverbial'. Yet for-adverbials are one of the classes of duration modifier ennumerated by Karlin. We look at two of the examples presented by Karlin. Stir for I minute. (12) Saute over high heat until moisture is evaP-(13) orated. The expressions in both of these (without their modifiers - that is Stir and Saute over high heat) are processes, but not culminated processes. An essential part of a culmination is that there is a consequent state [4, pg 16]. None of the exam- ples Karlin uses has a culminated process as the aspectual type of the action expressed. (13) could be seen as culminated processes if viewed together with the duration modifier (in other words, if it already co-occurs with a duration modifier), while (12) is a process, even with the modifier. Thus, the view of Moens & Steedman holds and is in fact use- ful in extracting semantic information. An until- clause signals a culmination, thus making a process into a culminated process. A for-adverbial does not change the aspectual type of a process. We now look at the assertion that 'Each process in the cooking domain must have a culmination ...' [3, pg 62]. This is accompanied by a claim that a verb may contain inherent information about the endpoint of the action it describes, as in Chop the onion. (14) which, according to Karlin, describes a culminated process whose endpoint is defined by the state of the onion. This seems quite feasible, even if it does require that some world knowledge is required. However, if we consider instead the example Stir the soup. (15) this does not describe any culmination, as there is no consequent state. Yet it is a process, as it may be extended in time. 281 Karlin's justification for the above assertion is that cooking tasks involve a sequence of steps with the goal of bringing about a state change. There are also instructions for preventing a state change though, for example stirring (to prevent sticking). We could argue that stirring brings us to the changed state "stirred". But then, if we look back to the Moens & Steedman analysis, where he climbed has no culmination, we could claim that this has the changed state "has climbed". This is not the spirit intended by the Moens & Steedman analysis, and it is more useful to see some actions in cooking as not having culminations. We can then examine what kinds of modifiers change aspectual category and in what manner, as presented above for the for- and until-adverbials. Conclusion The semantics of repetition in instructions is more clearly understood if we view repetition as having scope and extent. Within this framework, Karlin's work on the semantics of verbal modifiers provides a useful starting point. In particular, relating this to the aspectuai category of an instruction according to Moens & Steedman [4] is important. 
We can make use of Moens & Steedman's schema for the way aspect changes when modifiers are added to expressions, to extract semantic information. This will allow a fuller treatment of extent, for use in the development of a semantics for repetition that treats both scope and extent more completely. References [1] Ball, C. N. "On the Interpretation of Descrip- tive and Metaiinguistic Disjunction", unpub- lished paper, University of Pennsylvania, Au- gust 1985. [2] Floyd, Keith Floyd on Britain and Ireland, BBC Books, London, 1988. [3] Karlin, Robin "Defining the semantics of ver- bal modifiers in the domain of cooking tasks." Proc. 26th Annual Meeting of the Association for Computational Linguistics, Buffalo NY, USA, June 1988, pp. 61-67. [4] Moens, Marc & Steedman, Mark "Temporal Ontology and Temporal Reference." Computa- tional Linguistics 14:2, June i988, pp. 15-28 [5] Webber, Bonnie. Course description for In- structions as Discourse, 3rd European Sum- mer School in Language, Logic & Information, Saarbrucken, August 1991.
ON THE INTONATION OF MONO- AND DI-SYLLABIC WORDS WITHIN THE DISCOURSE FRAMEWORK OF CONVERSATIONAL GAMES Jacqueline C. Kowtko* Human Communication Research Centre University of Edinburgh 2 Buccleuch Place Edinburgh EH8 9LW SCOTLAND Internet: [email protected] Abstract Recent studies on the analysis of intonational func- tion examine a ran~ of materials from cue phrases in monologue (Litman and Hirschberg, 1990) and dialogue (Hirschberg and Litman, 1987; Hockey, 1991) to longer utterances in both monologue and dialogue (McLemore, 1991). Results match spe- cific intonational tunes to certain discourse func- tions which are more or less well defined. Al- though these results make a convincing case that intonation does signal a change in discourse struc- ture, the specification of discourse function re- mains vague. A suitable taxonomy is needed to fine-tune the relationship between intonation and discourse function. A recent analysis of dialogue (Kowtko et al., 1991) provides a framework of con- versational games which allows more fine-grained examination of prosodic function. The current pa- per introduces an intonational analysis of mono- and di-syllabic words based upon such a frame- work and compares results in progress with previ- ous work on intonation. Introduction Recent approaches to the analysis of intonational function within dialogue include an examination of the tunes carried by single-word cue phrases (e.g. now (Hirschberg and Litman, 1987), okay (Hockey, 1991), and others (Litman and Hirschberg, 1990)) across different discourse situations. The litera- ture also includes a more sweeping approach to- ward classifying phrase-final tunes which presents broadly generalized discourse functions for each of three types of intonational tune: phrase-final r/se, level, and fall (McLemore, 1991). Since there is currently no workable grammar of discourse, these studies devise their own relevant discourse cate- gories. Hockey (1991, p. 1) reflects upon the prob- lem, with reference to cue phrases. She states that *AUK Overseas Research Student Award provides partial support. Thanks to my advisors Stephen Isaxd and D. Robert Ladd for comments on drafts. cue phrases ...convey information about the structure of a discourse rather than contributing to the semantic content of a sentence .... Context and prosody are major factors contributing to differences in interpretation among various instances of a cue phrase. In order to investi- gate the connection between prosodic features and uses of a cue phrase, uses must be iden- tified. The above is partly a response to Himchberg and Litman (1987; Litman and Hirschberg, 1990) who limit their description to a binary discourse/sentential distinction. Litman and Hirschberg (1990) leave the analysis of cue phrase function to the interpretation of various specific discourse approaches and instead focus on validat- ing their (1987) prosodic model of cue phrase use with additional data from monologue. The model specifies that a cue phrase in discourse use will oc- cur either alone in a phrase (with unspecified tune) or initially in a larger phrase (deaccented or with a low tone). Thus, Litman and Hirschberg leave open the question of how their prosodic model could further specify discourse function. McLemore (1991) approaches discourse as structured by topics and interruptions. Her data includes announcements given at Texas sorority meetings and conversation between members. 
She finds that phrase-final tunes indicate certain gen- eral functions: rising tune connects, level tune con- tinues, and falling tune segments. The specifics about how each of these tunes operates depends upon the context. For instance, phrase-final rise which indicates non-finality or connection mani- fests itself as turn-holding in one context, phrase subordination in another, and intersentential co- hesion in yet another context. Likewise, the other tunes perform slight variations on the function of continue and segment according to context, which is left up to the reader to determine. Hockey (1991) admits to settling upon an ar- bitrary discourse classification and letting her data 282 speak for itself, after attempting to adopt a sys- tem of analysis based upon a somewhat similar set of speech data 1. She focuses on task oriented di- alogue and attempts to specify discourse function of the cue phrase okay. She presents her results in terms of intonational contours and their cor- responding discourse categories, finding that they correlate with McLemore's (1991) results: 89% of rising contour occurs where the speaker was pass- ing up a turn and letting the other person con- tinue; 86% of level contour serves to continue an instruction; 88% of falling contour marks the end of a subtask. But her categorization of discourse is still weak. Admittedly, there are a limited number of in- tonational tunes (low rise, high rise, level, fall, etc.). But limitation in intonational tune should not force a limitation in discourse category. De- tailed understanding of intonational function is necessarily linked to a more robust view of dis- course structure. These previous studies provide good intonational analysis but within weak dis- course structures. Conversational Games in Dialogue The analysis offered by Kowtko, Isard, and Do- herty (1991) provides an independently defined taxonomy of discourse structure which allows a closer examination of how intonation signals speaker intention within task oriented dialogue. In the analysis, linguistic exchanges termed conver- sational games (from a tradition of literature orig- inating in Power (1974)) embody the initiation- response-feedback patterns which relate to under- lying non-linguistic goals. It is through the frame- work of games and their components, conversa- tional moves, that the intonation of mono- and di-syllabic words can be compared with their dis- course function, as intended by the speaker. A conversational game is defined as consist- ing of the turns necessary to accomplish a con- versational goal or sub-goal. The initiating utter- ance determines which game is being played and is similar to the core speech act in Traum and Allen (1991). The ensuing response and feedback moves function as presentation and acceptance phases, in the terms of Clark and Schaefer (1987). Implicit, mutually agreed rules dictate the shape of a game and what constitutes an acceptable move within a game. These rules embody procedural, as opposed to declarative, knowledge which speakers employ in everyday conversation. ~Hockey had hoped to map discourse categories of okay based upon data collected from conversation at a library reference desk to that arising from a task in which one person described a design for another person to make out of paper clips. 
283 The repertoire of games and moves in Kowtko, Isard and Doherty (1991) is based upon a map task (see Anderson et al., 1991, for a detailed de- scription): One person is given a map with a path marked on it and has to tell another person how to draw the path onto a similar map. Neither par- ticipant can see the other's map. The nature of the map task is such that from the conversations the speaker's intentions remain fairly obvious. Kowtko, Isard, and Do- herty (1991) report that one expert and three naive judges agree on an average of 83% of the moves classified in two map task dialogues. Six games appear in the dialogues: Instruction, Con- firmation, Question-YN, Question-W, Explana- tion, and Alignment. They are initiated by the following moves: INSTRUCT (Provides in- struction), CHECK (Elicits confirmation of known information), QUERY-YN (Asks yes-no question for unknown information), QUERY-W (Asks con- tent, wh-, question for unknown information), EX- PLAIN (Gives unelicited description), and ALIGN (Checks alignment of position in task). Six other moves provide response and addi- tional feedback: CLARIFY (Clarifies or rephrases given information), REPLY-Y (Responds affirma- tively), REPLY-N (Responds negatively), REPLY- W (Responds with requested information), AC- KNOWLEDGE (Acknowledges and requests con- tinuation), and READY (Indicates intention to be- gin a new game). Since the map task involves instructing one player on how to draw a path, the conversation naturally consists of many Instruction games. The structure of games allows for nesting of games and looping of response and feedback moves within games ~ The prototypical game consists of two or three moves: Initiation, Response, and optionally Feed- back. The large majority of games (84% from a sample of 3 dialogues, n = 65) match the simple prototype. Games that do not match the proto- type are still well-formed, having extra response- feedback loops, nested games, or extra moves. Very few games (less than 2%) break down as a result of a misunderstanding or other problem. Here is an example of a prototypical Instruc- tion game. The vertical bar indicates the bound- ary of a move: A: Right,[[ just draw round it. READY I[ INSTRUCT B: Okay. ACKNOWLEDGE 2As a comparison with Clark and Schaefer (1987) embedded games often coincide with instances of em- bedded contributions in the acceptance phase. Conversational game structure, offers a taxon- omy which specifies both the function and context of an utterance, as move z within game y. This facilitates the study of the function of intonational tune, since the tune reflects an utterance's conver- sational role. Intonation in Games Using data from map task dialogues (Anderson et at., 1091), I have been analyzing mono- and di- syllabic words which compose single moves within themselves: right, okay, yes, no, mmhmm, and nh- huh. In addition, I am categorizing the cases where these words form part of a move. They typically surface as 5 of the 12 moves in the games anal- ysis (Kowtko et at., 1991): READY, ACKNOWL- EDGE, ALIGN, REPLY-Y, and REPLY-N. The cur- rent data set consists of 68 utterances spoken by 3 of the 4 conversants in 2 dialogues. In order to compare my results with those of McLemore (1991) and Hockey (1991), I have tried to collapse moves and their contexts into the three general categories: ACKNOWLEDGE move following INSTRUCT serves to connect; READY, ACKNOWLEDGE (and other) moves which inter- rupt an INSTRUCT (i.e. 
precede a continued INSTRUCT move) continue; REPLY-Y, REPLY- N, ACKNOWLEDGE after EXPLAIN, and AC- KNOWLEDGE after a response move (specifically elicited moves) segment. The data yield the following results s: 42% of rises (5 of 11) appear as connecting moves, 30% of levels (13 of 44) as continuing moves, and 69% of falls (9 of 13) as segmenting moves. Only one category approaches a match to other published results. It is possible that my de- cisions of which moves collapse together would not be corroborated and cause some of the dis- agreement. It is also possible that dialectal vari- ation would account for some of the difference (The map task contains Scottish as opposed to American English), but it would be folly to wave such a hand of dismissal. These results reflect an intonation-based approach. Information may be lost in the process of collapsing various dis- course contexts into three intonational categories (McLemore, 1991) and then limiting discourse cat- egories to match those three existing intonational categories (Hockey, 1991). Separate discourse cat- egories, in a discourse-based approach, should fa- cilitate clearer results. When categorized according to move and dis- course context, the data begins to speak on its 3p > .20 for each result, according to the Kolmogorov-Smirnov One-sample Test, indicates sta- tistical non-significance. 284 own. Granted, the numbers for each category are currently small and not statistically reliable, but some trends are striking and suggest that more data will prove to yield interesting results. For ex- ample, of 15 REPLY-Y/N moves, 12, or 80%, are levels, the 3 others being falls in a single category, REPLY-Y after QUERY-YN. All 4 cases of REPLY- Y after ALIGN are high levels, while REPLY-Y/N after QUERY-YN are mostly low levels (6 of 8). Work is progressing on other dialogues, amass- ing enough pitch trace data to allow clear patterns to emerge for each type of move in each game con- text. The goal is, given a discourse context, to be able to predict an utterance's function or move, given the intonation, and, conversely, predict in- tonational tune, given the type of move. References Anderson, Anne H., Miles Bader, Ellen G. Bard, Elizabeth Boyle, Gwyneth Doherty, Simon Car- rod, Stephen Isard, JacqueUne Kowtko, Jan MeAllister, Jim Miller, Catherine Sotillo, Henry Thompson, and Regina Weinert (1991). The HCRC Map Task Corpus. Language and Speech, 34(4):351-366. Clark, Herbert H. and Edward F. Schaefer (1987). Collaborating on contributions to conversations. Language and Cognitive Processes, 2(1):19-41. Hirsehberg, Julia and Diane Litman (1987). Now let's talk about no~ Identifying cue phrases into- nationally. Proceedings of the ~5th annual Meeting of the Association for Computational Linguistics, Stanford, 163-171. Hockey, Beth Ann (1991). Prosody and the inter- pretation of "okay". Presented at the AAAI Fall Symposium, Monterey, CA, November. Kowtko, Jacqueline, Stephen Isard and Gwyneth Doherty (1991). Conversational games within di- alogue. Proceedings of the ESPRIT Workshop on Discourse Coherence, Edinburgh, April. To ap- pear as an HCRC Research Report, Human Com- munication Research Centre, Edinburgh, 1992. Litman, Diane and Julia Hirschberg (1990). Dis- ambiguating cue phrases in text and speech. COLING-90 Proceedings, Helsinki, 251-256. McLemore, Cynthia A (1991). The Pragmatic Interpretation of English Intonation: Sorority Speech. Ph.D. dissertation, University of Texas at Austin. Power, Richard (1974). 
A Computer Model of Conversation. Ph.D. dissertation, University of Edinburgh. Traum, David R. and James F. Allen (1991). Conversation Actions. Proceedings of the AAAI Fall Symposium, Monterey, CA, November, 114-119.
RIGHT ASSOCIATION REVISITED * Michael Niv Department of Computer and Information Science University of Pennsylvania Philadelphia, PA, USA niv@linc, cis.upenn.edu Abstract Consideration of when Right Association works and when it fails lead to a restatement of this parsing prin- ciple in terms of the notion of heaviness. A computa- tional investigation of a syntactically annotated corpus provides evidence for this proposal and suggest circum- stances when RA is likely to make correct attachment predictions. 1 Introduction Kimball (1973) proposes the parsing strategy of Right Association (RA). RA resolves modifiers attachment am- biguities by attaching at the lowest syntactically per- missible position along the right frontier. Many au- thors (among them Wilks 1985, Schubert 1986, Whit- temore et al. 1990, and Weischedel et al. 1991) in- corporate RA into their parsing systems, yet none rely on it solely, integrating it instead with disambiguation preferences derived from word/constituent/concept co- occurrence based. On its own, RA performs rather well, given its simplicity, but it is far from adequate: Whitte- more et al. evaluate RA's performance on PP attachment using a corpus derived from computer-mediated dialog. They find that RA makes correct predictions 55% of the time. Weischedel et al., using a corpus of news sto- ries, report a 75% success rate on the general case of attachment using a strategy Closest Attachment which is essentially RA. In the work cited above, RA plays a relatively minor role, as compared with co-occurrence based preferences. The status of RA is very puzzling, consider:. (1) a. John said that Bill left yesterday. b. John said that Bill will leave yesterday. "I wish to thank Bob Frank, Beth Ann Hockey, Yonng-Snk Lee, Mitch Marcus, Ellen Prince, Phil Resnik, Robert Rubinoff, Mark Steedman, and the anonymous referees for their helpful suggestions. This research has been supported by the following grants: DARPA N00014-90-J-1863, ARt DAAL03-89-C-0031, NSF IRI 90-16592, Ben Franklin 91S.30"/8C-1. 285 (2) In China, however, there isn't likely to be any silver lining because the economy remains guided primarily by the state. (from the Penn Treebank corpus of Wall Street Journal articles) On the one hand, many naive informants do not see the ambiguity of la and are often confused by the putatively semantically unambiguous lb - a strong RA effect. On the other hand (2) violates RA with impunity. What is it that makes RA operate so strongly in 1 but disappear in 2? In this paper I argue that it is an aspect of the declarative linguistic competence that is operating here, not a principle of parsing. 2 Heaviness Quirk et al. (1985) define end weight as the ten- dency to place material with more information content after material with less information content. This notion is closely related with end focus which is stated in terms of importance of the contribution of the constituent, (not merely the quantity of lexical material.) These two prin- ciples operate in an additive fashion. Quirk et al. use heaviness to account for a variety of phenomena, among them: • genitive NPs: the shock of his resignation, * his resignation's shock. • it-extraposition: It bothered me that she left quickly. ? That she left quickly bothered me. Heaviness clearly plays a role in modifier attach- ment, as shown in table 1. My claim is that what is wrong with sentences such as (1) is the violation, in the high attachment, of the principle of end weight. 
While violations of the principle of end weight in unambiguous sentences (e.g. those in Table 1) cause little grief, as they are easily accommodated by the hearer, the on-line decision process of disambiguation could well be much more sensitive to small differences in the degree of violation. In particular, it would seem that in (1)b, the heaviness-based preference for low attachment has a chance to influence the parser before the inference-based preference for high attachment.

Table 1: Illustration of heaviness and word order

    John sold it today.
    John sold the newspapers today.
    John sold his rusty socket-wrench set today.
    John sold his collection of 45RPM Elvis records today.
    John sold his collection of old newspapers from before the Civil War today.

    John sold today it.
    John sold today the newspapers.
    John sold today his rusty socket-wrench set.
    John sold today his collection of 45RPM Elvis records.
    John sold today his collection of old newspapers from before the Civil War.

The precise definition of heaviness is an open problem. It is not clear whether end weight and end focus adequately capture all of its subtlety. For the present study I approximate heaviness by easily computable means, namely the presence of a clause within a given constituent.

3 A study

The consequence of my claim is that light adverbials cannot be placed after heavy VP arguments, while heavy adverbials are not subject to such a constraint. When the speaker wishes to convey the information in (1)a, there are other word-orders available, namely,

(3) a. Yesterday John said that Bill left.
    b. John said yesterday that Bill left.

If the claim is correct, then when a short adverbial modifies a VP which contains a heavy argument, the adverbial will appear either before the VP or between the verb and the argument. Heavy adverbials should be immune from this constraint. To verify this prediction, I conducted an investigation of the Penn Treebank corpus of about 1 million words of syntactically annotated text from the Wall Street Journal. Unfortunately, the corpus does not distinguish between arguments and adjuncts - they're both annotated as daughters of VP. Since at this time I do not have a dictionary-based method for distinguishing (VP asked (S when...)) from (VP left (S when...)), my search cannot include all adverbials, only those which could never (or rarely) serve as arguments. I therefore restricted my search to subgroups of the adverbials:

1. Ss whose complementizers participate overwhelmingly in adjuncts: after although as because before besides but by despite even lest meanwhile once provided should since so though unless until upon whereas while.
2. single word adverbials: now however then already here too recently instead often later once yet previously especially again earlier soon ever first indeed sharply largely usually together quickly closely directly alone sometimes yesterday.

The particular words were chosen solely on the basis of frequency in the corpus, without 'peeking' at their word-order behavior (footnote 1). For arguments, I only considered NPs and Ss with complementizer that, and the zero complementizer. The results of this investigation appear in the following table:

    adverbial:          single word              clausal
    arg type       pre-arg   post-arg       pre-arg   post-arg
    light             760        399            13        597
    heavy             267          5             7         45
    total            1027        404            20        642

Of 1431 occurrences of single word adverbials, 404 (28.2%) appear after the argument.
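The statistical test reported just below can be reproduced directly from the single-word columns of the table above. The following is my own verification sketch, not code from the paper; it assumes nothing beyond the four counts already given.

    # Pearson chi-square for the 2x2 single-word-adverbial table above:
    #              pre-arg   post-arg
    #   light        760        399
    #   heavy        267          5
    observed = [[760, 399],
                [267, 5]]

    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand_total = sum(row_totals)

    chi_square = 0.0
    for i, row in enumerate(observed):
        for j, o in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            chi_square += (o - expected) ** 2 / expected

    print(round(chi_square, 1))   # -> 115.5, the value reported in the text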
If we consider only cases where the verb takes a heavy argument (defined as one which contains an S), of the 273 occurrences, only 5 (1.8%) appear after the argument. This interaction with heaviness of the argument is statistically significant (χ² = 115.5, p < .001). Clausal adverbials tend to be placed after the verbal argument: only 20 out of the 662 occurrences of clausal adverbials appear at a position before the argument of the verb. Even when the argument is heavy, clausal adverbials appear on the right: 45 out of a total of 52 clausal adverbials (86.5%). (2) and (4) are two examples of RA-violating sentences which I have found.

(4) Bankruptcy specialists say Mr. Kravis set a precedent for putting new money in sour LBOs recently when KKR restructured foundering Seaman Furniture, doubling KKR's equity stake.

To summarize: light adverbials tend to appear before a heavy argument and heavy adverbials tend to appear after it. The prediction is thus confirmed.

[Footnote 1: Each adverbial can appear in at least one position before the argument to the verb (sentence initial, preverb, between verb and argument) and at least one post-verbal-argument position (end of VP, end of S).]

RA is at a loss to explain this sensitivity to heaviness. But even a revision of RA, such as the one proposed by Schubert (1986), which is sensitive to the size of the modifier and of the modified constituent, would still require additional stipulation to explain the apparent conspiracy between a parsing strategy and tendencies in the generator to produce sentences with the word-order properties observed above.

4 Parsing

How can we exploit the findings above in our design of practical parsers? Clearly RA seems to work extremely well for single word adverbials, but how about clausal adverbials? To investigate this, I conducted another search of the corpus, this time considering only ambiguous attachment sites. I found all structures matching the following two low-attached schemata (footnote 2)

    low VP attached:  [vp ... [s * [vp * adv *] * ] ...]
    low S attached:   [vp ... [s * adv *] ...]

and the following two high-attached schemata

    high VP attached: [vp v * [... [s ]] adv *]
    high S attached:  [s * [... [vp ... [s ]]] adv * ]

[Footnote 2: By * I mean match 0 or more daughters. By [x ... [y ]] I mean constituent x contains constituent y as a rightmost descendant. By [x ... [y ] ... ] I mean constituent x contains constituent y as a descendant.]

The results are summarized in the following table:

    adverb-type    low-attached   high-attached   % high
    single word        1116             10          0.8%
    clausal             817            194         19.2%

As expected, with single-word adverbials, RA is almost always right, failing only 0.8% of the time. However, with clausal adverbials, RA is incorrect almost one out of five times.

5 Toward a Meaning-based Account of Heaviness

At the end of section 3 I stated that a declarative account of the ill-formedness of a heavy argument followed by a light modifier is more parsimonious than separate accounts for parsing preferences and generation preferences. I would like to suggest that it is possible to formalize the intuition of 'heaviness' in terms of an aspect of the meaning of the constituents involved, namely their givenness in the discourse. Given entities tend to require short expressions (typically pronouns) for reactivation, whereas new entities tend to be introduced with more elaborated expressions. In fact, it is possible to manipulate heaviness by changing the context.
For example, (1)b is natural in the following dialog (footnote 3), when appropriately intoned:

A: John said that Bill will leave next week, and that Mary will go on sabbatical in September.
B: Oh really? When did he announce all this?
A: He said that Bill will leave yesterday, and he told us about Mary's sabbatical this morning.

[Footnote 3: I am grateful to Ellen Prince for a discussion of this issue.]

6 Conclusion

I have argued that the apparent variability in the applicability of Right Association can be explained if we consider the heaviness of the constituents involved. I have demonstrated that in at least one written genre, light adverbials are rarely produced after heavy arguments - precisely the configuration which causes the strongest RA-type effects. This demarcates a subset of attachment ambiguities where it is quite profitable to use RA as an approximation of the human sentence processor. The work reported here considers only a subset of the attachment data in the corpus. The corpus itself represents a very narrow genre of written discourse. For the central claim to be valid, the findings must be replicated on a corpus of naturally occurring spontaneous speech. A rigorous account of heaviness is also required. These await further research.

References

[1] Kimball, John. Seven Principles of Surface Structure Parsing. Cognition 2(1). 1973.
[2] Quirk, Randolph, Sidney Greenbaum, Geoffrey Leech and Jan Svartvik. A Comprehensive Grammar of the English Language. Longman, London. 1985.
[3] Schubert, Lenhart. Are there Preference Trade-offs in Attachment Decisions? AAAI-86.
[4] Wilks, Yorick. Right Attachment and Preference Semantics. ACL-85.
[5] Weischedel, Ralph, Damaris Ayuso, R. Bobrow, Sean Boisen, Robert Ingria, and Jeff Palmucci. Partial Parsing: A Report on Work in Progress. Proceedings of the DARPA Speech and Natural Language Workshop. 1991.
[6] Whittemore, Greg, Kathleen Ferrara, and Hans Brunner. Empirical study of predictive powers of simple attachment schemes for post-modifier prepositional phrases. ACL-90.
THE REPRESENTATION OF MULTIMODAL USER INTERFACE DIALOGUES USING DISCOURSE PEGS Susann Luperfoy MITRE Corporation 7525 Colshire Blvd. W418 McLean, VA 22102 luperfoy@ starbase.mitre.org and ATR Interpreting Telephony Research Laboratories Kyoto, Japan ABSTRACT The three-tiered discourse representation defined in (Luperfoy, 1991) is applied to multimodal human- computer interface (HCI) dialogues. In the applied system the three tiers are (1) a linguistic analysis (morphological, syntactic, sentential semantic) of input and output communicative events including keyboard-entered command language atoms, NL strings, mouse clicks, output text strings, and output graphical events; (2) a discourse model representation containing one discourse object, called a peg, for each construct (each guise of an individual) under discussion; and (3) the knowledge base (KB) representation of the computer agent's 'belief' system which is used to support its interpretation procedures. I present evidence to justify the added complexity of this three-tiered system over standard two-tiered representations, based on (A) cognitive processes that must be supported for any non-idealized dialogue environment (e.g., the agents can discuss constructs not present in their current belief systems), including information decay, and the need for a distinction between understanding a discourse and believing the information content of a discourse; (B) linguistic phenomena, in particular, context-dependent NPs, which can be partially or totally anaphoric; and (C) observed requirements of three implemented HCI dialogue systems that have employed this three-tiered discourse representation. THE THREE-TIERED FRAMEWORK This paper argues for a three-tiered computational model of discourse and reports on its use in knowledge based human-computer interface (HCI) dialogue. The first tier holds a linguistic analysis of surface forms. At this level there is a unique object (called a linguistic object or LO) for each linguistic referring expression or non-linguistic communicative gesture issued by either participant in the interface dialogue. The intermediate tier is the discourse model, a tier with one unique object corresponding to each concept or guise of a concept, being discussed in the dialogue. These objects are called pegs after Landman's theoretical construct (Landman, 1986a). 1 The third tier is the knowledge base (KB) that describes the belief system of one agent in the dialogue, namely, the backend system being interfaced to. Figure 1 diagrams a partitioning of the information available to a dialogue processing agent. This partitioning gives rise to the three discourse tiers proposed, and is motivated, in part, by the distinct processes that transfer information between tiers. I-c=::~ ~ DiSCoOUrse I FIGURE 1. Partitioned Discourse Information The linguistic tier is similar to the linguistic representation of Grosz and Sidner (1985) and its LO's are like Sidner's NP bundles (Sidner, 1979), i.e., both encode the syntactic and semantic analyses of surface forms. One difference, however, is that NP bundles specify database objects directly whereas LOs are instead "anchored" to pegs in the discourse model tier and make no direct connection to entries in the static 1The discourse peg functions differently from its namesake but the term provides the suitable metaphor (also suggested by Webber): an empty hook on which to hang properties of the real object. 
For more background on the Data Semantics framework itself see (Landman 1986b) and (Veltman, 1981). 22 knowledge representation. LOs are also like Discourse Referents (Karttunen, 1968), Discourse Entities ((Webber, 1978), (Dahl and Ball, 1990), (Ayuso, 1989), and others), File Cards (Heim, 1982), and Discourse Markers (Kamp, 1981) in at least two ways. First, they arise from a meaning representation of the surface linguistic form based on a set of generation rules which consider language-specific features, and facts about the logical form representation: quantifier scope assignments, syntactic number and gender markings, distributive versus collective reading information, ordering of modifiers, etc. Janus (Ayuso, 1989) allows for DE's introduced into the discourse context through a non- linguistic (the haptic) channel. But in Janus, a mouse click on a screen icon is assigned honorary linguistic status via the logical form representation of a definite NP, and that introduces a new DE into the context. WML, the intensional language used, also includes time and possible world parameters to situate DE's. These innovations are all important attributes of objects at what I have called the linguistic tier. Secondly, the discourse constructs listed above all correspond either directly (Discourse Referents, File Cards, Discourse Entities of Webber) or indirectly after collapsing of referential equivalence classes (Discourse Markers, DE's of Janus) with referents or surrogates in some representation of the reference world, and it is by virtue of this mapping that they either are assigned denotations or fail to refer. While I am not concerned here with referential semantics I view this linguistic tier as standing in a similar relation to the reference world of its surface forms. The pegs discourse model represents the world as the current discourse assumes it to be only, apart from how the description was formulated, apart from the true state of the reference world, and apart from how either participant believes it to be. This statement is similar to those of both Landman and Webber. The discourse model is also the locus of the objects of discourse structuring techniques, e.g., both intentional and attentional structures of Grosz and Sidner (1985) are superimposed on the discourse model tier. A peg has links to every LO that "mentions" it, the mentioning being either verbal or non-verbal and originating with either dialogue participant. Pegs, like File Cards, are created on the fly as needed in the current discourse and amount to dynamically defined guises of individuals. These guises differ from File Cards in that they do not necessarily correspond I:1 to individuals they represent, i.e., a single individual can be treated as two pegs in the discourse model, if for example the purpose is to contrast guises such as Superman and Clark Kent, without requiring that there also be two individuals in the knowledge structure. In comparing the proposed representation to those of Heim, Webber, and others it is also helpful to note a difference in emphasis. Heim's theory of definiteness defines semantic values for NPs based on their ability to add new File Cards to the discourse state, their "file change potential." Similarly, Webber's goal is to define the set of DE's justified by a segment of text. Examples of a wide range of anaphoric phenomena are used as evidence of which DEs had to have been generated for the antecedent utterance. 
Thus, the definition of Invoking Descriptions but no labels for subsequent mention of a DE or discussion of their affect on the DE. In contrast, my emphasis is in tracking these representations over the course of a long dialogue; I have nothing to contribute to the theory of how they are originally generated by the logical form representation of a sentence. I am also concerned with how the subsequent utterance is processed given a possibly flawed or incomplete representation of the prior discourse, a possibly flawed or incomplete linguistic representation of the new utterance, and/or a mismatch between KB and discourse. The purpose here is to manage communicative acts encountered in real dialogue and, in particular, HCI dialogues in which the interpreter is potentially receiving information from the other dialogue participant with the intended result of an altered belief structure. So I include no discussion of the referential value of referring expressions or discourse segments, in terms of truth conditions, possible worlds, or sets of admissible models. Neither is the aim a descriptive representation of the dialogue as a whole; rather, the purpose is to define the minimal representation of one agent's egocentric view of a dialogue needed to support appropriate behavior of that agent in real-time dialogue interaction. The remainder of this paper argues for the additional representational complexity of the separate discourse pegs tier being proposed. Evidence for this innovation is divided into three classes (A) cognitive requirements for processing dialogue, (B) linguistic phenomena involving context-dependent NPs, and (C) implementation-based arguments. EVIDENCE FOR THREE TIERS A. COGNITIVE PROCESSING CONSTRAINTS This section discusses four requirements of discourse representation based on the cognitive limitations and pressures faced by any dialogue participant. 1.Incompleteness: The information available to a dialogue agent is always incomplete; the belief system, the linguistic interpretation, the prior discourse representation are partial and potentially flawed representations of the world, the input 23 utterances, and the information content of the discourse, respectively. The distinction between discourse pegs and KB objects is important because it allows for a clear separation between what occurs in the discourse, and what is encoded as beliefs in the KB. The KB is viewed as a source of information consulted by one agent during language processing, not as the locus of referents or referent surrogates. Belief system incompleteness means it is common in dialogue to discuss ideas one is unfamiliar with or does not believe to be true, and to reason based on a partial understanding of the discourse. So it often happens that a discourse peg fails to correspond to anything familiar to the interpreting agent. Therefore, no link to the KB is required or entailed by the occurrence of a peg in the discourse model. There are two occasions where the interpreter is unable to map the discourse model to the KB, The first is where the class referenced is unfamiliar to the interpreting agent, e.g., when an unknown common noun occurs and the interpreter cannot map to any class named by that common noun, e.g., "The picara walked in." The second is where the class is understood but the particular instance being referenced cannot be identified at the time the NP occurs. 
I.e., the interpreter may either not know of any instances of the familiar class, Picaras, or it may not be able to determine which of those picara instances that it knows of is the single individual indicated by the current NP. The pegs model allows the interpreter to leave the representation in a partial state until further information arrives; an underspecified peg for the unknown class is created and, when possible, linked to the appropriate class. As the dialogue progresses subsequent utterances or inferences add properties to the peg and clarify the link to the KB which becomes gradually more precise. But that is a matter between the peg and the KB; the original LO is considered complete at NP processing time and cannot be revisited. 2. Contradiction: Direct conflicts between what an agent believes about the world (the KB) and what the agent understands of the current discourse (the discourse model) are also common. Examples include failed interpretation, misunderstanding, disagreement between two negotiating parties, a learning system being trained or corrected by the user, a tutorial system that has just recognized that the user is confused, errors, lies, and other hypothetical or counterfactual discourse situations. But it is often an important service of a user interface (UI) to identity just this sort of discrepancy between its own KB information and the user's expressed beliefs. How the 15I responds to recognized conflicts will depend on its assigned task; a tutoring system may leave its own beliefs unchanged and engage the user in an instructional dialogue whereas a knowledge 24 acquisition tool might simply correct its internal information by assimilating the user's assertion. To summarize 1 and 2, since dialogue in general involves transmission of information the interpreting agent is often unfamiliar with individuals being spoken about. In other cases, familiar individuals will receive new, unfamiliar, and/or controversial attributes over the course of the dialogue. Thirdly, on the generation side, it is clear that an agent may choose to produce NL descriptions that do not directly reflect that agent's belief system (generating simplified descriptions for a novice user, testing, game playing, etc.). In all cases, in order to distinguish what is said from what is believed, KB objects must not be created or altered as an automatic side effect of discourse processing, nor can the KB be required to be in a form that is compatible with all possible input utterances. In cases of incompleteness or contradiction the underspecified discourse peg holds a tentative set of properties that highlight salient existing properties of the KB object, and/or others that add to or override properties encoded in the KB. 3. Dynamic Guises: Landman's analysis of identity statements suggests a model (in a model-theoretic semantics) that contains pre-defined guises of individuals. In the system I propose, these guises are instead defined dynamically as needed in the discourse and updated non-monotonically. These are the pegs in the discourse model. Grosz (1977) introduced the notion of focus spaces and vistas in a semantic net representation for the similar purpose of representing the different perspectives of nodes in the semantic net that come into focus and affect the interpretation of subsequent NPs. 
What is in attentional focus in Grosz's system and in mine, are not individuals in the static belief system but selected views on those individuals and these are unpredictable, defined dynamically as the discourse progresses. I.e., it is impossible to know at KB creation time which guises of known individuals a speaker will present to the discourse. My system differs from the semantic net model in the separation it posits between static knowledge and discourse representation; focus spaces are, in effect, pulled out of the static memory and placed in the discourse model as a smactudng of pegs. This eliminates the need to ever undo individual effects of discourse processing on the KB; the entire discourse model can be studied and either cast away after the dialogue or incorporated into the KB by an independent operation we might call "belief incorporation." 4. Information Decay: In addition to monotonic information growth and non-monotonic changes to the discourse model, the agent participating in a dialogue experiences information decay over the course of the conversation. But information from the linguistic, discourse, and belief system tiers decays at different rates and in response to different cognitive forces/limitations. (1) LOs become old and vanish at an approximately linear rate as a function of time counted from the point of their introduction into the discourse history, i.e., as LOs get older, they fade from the discourse and can no longer serve as linguistic sponsors 2 for anaphors; (2) discourse pegs decay as a function of attentional focus, so that as long as an individual or concept is being attended to in the dialogue, the discourse peg will remain near the top of the focus stack and available as a potential discourse sponsor for upcoming dependent referring expressions; (3) decay of static information in the KB is analogous to more general forgetting of stored beliefs/information which occurs as a result of other cognitive processes, not as an immediate side-effect of discourse processing or the simple passing of time. kinds (signalled by a bare plural NP in English) to sponsor dependent references to indefinite instances. (Substitute "picaras" for "racoons" in Carlson's example to demonstrate the independence of this phenomenon from world knowledge about the referent of the NP.) 3 This holds for mass or count nouns and applies in either direction, i.e., the peg for a specific exemplar can sponsor mention of the generic kind. Nancy ate her oatmeal this morning because she heard that il lowers cholesterol. The two parameters, partial/total dependence and linguistic/discourse sponsoring, classify all anaphoric phenomena (independently of the three-tiered framework) and yield as one result a characterization of indefinite NPs as potentially partially anaphoric in exactly the same way that definite NPs are. B. LINGUISTIC EVIDENCE This section sketches an analysis of context- dependent NPs to help argue for the separation of linguistic and discourse tiers. (Luperfoy, 1991) defines four types of context-dependent NPs and uses the pegs discourse framework to represent them: a dependent (anaphoric) LO must be linguistically sponsored by another LO in the linguistic tier or discourse sponsored by a peg in the discourse model and these two categories are subdivided into total anaphors and partial anaphors. Total anaphors are typified by coreferential, (totally dependent), definite pronouns, such as "himself TM and "he" below, both of which are sponsored by "Karl." 
Karl saw himself in the mirror. He started to laugh. I stopped the car and when I opened the hoodI saw that a spark plug wire was missing. The distinction between discourse sponsoring and linguistic sponsoring, plus the differential information decay rates for the three tiers discussed in Section A, together predict acceptability conditions and semantic interpretation of certain context- dependent NP forms. For example, the strict locality of one-anaphoric references is predicted by two facts: (a) one-anaphors must always have a linguistic sponsor (i.e., an LO in the linguistic tier). (b) these linguistic sponsor candidates decay more rapidly than pegs in the discourse model tier. Partial anaphors depend on but do not corefer with their sponsors. Examples of partial anaphors have been discussed widely under other labels, by Karttunen, Sidner, Heim, and others, in examples like this one from (Karttunen, 1968) I stopped the car and when I opened the hoodl saw that the radiator was boiling. where knowledge about the world is required in order to make the connection between dependent and sponsor, and others like Carlson's (1977) In contrast, definite NPs can be discourse sponsored. And the sponsoring peg may have been first introduced into the discourse model by a much earlier LO mention and kept active by sustained attentional focus. Thus, discourse- versus linguistic sponsoring helps explain why definite NPs can reach back to distant segments of the discourse history while one- anaphors cannot. 4 Figure 2 illustrates the four possible discourse configurations for context-dependent NPs. The KB interface is omitted in the diagrams in order to show only the interaction between linguistic and discourse Nancy hates racoons because t.hey ate her corn last year. where associating dependent to sponsor requires no specific world knowledge, only a general discourse principle about the ability of generic references to 2Discussed in next section. 3Compare this partial anaphor to the total anaphoric reference in, Nancy hates racoons because they are not extinct. 4For a detailed description of the algorithms for identifying sponsors and assigning pegs as anchors, for all NP types see (Luperfoy 1991) and (Luperfoy and Rich, 1992). 25 tiers, and dark arrows indicate the sponsorship relation. In each case, LO-1 is non-anaphoric and mentions Peg-A, its anchor in the discourse model. For the two examples in the top row LO-2 is linguistically sponsored by LO-1. Discourse sponsorship (bottom row) means that the anaphoric LO-2 depends directly on a peg in the discourse model and does not require sponsoring by a linguistic form. The left column illustrates total dependence, LO-1 and LO-2 are co-anchored to Peg-A. Whereas, in partial anaphor cases (fight column), a new peg, Peg-B, gets introduced into the discourse model by the partially anaphoric LO-2. TOTAL ANAPHORA PARTIAL ANAPHORA Search for a button. Delete it. a button, it. Search for a button. a button, the new icon Search for all buttons. Display one. all buttons, one Search for a button. Delete the label a button the label FIGURE 2. Four Possible Discourse Configurations For Anaphoric NPs The classification of context-dependence is made explicit in the three-tiered discourse representation which also distinguishes incidental coreference from true anaphoric dependence. It supports uniform analysis of context-dependent NPs as diverse as reflexive pronouns and partially anaphoric indefinite NPs. 
The resulting relationship encodings are important for long-term tracking of the fate of discourse pegs. In File Change Semantics this would amount to recording the relation that justifies accommodation of the new File Card as a permanent fact about the discourse. Furthermore, relationships between objects at different levels inform each other and allow application of standard constraints. The three tiers allow you to uphold linguistic constraints on coreference (e.g., syntactic number and gender agreement) at the LO level but mark them as overridden by discourse or pragmatic constraints at the discourse model level., i.e. apparent violations of constraints are explained as transfer of control to another tier where those constraints have no jurisdiction. In a two-tiered model coreferential LOs must be equated (or collapsed into one) or else they are distinct. Here, the discourse tier is not simply a richer analysis of linguistic tier information nor a conflation of equivalence classes of LOs partitioned by referential identity. C. EVIDENCE BASED ON AN IMPLEMENTED SYSTEM The discourse pegs approach has been implemented as the discourse component of the Human Interface Tool Suite (HITS) project (Hollan, et al. 1988) of the MCC Human Interface Lab and applied to three user interface (UI) designs: a knowledge editor for the Cyc KB (Guha and Lenat, 1990), an icon editor for designing display panels for photocopy machines, and an information retrieval (IR) tool for preparing multi- media presentations. All three UIs are knowledge based with Cyc as their supporting KB. An input utterance is normally a command language operator followed by its arguments. And an argument can be formulated as an NL string representation of an NP, or as a mouse click on presented screen objects that stand for desired arguments. Output utterances can be listed names of Cyc units retrieved from the knowledge base in response to a search query, self- narration statements simultaneous with changes to the screen display, and repair dialogues initiated by the NL interpretation system. Input and output communicative events of any modality are captured and represented as pegs in the discourse model and LOs in the linguistic history so that either dialogue participant can make anaphoric reference to pegs introduced by the other, while the source agent of each assertion is retained on the associated LO. The HITS UIs endeavor to use NL only when the added expressive power is called for and allow input mouse clicks and output graphic gestures for occasions when these less costly modalities are sufficient. The respective strengths of the various UI modalities are reviewed in (P. Cohen et al., 1989) which reports on a similar effort to construct UIs that make maximal benefit of NL by using it in conjunction with other modalities. Two other systems which combine NL and mouse gestures, XTRA (Wahlster, 1989) and CUBRICON (Neal, et al., 1989), differ from the current system in two ways. First, they take on the challenge of ambiguous mouse clicks, their primary goal being to use the strengths of NL (text and speech) to disambiguate these deictic references. In the HITS system described here only presented icons can be clicked on and all uninterpretable mouse input is ignored. A second, related difference is the assumption by CUBRICON and XTRA of a closed 26 world defined by the knowledge base representation of the current screen state. 
This makes it a reasonable strategy to attempt to coerce any uninterpretable mouse gesture into its nearest approximation from the finite set of target icons. In rejecting the closed world assumption I give up the constraining power it offers, in exchange for the ability to tolerate a partially specified discourse representation that is not fully aligned with the KB. In general, NL systems assume a closed world, in part because the task is often information retrieval or because in order for NL input to be of use it must resolve to one of a finite set of objects that can be acted upon. Because the HITS systems intended to generate and receive new information from the user, it is not possible to follow the approach taken in Janus for example, and resolve the NP "a button" to a sole instance of the class #%Buttons in the KB. Ayuso notes that this does not • reflect the semantics of indefinite NPs but it is a shortcut that makes sense given the UI task undertaken. In human-human dialogue many extraneous behaviors have no intended communicative value (scratching one's ear, picking up a glass, etc.). Similarly, many UI events detectable by the dialogue system are not intended by either agent as communicative and should not be included in the discourse representation, e.g., the user moving the mouse cursor across the screen, or the backend system updating a variable. In the implemented system NL and non-NL knowledge sources exchange information via the HITS blackboard (R. Cohen et al., 1991) and when a knowledge source communicates with the user a statement is put on the blackboard. Only those statements are captured from the blackboard and recorded in the dialogue. In this way, all non- communicative events are ignored by the dialogue manager. Many of the interesting properties of this system arise from the fact that it is a knowledge-based system for editing the same KB it is based on. The three- tiered representation suits the needs of such a system. The HITS knowledge editor is itself represented in the KB and the UI can make reference to itself and its components, e.g., #%Inspector3 is the KB unit for a pane in the window display and can be referred to in the UI dialogue. Secondly, ambiguous reference to a KB unit versus the object in the real world is possible. For example, the unit #%Joseph and the person Joseph are both reasonable referents of an NP: e.g., "When was he born?" requests the value in the #%birthdate slot of the KB unit #%Joseph, whereas "When was it created?" would access a bookkeeping slot in that same unit. Finally, the need to refer to units not yet created or those already deleted would occur in requests such as, "I didn't mean to delete them" which require that a peg persist in focus in the 27 discourse model independent of the status of the corresponding KB unit. These example queries are not part of the implementation but do exemplify reference problems that motivate use of the three- tiered discourse representation for such systems. The dialogue history is the sequences of input and output utterances in the linguistic tier and is structured according to (Clark and Shaeffer 1987) as a list of contributions each of which comprises a presentation and an acceptance. This underlying structure can be displayed to the user on demand. The following example dialogue shows a question-answer sequence in which queries are command language atom followed by NL string or mouse click. 
user:   SEARCH FOR a Lisp programmer who speaks French
system: #%Holm, #%Ebihara, #%Jones, #%Baker.
user:   FOLLOWUP one who speaks Japanese
system: #%Ebihara
user:   FOLLOWUP her creator
system: #%Holm
user:   INSPECT it
system: #%Holm displayed in #%Inspector3

Here, output utterances are not true generated English but rather canned text string templates whose blanks are filled in with pointers to KB units. The whole output utterance gets captured from the HITS blackboard and placed in the discourse history. The objects filling template slots generate LOs and discourse pegs which are then used by discourse updating algorithms to modify the focus stack. For example,

    output-template: #%Holm displayed in #%Inspector3.

causes the introduction of LOs and pegs for #%Holm and #%Inspector3. Those objects generated as system output can now sponsor anaphoric reference by the user. A collection of discourse knowledge sources update data structures and help interpret context-dependent utterances. In this particular application of the three-tiered representation, context-dependence is exclusively a fact about the arguments to commands since command names are never context-sensitive. Input NPs are first processed by morphological, syntactic, and semantic knowledge sources, the result being a 'context-ignorant' (sentential) semantic analysis with relative scope assignments to quantifiers in NPs such as "Every Lisp programmer who owns a dog." This analysis would in principle use the DE generation rules of Webber and Ayuso for introducing its LOs. Discourse knowledge sources use the stored discourse representation to interpret context-dependent LOs, including definite pronouns, contrastive one-anaphors (footnote 5), reference with indexical pronouns (e.g. you, my, I, mouse-clicks on the desktop icons), and totally anaphoric definite NPs (footnote 6). The discourse module augments the logical form output of semantic processing and passes the result to the pragmatics processor whose task is to translate the logical form interpretation into a command in the language of the backend system, in this case CycL, the language of the Cyc knowledge base system. Productive dialogue includes subdialogues for repairs, requests for confirmations, and requests for clarification (Oviatt et al., 1990). The implemented multimodal discourse manager detects one form of interpretation failure, namely, when a sponsor cannot be identified for an input pronoun. The discourse system initiates its own clarification subdialogue and asks the user to select from a set of possible sponsors or to issue a new NP description, as in the example:

user:   EDIT it.
system: The meaning of "it" is unclear. Do you mean one of the following? <#%Ebihara> <#%Inspector3>
user:   (mouse clicks on #%Inspector3)
system: #%Inspector3 displayed in #%Inspector3

The user could instead type "yes" followed by a mouse click at the system's further prompting, or "no", in which case the system prompts for an alternative descriptive NP which receives from-scratch NL processing. During the subdialogue, pegs for the actual LO <LO-it> (the topic of the subdialogue) and for the two screen icons for #%Ebihara and #%Inspector3 are in focus in the discourse model. Figure 3 illustrates the arrangement of information structures in one multimodal HCI dialogue setting (footnote 7). In this example, the user requests creation of a new button. Peg-A represents that hypothetical object.
The system responds by (1) creating Button-44, (2) displaying it on the screen, and (3) generating a self- narration statement "Button-44 created." After the non-verbal event a followup deictic pronoun or mouse click, e.g., "Destroy that (button)" or "Destroy <mouse-click on Button-44>," could access the peg directly, but a pronominal reference, e.g., "Destroy it" would require linguistic sponsoring by the LO from 5Luperfoy 1989 defines contrastive one-anaphora as one of three semantic functions of one-anaphora. 6Each anaphoric LO triggers a specialized handier to search for candidate sponsors (Rich and Luperfoy, 1988). 7Exarnples are representative of those of the actual system though simplified for exposition. I KB I#%BUTTONS I 3 ' Tier / r_..~,._~,~_~ J ~ Backend / I ~u,~on-4, ~ ( System "~ JTier ~ ..... = E~--E~EI t /user: CREATE a l.~utton. DESTROY <MOUS~E'CLICK> / L (command) (NL) (command) (mouse gesture) J FIGURE 3. Three Tiers Applied to a Display Panel Design Tool the system's previous output statement. Because the system responded with both a graphical result and simultaneous self-narration statement in this example, either dependent reference type is possible. The knowledge based graphical knowledge source creates the KB unit #%Button44 as an instance of #%Buttons, but in this 15I the user is unaware of the underlying KB and so cannot make or see references to KB units directly. Note that Pegs A and B cannot be merged in the discourse model. The followup examples above only refer to that new Button-44 that was created. Alternatively (in some other UI) the user might have made total- and partial anaphoric re-mention of Peg-A by saying "Create a button. And make it a round 0ng." The relationship between the two pegs is not identity. However this is not just a fact about knowledge acquisition interfaces, since the IR system might have allowed similar elaborated queries, "Search for a button, and make sure it'.__~s a round one. ''8 The relationship between Pegs A and B arises from their being objects in a question-response pair in the structured dialogue history. Finally, if the system is unable to map the word, say it were "knob," to any KB class then that constitutes a missing lexical item. Peg-A still gets created but it is not hooked up to #%Buttons (yet). In response to a 'floating' peg a UI system could choose to engage the user in a lexical acquisition dialogue, leave Peg-A underspecified until later (especially appropriate for text understanding applications), or associate it with the most specific possible node 8Analogous to the issue in Karttunen's John wants to catch a fish and eat it for supper. 28 temporarily (e.g., #%Icons or #%PhysicalObjects). The eventual response may be to acquire a new class, #%Knobs, as a subclass of icons, or acquire a new lexical mapping from "knob" to the class #%Buttons. The implemented systems which test the discourse representation were built primarily to demonstrate other things, i.e., to show the value of combining independent knowledge sources via a centralized blackboard mechanism and to explore options for combining NL with other UI modalities. Consequently, the NL systems were exercised on only a subset of their capabilities, namely, NP arguments to commands, which could be interpreted by most NLU systems. The dialogue situation itself is what argues for the separation of tiers. CONCLUSION The three-tiered discourse representation was used to model dialogue interaction from one agent's point of view. 
The discourse pegs level is independent of both the surface forms that occur and the immediate condition of the supporting belief system. In the implemented UI systems the discourse model provided a necessary buffer between the Cyc KB undergoing revision and the ongoing dialogue. However, most of the relevant considerations apply to other HCI dialogues, to human-human dialogues, and to NL discourse processing in general. I summarize the advantages of the pegs model under the original three headings and close with suggestions for further work. (A ) Cognitive considerations: The belief system (KB) can serve dialogue processes as a source of information about the reference world without being itself modified as a necessary side effect of discourse interpretation. This means that understanding is not equated with believing, i.e., mismatch between pegs and KB objects is tolerated. Separate processes are allowed to update the KB in the background during discourse processing as the represented world changes and afterward, 'belief acquisition' can take care of assimilating pegs into the KB where appropriate. The separation of tiers allows for differential rates of information decay. The linguistic tier fades from availability rapidly and as a function of time, discourse tier decay is conditioned by attentional focus, and the KB represents a static belief structure in which forgetting, if represented at all, is not affected by discourse processing. Interpretation can be accomplished incrementally. The meaning of an NP is not defined as a KB object it corresponds to but as the peg that it mentions in the discourse model, and that peg is always a partial representation of the speaker's intended referent. How partial it is can vary over time and it can be of use for 29 sponsoring dependent NPs, generating questions, etc., even in its partial state. Indeed, feedback from such use is what helps to further specify the peg. (B ) Linguistic phenomena: In English, all NPs have the potential of being context-dependent. The separation of tiers allows for the distinction between true anaphoric dependence and incidental coreference, encoded as the co-anchoring of multiple LOs to a single peg without sponsorship. Partial and total anaphors are explicitly represented, with linguistic sponsoring distinguished from discourse sponsoring, and these relationships are stored as annotated links in the permanent discourse representation so that internal NL and non-NL procedures may query the discourse structure for information on coreference, KB property values, justifications for later links, etc. The distinction between discourse and linguistic sponsoring allows language-specific syntactic and semantic constraints to be upheld at the LO level and overridden by pragmatic and discourse considerations at the discourse pegs level, thereby providing a mechanism which addresses well-known violations of linguistic constraints on coreference without relaxing the constraints themselves. Input and output are distinguished at the linguistic tier but merged at the discourse model tier. The user can make anaphoric reference through any channel to pegs introduced by the backend system through any channel. Yet it remains part of the discourse history record in the linguistic tier, who made which assertions about which pegs. 
In the HCI dialogue environment this means that NL and non-NL modalities are equally acceptable as surface forms for input and output utterances, i.e., voice input could be added without extension to the current system as long as the speech recognizer output forms that could be used to generate LOs. ( C) Evidence from a trial implementation: In knowledge-based UIs, the strict separation of tiers means that the KB can be incomplete or incorrect throughout the discourse, it can remain unaffected by discourse processing, and it can be updated by other knowledge acquisition procedures independently of simultaneous discourse processing. Nevertheless, it is possible and may be computationally efficient to implement the discourse model as a specialized, non-static (and potentially redundant) region of the KB so that KB reasoning mechanisms can be applied to the hypothetical state of affairs depicted by pegs in the discourse model. The guise of an individual has just those properties assumed by the current discourse. Using pegs as dynamically defined guises in effect suppresses non-salient properties of the accessed KB unit. Thus Grosz's requirement that the discourse representation encode relations in focus as well as entities in focus is supported at the pegs level. Moreover, the three-tiered design can represent conflict between interpreted discourse information and the agent's static beliefs because KB values can be overridden in the discourse by ascription of contrary properties to corresponding pegs. A related benefit is that the external dialogue participant is allowed to introduce new pegs and new information into the discourse and this does not require creation of a new KB object during discourse interpretation. Because pegs are used to accumulate tentative properties on (actual or hypothetical) individuals without editing the KB either permanently or only for the duration of the discourse, belief acquisition can be postponed until a sufficiently complete understanding has been achieved, so the discourse model can serve as an agenda for later KB updating. Meanwhile, partial and incorrect discourse representations are useful and non-monotonic repair operations make it easy to correct interpretation errors by changing links between LO and peg or between peg and KB unit without disturbing other links. Some pegs are not associated with the linguistic tier at all. Graphical events in the physical environment that make an object salient can inject a peg directly into the discourse model. However, only pegs introduced via the linguistic channel can sponsor linguistic anaphora, e.g., "What is it" requires the presence of an LO, but "What is that" can be sponsored directly by the peg for an icon that just appeared on the screen. Further Research Dependents can sponsor other dependents, and in general, there is complex interaction between sequences of NPs in a discourse. For example, in the sentence Delete the buttons if one of them is missing its label. its label is partially dependent on one, and/t is totally dependent on 9n¢ which is partially dependent on them which is totally dependent on the buttons which is presumably a total anaphoric reference to a discourse peg for some set of buttons currently in focus. The present algorithm attempts pseudo-parallel processing of LOs, taking repeated passes through the new utterance, left to right by NP type, (proper nouns, definite NPs,..,reflexives). 
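As a rough illustration of the pass-by-NP-type scheme just described, the sketch below makes one pass per NP type over the LOs of a new utterance, left to right, handing each unresolved LO to a type-specific handler that returns its anchoring peg. The class names, fields, pass ordering, and handler interface are illustrative assumptions of mine, not the HITS implementation.

    # Schematic three-tier objects and the multi-pass resolution loop.
    # All names and fields here are illustrative placeholders, not the HITS code.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List, Optional

    @dataclass
    class Peg:                               # discourse-model object: one guise under discussion
        properties: dict = field(default_factory=dict)
        kb_unit: Optional[str] = None        # link into the KB, possibly still unresolved

    @dataclass
    class LO:                                # linguistic-tier object for one referring expression
        surface: str
        np_type: str                         # e.g. "proper", "definite_np", "pronoun", "reflexive"
        anchor: Optional[Peg] = None         # the peg this LO mentions, once resolved

    # One pass per NP type, in a fixed order ending with reflexives (cf. the text above).
    PASS_ORDER = ["proper", "definite_np", "one_anaphor", "pronoun", "reflexive"]

    Handler = Callable[[LO, List[Peg]], Peg]

    def resolve_utterance(los: List[LO], handlers: Dict[str, Handler],
                          focus_stack: List[Peg]) -> List[LO]:
        """handlers[np_type](lo, focus_stack) returns an existing peg for a total
        anaphor, or creates and returns a fresh peg otherwise."""
        for np_type in PASS_ORDER:           # repeated passes, one NP type each
            for lo in los:                   # left to right within a pass
                if lo.np_type == np_type and lo.anchor is None:
                    lo.anchor = handlers[np_type](lo, focus_stack)
        return los

The exception for one-anaphors modified by partitive PPs, noted next, could be handled by deferring those LOs to a pass after the one that resolves the PP object.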
One-anaphors modified by partitive PPs are exceptional in that they are processed after the pronoun or definite NP (the object of the preposition) to their right. Further work is needed to describe the ways that various NP types interact as this was a technique for coping with the absence of a theory of the possible relationships between sequences of partial and total anaphoric NPs. LOs for events are created by the semantic processing module and so sequences such as: You deleted that unit. I didn't want to do that. could in theory be handled analogously with other partial and total anaphors. However, they are not of use in the current application UIs and so their theory and implementation have remained undeveloped here. Ambiguous mouse clicks of the sort explored in XTRA and CUBRICON plus the ability of the user to introduce new pegs for regions of the screen, or for events of moving a pane or icon across the screen, or encircling a set of existing icons to place their pegs in attentional focus should all be attempted using the pegs discourse model as a source of target interpretations of mouse clicks and as a place to encode novel, user-defined screen objects. Finally, with this or other representations of dialogue, a variety of UI metaphors should be explored. The UI can be viewed as a single autonomous agent or as merely the clearing house for communication between the user and a collection of agents, the operating system, the graphical interface, the NL system, or any of the knowledge sources, such as those on the HITS blackboard, which could conceivably want to engage the user in a dialogue. The three-tiered discourse design is also used in the knowledge based NL system at MCC (Barnett, et al., 1990), and is being explored as one descriptive device for dialogue in voice-to-voice machine • translation at ATR. ACKNOWLEDGEMENTS This system was designed and developed in cooperation with Kent Wittenburg, Richard Cohen, Paul Martin, Elaine Rich, Inderjeet Mani, and other former members of the MCC Human Interface Lab. I would also like to thank members of the ATR Interpreting Telephony Research Laboratories and anonymous reviewers for valuable comments on an earlier draft of this paper. REFERENCES Ayuso, Damaris (1989) Discourse Entities in Janus. Proceedings of the 27th Annual Meeting of the ACL. pp.243-250. Barnett, James, Kevin Knight, Inderjeet Mani, and Elaine Rich (1990) A Knowledge-Based Natural 80 Language Processing System. Communications of the ACM. Carlson, Gregory (1977). A Unified Analysis of the English Bare Plural. Linguistics and Philosophy, 1,413-457. Clark, Herbert and E. Schaefer. (1987). Collaborating on Contributions to Conversations. Language and Cognitive Processes, pp. 19-41. Cohen, Richard, Timothy McCandless, and Elaine Rich, A Problem Solving Approach to Human- Computer Interface management, MCC Tech Report ACT-HI-306-89, Fall 1989. Cohen, Philip, Mary Dalrymple, Douglas B. Moran, Fernando C.N. Pereira, Joseph W. Sullivan, Robert A. Gargan, Jon L. Schlossberg and Sherman W. Tyler. (1989) Synergistic Use of Direct Manipulation and Natural Language. In Proceedings of CHI, pp. 227-233. Dahl, Deborah and Catherine N. Ball. (1990). Reference Resolution in PUNDIT (Tech. Report). UNISYS. Grosz, Barbara (1977). The Representation and Use of Focus in a System for Understanding Dialogs. In Proceedings of IJCAI 5. Grosz, Barbara and Candace Sidner (1985) The Structures of Discourse Structure (Tech. Report). SRI Intemational Guha, R. V. and Douglas Lenat. (1990). 
Cyc: A Mid-Term Report. A/Magazine Heim, Irena (1982) The Semantics of Definite and Indefinite Noun Phrases. U of Massachusetts, PhD Thesis. Hollan, James, Elaine Rich, William Hill, David Wroblewski, Wayne Wilner, Kent Wittenburg, Jonathan Grudin, and members of the Human Interface Laboratory. (1988). An Introduction to HITS: Human Interface Tool Suite (Tech Report). MCC Karttunen, Lauri (1968) What Makes Definite Noun Phrases Definite? Technical Report, Rand Corp. Karttunen, Lauri (1976) Discourse Referents. In McCawley, J. (ed.), Syntax and Semantics. Academic Press, New York. Landman, F. (1986) Pegs and Alecs. Linguistics and Philosophy, pp. 97-155. Landman, F. (1986) Data Semantics for Attitude Reports. Linguistics and Philosophy, pp. 157- 183. Luperfoy, Susann (1989) The Semantics of Plural Indefinite Anaphors in English. Texas Linguistic Forum. pp. 91-136. Luperfoy, Susann (1991) Discourse Pegs: A Computational Analysis of Context-Dependent Referring Expressions. Doctoral dissertation, Department of Linguistics, The University of Texas. Luperfoy, Susann and Elaine Rich (1992) A Computational Model for the Resolution of Context-Dependent References. (in submission) Neal, Jeanette, Zuzana Dobes, Keith E. Bettinger, and Jong S. Byoun (1990) Multi-Modal References in Human-Computer Dialogue. Proceedings of AAAI. pp 819-823 Oviatt, Sharon L., Philip R. Cohen and Ann Podlozny (1990) Spoken Language in Interpreted Telephone Dialogues. SRI International Technical Note 496. Rich, Elaine A. and Susann Luperfoy (1988) An Architecture for Anaphora Resolution, Proceedings of Applied ACL. Sidner, Candace L. (1979) Towards a Computational Theory of Definite Anaphora Comprehension in Discourse. Doctoral dissertation, Electrical Engineering and Computer Science, Massachusetts Institute of Technology. Veltman, F. (1981) Data Semantics In Groenendijk, J. A. G., T. M. V. Janssen and M. B. J. Stokhof (eds.) Formal Methods in the Study of Language Part 2, Amsterdam: Mathematisch Centrum. Wahlster, Wolfgang. (1989) User and Discourse models for multimodal Communication. In J.W. Sullivan and S.W. Tyler, eds., Architectures for Intelligent Interfaces: Elements and Prototypes, Addison-Wesley, Palo Alto, CA. Webber, Bonnie L. (1978) A Formal Approach to Discourse Anaphora. Doctoral dissertation, Division of Applied Mathematics, Harvard University. 31
AN LR CATEGORY-NEUTRAL PARSER WITH LEFT CORNER PREDICTION Paola Merlo University of Maryland/Universit~ de Gen~ve Fscult~ des Lettres CH-1211 Gen~ve 4 merlo@divsun.,nige.ch Abstract In this paper we present a new parsing model of linguistic and computational interest. Linguisti- cally, the relation between the paxsez and the the- ory of grammar adopted (Government and Bind- ing (GB) theory as presented in Chomsky(1981, 1986a,b) is clearly specified. Computationally, this model adopts a mixed parsing procedure, by using left corner prediction in a modified LR parser. ON LINGUISTIC THEORY For a parser to be linguistically motivated, it must be transparent to a linguistic theory, under some precise notion of transparency (see Abney 1987)~ GB theory is a modular theory of abstract prin- ciples. A parser which encodes a modular theory of grammax must fulfill apparently contradictory demands: for the parser to be explanatory it must maintain the modularity of the theory, while for the paxser to be efficient, modularization must be minimized so that all potentially necessary infor- mation is available at all times, x We explore a possible solution to this contradiction. We observe that linguistic information can be classified into 5 different classes, as shown in (1), on the basis of their informational content. These we will ca]] IC Classes. (1) a. Configurations: sisterhood, c-command, m-command, :t:maximal projection ... b. Lexical features: ~N, ±V, ±Funct, ±c-selected, :t:Strong Agr ... c. Syntactic features: ±Case, ~8, ±7, ~baxrier. d. Locality information: minimality, binding, antecedent government. e. Referential information: +D-linked, ±anaphor, ±pronominal. IOn efficiency of GB-based systems tad(1990), Kashkett(1991). see RJs- 288 This classification can be used to specify pre- cisely the amount of modularity in the parser. Berwick(1982:400ff) shows that a modulax system is efficient only if modules that depend on each other axe compiled, while independent modules axe not. We take the notion of dependent and independent to correspond to IC Classes, in that primitives that belong to the same IC Class axe dependent on each other, while primitives that be- long to different IC Classes axe independent from each other. We impose a modularity requirement that makes precise predictions for the design of the parser. Modularity Requirement (MR) Only primi- tives that belong to the same IC Class can be compiled in the parser. RECOVERING PHRASE STRUCTURE According to the MR, notions such as headedness, directionality, sisterhood, and maximal projection can be compiled and stored in a data structure, be- cause these notions belong to the same IC Class, configurations. These features are compiled into context-free rules in our parser. These basic X rules axe augmented by A rules licensed by the part of Trace theory that deals with configura- tions. The crucial feature of this grammar is that nontermina]s specify only the X projection level, and not the category. The full context-free gram- max is shown in Figure 1. The recovery of phrase structure is a crucial component of a parser, as it builds the skeleton which is needed for feature annotation. It must be efficient and it must fail as soon as an error is encountered, in order to limit backtracking. An LR(k) parser (Knuth 1965) has these properties, since it is deterministic on unambiguous input, and it has been proved to recognize only valid prefixes. 
In our parser, we compile the grammar shown above into an LALR(1) (Aho and Ullman 1972) parse table. The table has been modified in order to have more than one action for each table entry.[2] Three stacks are used: a stack for the states traversed so far; a stack for the semantic attributes associated with each of the nodes; and a tree stack of partial trees. The LR algorithm is encoded in a parse predicate, which establishes a relation between two sets of 5-tuples, as shown in (2).[3]

(2) Ti x Si x Ai x Ci x PTi --> Tj x Sj x Aj x Cj x PTj

Figure 1: The category-neutral grammar.
    X'' -> Y'' X'     X'' -> X' Y''     (specification)
    X'  -> X Y''      X'  -> Y'' X      (complementation)
    X'  -> Y'' X'     X'  -> X' Y''     (modification)
    X'' -> Y'' X''    X'' -> X'' Y''    (adjunction)
    X   -> empty                        (empty heads)
    X'' -> empty                        (empty maximal projections)

Our parser is more elaborate and less restrictive than a standard LR parser, because it imposes conditions on the attributes of the states and it is nondeterministic. In order to reduce the amount of nondeterminism, some predictive power has been introduced. The cooccurrence restrictions between categories and the subcategorization information of verbs are compiled in a table, which we call the Left Corner Prediction Table (LC Table). By looking at the current token, at its category label, and at its subcategorization frame, the number of choices of possible next states can be restricted. For instance, if the current token is a verb, and the LR table allows the parser either to project one level up to V' or to create an empty object NP, then, on consulting the subcategorization information, the parser can eliminate the second option as incorrect if the verb is intransitive.

[2] This modification is necessary because the grammar compiled into the LR table is not an LR grammar.
[3] In (2), Ti is an element of the set of input tokens, Si is an element of the set of states in the LR table, Ai is an element of the set of attributes associated with each state in the table, Ci is an element of the set of chains, i.e. displaced elements, and PTi is an element of the set of tokens predicted by the left corner table (see below).

RESULTS AND COMMENTS

The design presented so far embodies the MR, since it compiles only dependent features in two tables off-line. Compared to the use of partially or fully instantiated context-free grammars, this organization of the parsing algorithm has been found to be better on several grounds. Consider again the X grammar that we use in the parser, shown in Figure 1. One of the crucial features of this grammar is that the nonterminals are specified only for level and headedness. This version of the grammar is a recent result. In previous implementations of the parser, the projections of the head in a rule were instantiated: for instance, NP -> YP N'. Empirically, we find that on compiling the partially instantiated grammar the number of rules increases proportionately to the number of categories, and so does the number of conflicts in the table. Figure 2 shows the relative sizes of the LALR(1) tables and the number of conflicts.

[Figure 2: table comparing the instantiated grammar and the category-neutral grammar on number of rules, number of LALR(1) states, shift/reduce conflicts, and reduce/reduce conflicts.]

Moreover, on closer inspection of the entries in the table, categories that belong to the same level of projection show the same reduce/reduce conflicts. This means that introducing unrestricted categorial information increases the size of the table without decreasing the number of conflicts in each entry, i.e. without reducing the nondeterminism in the table.
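The pruning role of the Left Corner Prediction Table can be illustrated with a small sketch. The table entries and the names of the parser actions below are invented for the example; the real LC table is compiled from the cooccurrence and subcategorization information described above.

    # Hypothetical left-corner filter over nondeterministic LR actions.
    # LC_TABLE maps (category, subcategorization frame) of the current token
    # to the parser moves it licenses; the entries are invented examples.
    LC_TABLE = {
        ("V", "intransitive"): {"project_to_V1"},
        ("V", "transitive"):   {"project_to_V1", "insert_empty_object_NP"},
    }

    def filter_actions(lr_actions, token_category, subcat_frame):
        """Keep only the LR actions licensed by the left-corner prediction."""
        licensed = LC_TABLE.get((token_category, subcat_frame))
        if licensed is None:              # no prediction available: keep all actions
            return list(lr_actions)
        return [a for a in lr_actions if a in licensed]

    # The example from the text: for an intransitive verb, the conflicting entry
    # {project one level up, create an empty object NP} collapses to one move.
    print(filter_actions(["project_to_V1", "insert_empty_object_NP"],
                         "V", "intransitive"))      # -> ['project_to_V1']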
These findings confirm that categorial information can be factored out of the compiled table, as predicted by the MR. The information about cooccurrence restrictions, category and subcategorization frame is compiled in the Left Corner (LC) table, as described above. Using two compiled tables that interact on-line is better than compiling all the information into a fully instantiated, standard context-free grammar for several reasons.[4] Computationally, it is more efficient.[5] Practically, manipulating a small, highly abstract grammar is much easier: it is easy to maintain and to embed in a full-fledged parsing system. Linguistically, a fully instantiated parser would not be transparent to the theory and it would be language dependent. Finally, it could not model some experimental psycholinguistic evidence, which we present below.

[4] Fully instantiated grammars have been used, among others, by Tomita (1985) in an LR parser, and by Dorr (1990) and Fong (1991) in GB-based parsers.
[5] It has been argued elsewhere that for context-free parsing algorithms the size of the grammar (which is a constant factor) can easily become the predominant factor for all useful inputs (see Berwick and Weinberg 1982). Work on the compilation of parsers that use GPSG points in the same direction. The separation of structural information from cooccurrence restrictions is advocated in Kilbury (1986); both Shieber (1986) and Phillips (1987) argue that the combinatorial explosion (Barton 1985) of a fully expanded ID/LP formalism can be avoided by using feature variables in the compiled grammar. See also Thompson 1982.

PSYCHOLINGUISTIC SUPPORT

A reading task is presented in Frazier and Rayner 1987 in which eye movements are monitored: they find that in locally ambiguous contexts, the ambiguous region takes less time than an unambiguous counterpart, while a slow-down in processing time is registered in the disambiguating region. This suggests that the selection of major categorial information in lexically ambiguous sentences is delayed.[6] This delay means that the parser must be able to operate in the absence of categorial information, making use of a set of category-neutral phrase structure rules. This separation of item-dependent and item-independent information is encoded in the grammar used in our parser. A parser that uses instantiated categories would have to store categorial cooccurrence restrictions in a different data structure, to be consulted in case of lexically ambiguous inputs. Such a design would be redundant, because categorial information would be encoded twice.

CONCLUSION

The module described in this paper is implemented and embedded in a parser for English of limited coverage, but it has some shortcomings, which are currently under investigation. Refinements are needed to compile the LC table automatically and to define IC Classes predictively instead of by exhaustive listing. Finally, a formal proof is needed to show that our definition of independent and dependent is always going to increase efficiency.

ACKNOWLEDGEMENTS

This work has benefited from suggestions by Bonnie Dorr, Paul Gorrell, Eric Wehrli and Amy Weinberg. The author is supported by a Fellowship from the Swiss-Italian Foundation.
[6] For instance, in the sentences in (3) (from Frazier and Rayner 1987), the ambiguous target item, shown in capitals in (3)a, takes less time than the unambiguous control in (3)b, while there is a slow-down in the disambiguating material (in italics).
(3) a. The warehouse FIRES numerous employees each year.
    b. That warehouse fires numerous employees each year.

REFERENCES

Abney, Steven 1987. "GB Parsing and Psychological Reality", in MIT Parsing Volume, Cognitive Science Center.
Aho, A. V. and J. D. Ullman 1972. The Theory of Parsing, Translation and Compiling, Prentice-Hall, Englewood Cliffs, NJ.
Barton, Edward 1985. "The Computational Difficulty of ID/LP Parsing", in Proc. of the ACL.
Berwick, Robert 1982. Locality Principles and the Acquisition of Syntactic Knowledge, Ph.D. Diss., MIT.
Berwick, Robert and Amy Weinberg 1982. "Parsing Efficiency, Computational Complexity and the Evaluation of Grammatical Theories", Linguistic Inquiry, 13:165-191.
Chomsky, Noam 1981. Lectures on Government and Binding, Foris, Dordrecht.
Chomsky, Noam 1986a. Knowledge of Language: Its Nature, Origin and Use, Praeger, New York.
Chomsky, Noam 1986b. Barriers, MIT Press, Cambridge, MA.
Dorr, Bonnie J. 1990. Lexical Conceptual Structure and Machine Translation, Ph.D. Diss., MIT.
Fong, Sandiway 1991. Computational Properties of Principle-based Grammatical Theories, Ph.D. Diss., MIT.
Frazier, Lyn and Keith Rayner 1987. "Resolution of Syntactic Category Ambiguities: Eye Movements in Parsing Lexically Ambiguous Sentences", Journal of Memory and Language, 26:505-526.
Kashkett, Michael 1991. A Parameterised Parser for English and Warlpiri, Ph.D. Diss., MIT.
Kilbury, James 1986. "Category Cooccurrence Restrictions and the Elimination of Metarules", in Proc. of COLING, 50-55.
Knuth, Donald 1965. "On the Translation of Languages from Left to Right", Information and Control, 8.
Phillips, John 1987. "A Computational Representation for GPSG", DAI Research Paper 316.
Ristad, Eric 1990. Computational Structure of Human Language, MIT AI Lab, TR 1260.
Shieber, Stuart 1986. "A Simple Reconstruction of GPSG", in Proc. of COLING, 211-215.
Thompson, Henry 1982. "Handling Metarules in a Parser for GPSG", in Proc. of COLING.
Tomita, Masaru 1985. Efficient Parsing for Natural Language, Kluwer, Hingham, MA.
INCREMENTAL DEPENDENCY PARSING Vincenzo Lombardo Dipartimento di Informatica - Universita" di Torino C.so Svizzera 185 - 10149 Torino - Italy e-mail: [email protected] Abstract The paper introduces a dependency-based grammar and the associated parser and focusses on the problem of determinism in parsing and recovery from errors. First, it is shown how dependency-based parsing can be afforded, by taking into account the suggestions coming from other approaches, and the preference criteria for parsing are briefly addressed. Second, the issues of the interconnection between the syntactic analysis and the semantic interpretation in incremental processing are discussed and the adoption of a TMS for the recovery of the processing errors is suggested. THE BASIC PARSING ALGORITHM The parser has been devised for a system that works on the Italian language. The structure that results from the parsing process is a dependency tree, that exhibits syntactic and semantic information. The dependency structure: The structure combines the traditional view of dependency syntax with the feature terms of the unification based formalisms (Shieber 86): single attributes (like number or tense) appear inside the nodes of the tree, while complex attributes (like grammatical relations) are realized as relations between nodes. The choice of a dependency structure, which is very suitable for free word order languages (Sgall et al. 86), reflects the intuitive idea of a language with few constraints on the order of legal constructions. Actually, the flexibility of a partially configurational language like Italian (that can be considered at an intermediate level between the totally configurational languages like English and the totally inflected free-ordered Slavonic languages) can be accounted for with a relaxation of the strong constraints posed by a constituency grammar (Stock 1989) or by constraining to a certain level a dependency grammar. Cases of topicalization, like un dolce di frutta ha ordinato il maestro a cake with fruits has ordered the teacher and in general all the five permutations of the "basic" (i.e. more likely) SVO structure of the sentence are so common in Italian, that it seems much more economical to express the syntactic knowledge in terms of dependency relations. Every node in the structure is associated with a word in the sentence, in such a way that the relation between two nodes at any level is of a head&modifier type. The whole sentence has a head, namely the verb, and its roles (the subj is included) are its modifiers. Every modifier in turn has a head (a noun, which can be a proper, common or pro-noun, for participants not marked by a preposition, a preposition, or a verb, in case of subordinate sentences not preceded by a conjunction) and further modifiers. Hence the dependency tree gives an immediate representation of the thematic structure of the sentence, thus being very suitable for the semantic interpretation. Such a structure also allows the application of the rules, based on grammatical relations, that govern complex syntactic phenomena, as revealed by the extensive work on Relational Grammar. The dependency grammar is expressed declaratively via two tables, that represent the relations of immediate dominance and linear order for pairs of categories. The constraints on the order between a head and one of its modifiers and between two modifiers of the same head are reflected by the nodes in the dependency structure. 
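As an illustration of this declarative format, the two tables can be pictured as simple relations over category pairs. The entries below are invented toy examples, not the system's actual grammar for Italian.

    # Toy sketch of the two declarative tables: immediate dominance between a
    # head category and a modifier category, and linear order between them.
    # None as an order means both orders are allowed, which is convenient for
    # a partially configurational language like Italian.
    IMMEDIATE_DOMINANCE = {
        ("VERB", "NOUN"),
        ("VERB", "PREP"),
        ("NOUN", "DET"),
        ("NOUN", "ADJ"),
    }

    LINEAR_ORDER = {
        ("VERB", "NOUN"): None,       # arguments may precede or follow the verb
        ("NOUN", "DET"): "before",    # determiners precede the noun
        ("NOUN", "ADJ"): None,
    }

    def may_attach(head, modifier, modifier_precedes_head):
        """Check a candidate head-modifier link against the two tables."""
        if (head, modifier) not in IMMEDIATE_DOMINANCE:
            return False
        order = LINEAR_ORDER.get((head, modifier))
        if order is None:
            return True
        return (order == "before") == modifier_precedes_head

    print(may_attach("NOUN", "DET", modifier_precedes_head=True))    # True
    print(may_attach("NOUN", "DET", modifier_precedes_head=False))   # False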
The formation of the complex structure that is associated with the nodes is accomplished by means of unification: the basic terms are originated by the lexicon and associated with the nodes. There exist principles that govern the propagation of the features in the dependency tree, expressed as conventions analogous to the GPSG ones.

The incremental parser: In the system, the semantic analysis, as well as the contextual and the anaphoric binding analysis, is interleaved with the syntactic parsing. The analysis is incremental, in the sense that it is carried out in a piecemeal strategy, by taking care of partial results too. In order to accomplish the incremental parsing and to build a dependency representation of the sentence, the linguistic knowledge of the two tables is compiled into more suitable data structures, called diamonds. Diamonds represent a redundant version of the linguistic knowledge of the tables: their graphical representation (see the figure) gives an immediate idea of how to employ them in an incremental parsing with a dependency grammar.

[Figure: the diamond for the category NOUN; the upper half lists its possible heads (e.g. VERB, PREP) and the lower half its possible modifiers (e.g. DET, ADJ, PREP, relative pronoun), with conditions on the category and features of the input word attached to the edges.]

The center of the diamond is instantiated as a node of the category indicated during the course of the analysis. The lower half of the diamond represents the categories that can be seen as modifiers of the center category. In particular, the categories on the left will precede the head, while the categories on the right will follow it (the numbers on the edges totally order the modifiers on the same side of the head). The upper half of the diamond represents the possible heads of the center: the categories on the right will follow it, while the categories on the left, which precede it, indicate the type of node that will become active when the current center has no more modifiers in the sentence.

The (incremental) parsing algorithm is straightforward: if the current node is of category X, the corresponding diamond (which has X as the center) individuates the possible alternatives in the parsing. The next input word can be one of its possible modifiers that follow it (right-low branch), its head (right-up branch), another modifier of its head, i.e. a sister (right-up branch and the following left-down one in the diamond activated immediately next), or a modifier of its head's head, an aunt (left-up branch). The edges are augmented with conditions on the input word (cat is a predicate which tests whether its category belongs to the set of categories allowed to be the left corner of the subtree headed by a node of the category that stands at the end of the edge). Constraints on features are tested on the node itself or stored for a subsequent verification. Which edge to follow in the currently active diamond is almost always a matter of a nondeterministic choice. Nondeterminism can be handled via the interaction of many knowledge sources that use the dependency tree as a shared information structure that represents the actual state of the parsing. Such a structure does not contain only syntactic, but also semantic information. For example, every node associated with a non-functional word points to a concept in a terminological knowledge base, and the thematic structure of the verb is explicitly represented by the edges of the dependency tree.
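The choice points offered by a diamond can be sketched roughly as follows. This is our illustration, not the original implementation: the diamond contents are invented toy entries, and the real structures also carry ordering numbers and feature conditions on the edges.

    # Rough sketch of diamond-driven incremental attachment.  For a category,
    # the diamond lists which categories may follow as its modifiers and which
    # may follow as its head; sisters and aunts are reached through the head.
    DIAMONDS = {
        "NOUN": {"following_modifiers": {"ADJ", "PREP"},
                 "following_heads": {"VERB", "PREP"}},
        "VERB": {"following_modifiers": {"NOUN", "PREP"},
                 "following_heads": set()},
    }

    def possible_moves(current_category, head_category, input_category):
        """Enumerate the attachments the next word may receive: a modifier of
        the current node, its head, or a sister (another modifier of its head)."""
        moves = []
        d = DIAMONDS[current_category]
        if input_category in d["following_modifiers"]:
            moves.append(("modifier-of", current_category))
        if input_category in d["following_heads"]:
            moves.append(("head-of", current_category))
        if head_category is not None:
            if input_category in DIAMONDS[head_category]["following_modifiers"]:
                moves.append(("sister-under", head_category))
        # an "aunt" move (modifier of the head's head) would be checked the same way
        return moves

    # Several moves may survive; this residual nondeterminism is what the
    # preference rules discussed below arbitrate.
    print(possible_moves("NOUN", "VERB", "PREP"))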
PARSING PREFERENCES Many preference strategies have been proposed in the literature for guiding parsers (Hobbs and Bear (1990) present a review). There are some preferences of syntactic (i.e. structural) nature, like the Right Association and the Minimal Attachment, that were among the first to be devised. Semantic preferences, like the assignment of thematic roles to the elements in the sentence 1 can contradict the expectations of the syntactic preferences (Schubert 1984). Contextual information (Crain, Steedman 1985) has also been demonstrated to affect the parsing of sentences in a series of psycholinguistic experiments. Lexical preferencing (Stock 1989) (van der Linden 1991) is particularly useful for the treatment of idiomatic expressions. Parsing preferences are integrated in the framework described above, by making the syntactic parser interact with condition-action rules, that implement such preferences, at each step on the diamond structure. This technique can be classified under the weak integration strategy (Crain, Steedman 1985) at the word level. The rules for the resolution of ambiguities that belong to the various knowledge sources analyze the state of the parsing on the dependency structure and take into account the current input word. For example, in the two sentences a) Giorgio le diede con riluttanza una ingente somma di denaro Giorgio (to) her gave with reluctance a big amount of money b) Giorgio le diede con riluttanza a Pamela Giorgio them gave with reluctance to Pamela the pronoun "le" can be a plural accusative or a singular dative case. In an incremental parser, when we arrive to "le" we are faced with an ambiguity that can be solved in a point which is arbitrarily ahead (impossibility of using Marcus' (1980) bounded 1As we have noted in the beginning, this is not an easy task to accomplish, since flexible languages like Italian feature a hardly predictable behavior in ordering: such assignments must sometimes be revised (see below). 292 lookahead), when we find which grammatical relation is needed to complete the subcategorization frame of the verb. Contextual information can help in solving such an ambiguity, by binding the pronoun to a referent, which can be singular or plural. Of course there could be more than one possible referent for the pronoun in the example above: in such a case there exist a preference choice based on the meaning of the verb and its selectional restrictions, and, in case of further ambiguity, a default choice among the possible referents. This choice must be stored as a backtracking point (in JTMS style) or as being an assumption of a context (in ATMS style), since it can reveal to be wrong in the subsequent analysis. The revision of the interpretation can be accomplished via a reason maintenance system. INTEGRATION WITH A REASON MAINTENANCE SYSTEM Zernik and Brown (1988) have described a possible integration of default reasoning in natural language processing. Their use of a JTMS has been criticized because of the impossibility to evaluate the best way in presence of multiple contexts, that are available at a certain point of the parsing process. This is the reason why more recent works have focussed on ATMS techniques (Charniak, Goldman 1988) and their relations to chart parsing (Wiren 1990). ATMS allows to continue the processing, by reactivating interpretations, which have been previously discarded. 
Currently, the integration with a reason maintenance system (which can possibly be more specialized for this particular task) is under study. The dependency structure contains the short term knowledge about the sentence at hand, with a "dependency" (in the TMS terminology) net that keeps the information on what relations have been inferred from what choices. Once that new elements contradict some previous conclusions, the dependency net allows to individuate the choice points that are meaningful for the current situation and to relabel, according to the IN and OUT separation, the asserted facts. In the example a) if we have disambiguated the pronoun "le" as an object, such an interpretation must be revised when we find the actual object Ca big amount of money"). One of the reasons for adopting truth maintenance techniques is that all the facts that must be withdrawn and the starting of a new analysis (in JTMS style) or to make relevant a new context in place of an old one (in ATMS) must take into account that partial analyses, not related to the changes at hand ("with reluctance" in the example), must be left unchanged. The specific substructure A, affected by the value chosen for the element B, and the element B are connected via a (direct or indirect) link in the "dependency" net. A change of value for B is propagated through the net toward all the linked substructures and, particularly, to A, which is to be revised. In the example a), once detected that "le" is an indirect object, and then that its referent must be female and singular, a new search in the focus is attempted according to this new setting. Hence, the revision process operates on both the syntactic structure, with changes of category and/or features values for the nodes involved (gender and number for "le") and of attachment points for whole substructures, and the semantic representation (from direct to indirect object relation), which has been previously built. ACKNOWLEDGEMENTS I thank prof. Leonardo Lesmo for his active and precious support. REFERENCES Charniak, E., Goldman, R. (1988). A Logic for Semantic Interpretation. In Proceedings of the 26th ACL (87-94). Crain, S., Steedman, M. (1985). On not being led up the Garden Path: The Use of Context by the psychological Syntax Processor. In D. Dowty, L. Karttunen and A. Zwicky (eds), Natural Language Parsing. Psychological, Computational, and Theoretical Perspectives, Cambridge University Press, Cambridge, England (320-358). Hobbs, J., Bear, J. (1990). Two Principles of Parse Preference. In COLING 90 (162-167). van der Linden, E., J. (1991). Incremental Processing and Hierarchical Lexicon. To appear. Marcus, M. (1980). A Theory of Syntactic Recognition for Natural Language. MIT Press, Cambridge, Massachussets. Schubert, L. (1984). On parsing preferences. In COLING 84 (247-250). Sgall, P., Haijcova, E. and Panevova, J. (1986). The Meaning of the Sentence in its Semantic and Pragmatic Aspects. D. Reidel Publishing Company. Shieber, S., M. (1986). An Introduction to Unification-Based Approach to Grammar. CSLI Lecture Notes 4, CSLI, Stanford. Stock, O. (1989). Parsing with flexibility, dynamic strategies and idioms in mind. In Computational Linguistics 15 (1-19). Wiren, M. (1990). Incremental Parsing and Reason Maintenance. In COLING 90 (287-292). Zernik, U., Brown, A. (1988). Default Reasoning in Natural Language Processing. In COLING 88 (801- 805). 293
DOCUMENTATION PARSER TO EXTRACT SOFTWARE TEST CONDITIONS Patricia Lutsky Brandeis University Digital Equipment Corporation 111 Locke Drive LMO2-1/Lll Marlboro, MA 01752 OVERVIEW This project concerns building a document parser that can be used as a software engineer- ing tool. A software tester's task frequently involves comparing the behavior of a running system with a document describing the behav- ior of the system. If a problem is found, it may indicate an update is required to the document, the software system, or both. A tool to generate tests automatically based on documents would be very useful to software engineers, but it re- quires a document parser which can identify and extract testable conditions in the text. This tool would also be useful in reverse en- gineering, or taking existing artifacts of a soft- ware system and using them to write the spec- ification of the system. Most reverse engineer- ing tools work only on source code. However, many systems are described by documents that contain valuable information for reverse engi- neering. Building a document parser would al- low this information to be harvested as well. Documents describing a large software project (i.e. user manuals, database dictionaries) are often semi-formatted text in that they have fixed-format sections and free text sections. The benefits of parsing the fixed-format por- tions have been seen in the CARPER project (Schlimmer, 1991), where information found in the fixed-format sections of the documents de- scribing the system under test is used to ini- tialize a test system automatically. The cur- rent project looks at the free text descriptions to see what useful information can be extracted from them. PARSING A DATABASE DICTIONARY The current focus of this project is on ex- tracting database related testcases from the database dictionary of the XCON/XSEL con- figuration system (XCS) (Barker & O'Connor, 294 1989). The CARPER project is aimed at build- ing a self-maintaining database checker for the XCS database. As part of its processing, it ex- tracts basic information contained in the fixed- format sections of the database dictionary. This project looks at what additional testing information can be retrieved from the database dictionary. In particular, each attribute de- scription contains a "sanity checks" section which includes information relevant for test- ing the attribute, such as the format and al- lowable values of the attribute, or information about attributes which must or must not be used together. If this information is extracted using a text parser, either it will verify the ac- curacy of CARPER's checks, or it will augment them. The database checks generated from a docu- ment parser will reflect changes made to the database dictionary automatically. This will be particularly useful when new attributes are added and when changes are made to attribute descriptions. (Lutsky, 1989) investigated the parsing of manuals for system routines to extract the maximum allowed length of the character string parameters. Database dictionary pars- ing represents a new software domain as well as a more complex type of testable information. SYSTEM ARCHITECTURE The overall structure of the system is given in Figure 1. The input to the parser is a set of system documents and the output is testcase information. The parser has two main domain- independent components, one a testing knowl- edge module and one a general purpose parser. 
It also has two domain-specific components: a domain model and a sublanguage grammar of expressions for representing testable informa- tion in the domain. Figure 1 Document Parser System XCS database dictionary which concern these test conditions. Input .................................. ~. Output ! Domain Independent ! I i I' Testing knowledge i , i ' Parser I i. i * i 1 ! Domain Dependent i , , 1 i! Subfanguage grammar I i] Domain Model 1 i L. ................................. I II (Documents)~ 0 Canonical sentences 0 Additions to test system For this to be a successful architecture, the domain-independent part must be robust enough to work for multiple domains. A person work- ing in a new domain should be given the frame- work and have only to fill in the appropriate domain model and sublanguage grammar. The grammar developed does not need to parse the attribute descriptions of the input text exhaustively. Instead, it extracts the spe- cific concepts which can be used to test the database. It looks at the appropriate sections of the document on a sentence-by-sentence ba- sis. If it is able to parse a sentence and de- rive a semantic interpretation for it, it re- turns the corresponding semantic expression. If not, it simply ignores it and moves on to the next sentence. This type of partial pars- ing is well suited to this job because any infor- mation parsed and extracted will usefully aug- ment the test system. Missed testcases will not adversely impact the test system. COMBINATION CONDITIONS In order to evaluate the effectiveness of the document parser, a particular type of testable condition for database tests was chosen: legal combinations of attributes and classes. These conditions include two or more attributes that must or must not be used together, or an at- tribute that must or must not be used for a class. The following are example sentences from the 1. If BUS-DATA is defined, then BUS must also be defined. 2. Must be used if values exist for START- ADDRESS or ADDRESS-PRIORITY attributes. 3. This attribute is appropriate only for class SYNC-COMM. 4. The attribute ABSOLUTE-MAX-PER-BUS must also be defined. Canonical forms for the sentences were devel- oped and are listed in Figure 2. Examples of sentences and their canonical forms are given in Figure 3. The canonical form can be used to generate a logical formula or a representation appropriate for input to the test system. Figure 2 Canonical sentences ATTRIBUTE must [not] be defined if ATTRIBUTE is [not] defined. ATTRIBUTE must [not] be defined for CLASS. ATTRIBUTE can only be defined for CLASS. Figure 3 Canonical forms of example sentences Sentence: If BUS-DATA is defined then BUS must also be defined. Canonical form: BUS must be defined if BUS-DATA is defined. Sentence: This attribute is appropriate only for class SYNC-COMM. Canonical form: BAUD-RATE can only be defined for class SYNC-COMM. THE GRAMMAR Since we are only interested in retrieving spe- cific types of information from the documen- tation, the sublanguage grammar only has to 295 cover the specific ways of expressing that in- formation which are found in the documents. As can be seen in the list of example sentences, the information is expressed either in the form of modal, conditional, or generic sentences. In the XCS database dictionary, sentences de- scribing legal combinations of attributes and classes use only certain syntactic constructs, all expressible within context-free grammar. The grammar is able to parse these specific types of sentence structure. 
These sentences also use only a restricted set of semantic concepts, and the grammar specifi- cally covers only these, which include negation, value phrases Ca value of,") and verbs of def- inition or usage ("is defined," "is used"). They also use the concepts of attribute and class as found in the domain model. Two specific lex- ical concepts which were relevant were those for "only," which implies that other things are excluded from the relation, and "also," which presupposes that something is added to an al- ready established relation. The semantic pro- cessing module uses the testing knowledge, the sublanguage semantic constructs, and the do- main model to derive the appropriate canonical form for a sentence. The database dictionary is written in an in- formal style and contains many incomplete sentences. The partially structured nature of the text assists in anaphora resolution and el- lipses expansion for these sentences. For ex- ample, "Only relevant for software" in a san- ity check for the BACKWARD-COMPATIBLE attribute is equivalent to the sentence "The BACKWARD-COMPATIBLE attribute is only relevant for software." The parsing system keeps track of the name of the attribute be- ing described and it uses it to fill in missing sentence components. EXPERIMENTAL RESULTS Experiments were done to investigate the utility of the document parser. A portion of the database dictionary was analyzed to determine the ways the target concepts are expressed in that portion of the document. Then a gram- mar was constructed to cover these initial sen- tences. The grammar was run on the entire document to evaluate its recall and precision in identifying additional relevant sentences. The outcome of the run on the entire document was 296 used to augment the grammar, which can then be run on successive versions of the document over time to determine its value. Preliminary experiments using the grammar to extract information about the allowable XCS attribute and class combinations showed that the system works with good recall (six of twenty-six testcases were missed) and pre- cision (only two incorrect testcases were re- turned). The grammar was augmented to cover the additional cases and not return the incorrect ones. Subsequent versions of the database dictionary will provide additional data on its effectiveness. SUMMARY A document parser can be an effective soft- ware engineering tool for reverse engineering and populating test systems. Questions re- main about the potential depth and robust- ness of the system for more complex types of testable conditions, for additional document types, and for additional domains. Experi- ments in these areas will investigate deeper representational structures for modal, condi- tional, and generic sentences, appropriate do- main modeling techniques, and representa- tions for general testing knowledge. ACKNOWLEDGMENTS I would like to thank James Pustejovsky for his helpful comments on earlier drafts of this paper. REFERENCES Barker, Virginia, & O'Connor, Dennis (1989). Expert systems for configuration at DIGITAL: XCON and beyond. Communications of the ACM, 32, 298-318. Lutsky, Patricia (1989). Analysis of a sublanguage grammar for parsing software documentation. Unpublished master's thesis, Harvard University Extension. Schlimmer, Jeffrey (1991) Learning meta knowl- edge for database checking. Proceedings of AAAI 91, 335-340.
A Linguistic and Computational Analysis of the German "Third Construction"*

Owen Rambow
Department of CIS, University of Pennsylvania
Philadelphia, PA 19104, USA
rambow@linc.cis.upenn.edu

*This work was supported by the following grants: ARO DAAL 03-89-C-0031; DARPA N00014-90-J-1863; NSF IRI 90-16592; and Ben Franklin 91S.3078C-1. I would like to thank Bob Frank and Aravind Joshi for fruitful discussions relating to this paper.

1 The Linguistic Data

For German, most transformational linguistic theories such as GB posit center-embedding as the underlying word order of sentences with embedded clauses:

    Weil ich [das Fahrrad zu reparieren] versprochen habe
    because I the bike (acc) to repair promised have
    'Because I promised to repair the bike'

However, far more common is a construction in which the entire subordinate clause is extraposed: Weil ich ti versprochen habe, [das Fahrrad zu reparieren]i. In addition, a third construction is possible, which has been called the "third construction", in which only the embedded verb, but not its nominal argument, has been extraposed: Weil ich das Fahrrad ti versprochen habe [zu reparieren]i. A similar construction can also be observed if there are two levels of embedding. In this case, the number of possible word orders increases from 3 to 30, 6 of which are shown in Figure 1. Of the 30 sentences, 7 are clearly ungrammatical (marked "*"), and 3 are extremely marginal, but not "flat out" (marked "?*"). The remaining 20 are acceptable to a greater or lesser degree (marked "ok" or "?"). No attempt has been made in the linguistic or computational literature to account for this full range of data.

2 A Linguistic TAG Analysis

Following [den Besten and Rutten 1989], [Santorini and Kroch 1990] argue that the third construction, rather than being a morphological effect of clause union, is in fact a syntactic phenomenon. The construction derives from two independently motivated syntactic operations, scrambling and (remnant) extraposition. In this work, I have implemented this suggestion in a variant of multi-component TAG (MC-TAG, [Weir 1988]) defined in [Lee 1991], which I will call SI-TAG. In SI-TAG, as in MC-TAG, the elementary structures are sets of trees, which can be initial or auxiliary trees. Contrary to regular MC-TAG, in SI-TAG the trees can also be adjoined into trees from the same set (set-internal adjunction). Furthermore, the trees can be annotated with dominance constraints (or "links"), which hold between foot nodes of auxiliary trees and nodes of other trees. These constraints must be met when the tree set is adjoined. The following SI-TAG accounts for the German data. We have 5 elementary sets: for the two verbs that subcategorize for clauses, versuchen 'to try' and versprechen 'to promise', there are two sets each, representing the center-embedded and extraposed versions. For reparieren 'to repair', there is only one set. Sample sets can be found in Figure 2. The dominance links are shown by dotted lines.

[Figure 2: Sample tree sets for versprechen 'to promise' and versuchen 'to try' with extraposed subordinate clause; the dominance links are drawn as dotted lines in the original figure.]

This analysis rules out those sentences that are ungrammatical, since the dominance constraints would be circular and could not be satisfied. Derivations are possible for the sentences that are acceptable.
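The way circular dominance constraints rule out a derivation can be made concrete with a small consistency check. This is our illustration, not the SI-TAG machinery itself; the node names and the link format are invented.

    # Our illustration: after composing tree sets, the dominance links must be
    # satisfiable, i.e. the "dominates" relation collected from the derivation
    # must not be circular.
    def links_satisfiable(links):
        """links: iterable of (above, below) pairs of node names.
        Returns False if the dominance requirements are circular."""
        graph = {}
        for above, below in links:
            graph.setdefault(above, set()).add(below)

        visited, on_path = set(), set()

        def cyclic(node):
            if node in on_path:
                return True
            if node in visited:
                return False
            visited.add(node)
            on_path.add(node)
            result = any(cyclic(nxt) for nxt in graph.get(node, ()))
            on_path.discard(node)
            return result

        return not any(cyclic(n) for n in list(graph))

    # A derivation whose links require a to sit above b and b above a is rejected.
    print(links_satisfiable([("a", "b"), ("b", "c")]))   # True
    print(links_satisfiable([("a", "b"), ("b", "a")]))   # False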
However, the analysis also provides derivations for the three sentences that are extremely marginal, but not ungrammatical. Since these sentences can be derived by a sequence of 3 licit steps, the combination of any two of which is also licit, a syntactic analysis cannot insightfully rule them out. Instead, I would like to explore a processing-based analysis. A processing account holds two promises: first, it should account for the differences in degree among the acceptable sentences; second, it should rule out the extremely marginal sentences.

Figure 1: An excerpt from the data.
    (i)     Weil ich das Fahrrad zu reparieren zu versuchen versprochen habe      ok
    (iv)    Weil ich das Fahrrad zu versuchen zu reparieren versprochen habe      ?
    (xvi)   Weil ich versprochen habe, zu versuchen, das Fahrrad zu reparieren    ok
    (xxiii) Weil ich zu versuchen versprochen habe, das Fahrrad zu reparieren     ?
    (xxv)   Weil ich das Fahrrad zu versuchen versprochen habe zu reparieren      ?*
    (xxvii) Weil zu versuchen ich das Fahrrad versprochen habe zu reparieren      *

3 A Processing Account Based on Bottom-Up EPDAs

[Joshi 1990] proposes to model human sentence processing with an Embedded Pushdown Automaton (EPDA), the automaton that recognizes tree adjoining languages. He defines the Principle of Partial Interpretation (PPI), which stipulates that structures are only popped from the EPDA when they are a properly integrated predicate-argument structure. Furthermore, it requires that they be popped only when they are either the root clause or the immediately embedded clause of the previously popped structure. Before extending this approach to the extraposition cases, I will recast it in terms of a closely related automaton, the Bottom-up EPDA (BEPDA).[1] The BEPDA consists of a finite-state control and of a stack of stacks. There are two types of moves: either an input is read and pushed onto a new stack on top of the stack of stacks, or a fixed number of stacks below and above a designated stack on the stack of stacks is removed and a new symbol is pushed onto the designated stack, which is now the top stack (an "unwrap" move). The operation of this automaton will be illustrated on the German center-embedded sentence N1 N2 N3 V3 V2 V1.[2] The moves of the BEPDA are shown in Figure 3. The three nouns are read in, and each is pushed onto a new stack on top of the stack of stacks (steps 1-3). When V3 is read, it is combined with its nominal argument and replaces it on the top stack (step 4). The PPI prevents V3*° from being popped from the automaton, since V3*° is not the root clause and V2 has not yet been popped. V2 is then read and pushed onto a new stack (step 5a). In the next move (5b), N2, V3*° and V2 (i.e., V2 and its nominal and clausal complements) are unwrapped, and the combined V2*° is placed on top of the new top stack (the one formerly containing V3*°). A similar move happens in steps 6a and 6b. Now, V1*° can be popped from the automaton in accordance with the PPI. (Recall that V1*° contains its clausal argument, V2*°, which in turn contains its clausal argument, V3*°, so that at this point all input has been processed.) In summary, the machine operates as follows: it creates a new top stack for each input it reads, and unwraps whenever and as soon as this is possible.

[1] I am indebted to Yves Schabes for suggesting the use of the BEPDA.
[2] I will abbreviate the lexemes so that, for example, sentence (i) will be represented as N1 N3 V3 V2 V1. As in [Joshi 1990], an asterisk (e.g., V1*) denotes a verb not lacking any overt nominal complements. In extension to this notation, a circle (e.g., V1°) denotes a verb not lacking any clausal complements.

Figure 3: BEPDA moves for N1 N2 N3 V3 V2 V1.
    1   [N1
    2   [N1 [N2
    3   [N1 [N2 [N3
    4   [N1 [N2 [V3*°
    5a  [N1 [N2 [V3*° [V2
    5b  [N1 [V2*°
    6a  [N1 [V2*° [V1
    6b  [V1*°

Using a BEPDA rather than an EPDA has two advantages: first, the data-driven bottom-up automaton represents a more intuitive model of human sentence processing than the top-down automaton; second, the grammar that corresponds to the BEPDA analysis is the TAG grammar proposed independently on linguistic grounds, as shown in Figure 4.[3] The unwrap in move 5a/b corresponds to the adjunction of tree β2 to tree α3 at the root node of α3 (shown by the arrow), and the unwrap in move 6a/b to the adjunction of tree β1 to tree β2.

[Figure 4: Derivation for German center-embedding: an initial tree α3 spanning N3 and V3, and auxiliary trees β2 (for N2 and V2) and β1 (for N1 and V1); the arrows show the sequence of adjunctions.]
[3] In the interest of conciseness, VP nodes and empty categories have been omitted.

Let us consider how the BEPDA account can be extended to the extraposition cases, such as sentence (xxiii), N1 V2 V1 N3 V3. If we simply use the BEPDA for center-embedding described above, we get the sequence of moves in Figure 5. In move 3a, we can unwrap the nominal argument and verb of the matrix clause, which is popped in move 3b in accordance with the PPI. In move 3c, the clause of V2 can also be popped. Then, the remaining noun and verb are simply read and popped.
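The center-embedded run of Figure 3 can be simulated with a few lines of code. This is a toy of ours, not Rambow's automaton: it hard-codes the read-then-unwrap regime for the center-embedded order only, the subcategorization table is an assumption, and the extraposition and third-construction cases would need the additional unwraps discussed below.

    # Toy stack-of-stacks simulation of the BEPDA run in Figure 3 on the
    # center-embedded string N1 N2 N3 V3 V2 V1.  Each verb takes one nominal
    # argument; V1 and V2 also take a clausal complement, V3 does not.
    TAKES_CLAUSE = {"V1": True, "V2": True, "V3": False}

    def bepda_center_embedding(tokens):
        stacks = []                            # stack of stacks; top is the right end
        for tok in tokens:
            stacks.append([tok])               # read move: new singleton stack on top
            if tok.startswith("V"):            # a verb triggers an unwrap move
                verb = stacks.pop()[0]
                clause = stacks.pop()[0] if TAKES_CLAUSE[verb] else None
                noun = stacks.pop()[0]         # the verb's nominal argument
                stacks.append([(verb, noun, clause)])
            print(tok, stacks)
        return stacks

    # Reproduces the configurations of steps 4, 5b and 6b in Figure 3:
    # a V3 structure, then a V2 structure containing it, then a single V1 structure.
    bepda_center_embedding(["N1", "N2", "N3", "V3", "V2", "V1"])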
In extension to this notation, a circle (e.g., 111") denotes a verb not lacking any clausal complements. 1 [Na 2 [Na [N2 3 (Na [N2 4 (N~ (N2 5a [N1 [N2 5b [N~ [W* 6a [N1 [1/2"* 6b [W* INs [W* [W [v1 [½ Figure 3: BEPDA moves for N1 N2 Na Va V21"1 whenever and as soon as this is possible. Using a BEPDA rather than an EPDA has two advan- tages: first, the data-driven bottom-up automaton repre- sents a more intuitive model of human sentence processing than the top-down automaton; second, the grammar that corresponds to the BEPDA analysis is the TAG grammar proposed independently on linguistic grounds, as shown in Figure 4 a. The unwrap in move 5afo corresponds to the adjunction of tree /~2 to tree ota at the root node of ~3 (shown by the arrow), and the unwrap in Move 6a/b to the adjunction of tree/31 to tree/~2. S ~ S ~ S ~-mmm N 3 V 3 N 2 S N 1 S S V 2 S V 1 Figure 4: Derivation for German Center-Embedding Let us consider how the BEPDA account can be ex- tended to the extraposition cases, such as sentence (xxiii), NtV2V1N3Va. If we simply use the BEPDA for center- embedding described above, we get the sequence of moves in Figure 5. In move 3a, we can unwrap the nominal ar- gument and verb of the matrix clause, which is popped in move 3b in accordance with the PPI. In move 3c, the clause of V2" can also be popped. Then, the remaining noun and verb are simply read and popped. If we use any of the metrics proposed in [Joshi 1990] (such as the sum of the number of moves that input el- ements are stored in the stack) we predict that sentence 3In the interest of conciseness, VP nodes and empty categories have been omitted. 298 1 [~rl 2 [~q [W 3a [Aq [W [v~ 3b [V¢ W 3c [W 4 [I~3 5 Iv3" Figure 5: BEPDA moves for N1VzVtNaV3 (xxiii) is easier to process than sentence (i), which appears to be correcL It is easy to see how this analysis extends to sentence (xvi). Its processing would be predicted to be the easiest possible, and in fact it is the word order by far preferred by German speakers. Now let us turn to the third construction cases. If we assume the PPI, the only way for a simple TAG to derive the relevant word orders (e.g., N1N2V1V2) is by an analy- sis corresponding to verb raising as employed in Dutch. In Section 2, I mentioned linguistic evidence against a verb-raising analysis for German. Processing considera- tions also speak against this approach: we would have to postulate that German speakers can either use the German center-embedding strategy, or the Dutch verb-raising strat- egy. This would mean that German speakers should be as good at cross-serial dependencies as at center-embedding. However, in German at levels of embedding beyond 2, the center-embedding construction is clearly preferred. We are left with the conclusion that we must go beyond simple TAGs, as was in fact proposed in Section 2. Therefore, a simple BEPDA will not handle such cases either, and we will need an extension of the automaton. This extension will be explained by way of an example, sentence (iv). N1, Na, V2 and Va are read in and placed on new top stacks (moves 1 - 4a). (Popping I/2" would violate the PPI.) Now we unwrap V2* and combine it with 1/3". This yields 1/2°: while formerly V2* did not lack any nominal arguments (since it has none of its own), ]/2° now has its clausal complement, but it is lacking a nominal comple- ment (namely Va's) 4. The reason why Na and V3 can't be unwrapped around V~ is that Va does not subcatego- rize for a clausal complement. 
We then unwrap N3 around V~ and get V~** in step 4c. We can then unwrap and pop the matrix clause, and then pop Vz** in the usual manner. The grammar corresponding to the BEPDA of Figure 6 is shown in Figure 7 (the arrows again show the sequence of adjunctions): we see that the deferred incorporation of Na corresponds to the use of a tree set for the clause of V3. Finally, let us consider the extremely marginal sentence (xxv), N1NaV2V1Va. Here, the automaton as defined so far would simply read in the input elements and push them on separate stacks. At no point can a clause be unwrapped (because both verb/noun pairs are too far apart), and the extension proposed to handle the third construction, the deferred incorporation of nominal arguments, cannot apply, 4This operation can be likened to the operation of function composition in a categorial framework. 1 [Na 2 [N1 [Ns 3 [Na [N~ 4a IN1 IN3 4b [Na [JV3 a¢ [N~ [W* 5 IV2** [W [W [W [~* Figure 6: BEPDA moves for N1 N31/2 V31/1 V N a S V z S S V~ Figure 7: Derivation for NtNaV2VaV1 either. The automaton rejects the string, as desired. 4 Current and Future Work In summary, the linguistic analysis correctly predicts which sentences are ungrammatical, and the processing analy- sis shows promise for correctly ruling out the extremely marginal sentences, and for accounting for the differences in acceptability among the remaining sentences. Immediate further goals include testing the coverage of this approach, and exploring the relation between the proposed extension to the BEPDA and the form of the SI-TAG grammar. References [Besten and Rut~n 1989] Besten, Hans den and Rutten, Jean, 1989. On verb raising, extraposition and free word order in Dutch. In Jaspers, Dany (editor), Sentential complementation and the lexicon, pages 41-56. Foris, Dordrecht. [Joshi 1990] Joshi, Aravind K., 1990. Processing Crossed and Nested Dependencies: an Automaton Perspective on the Psycholinguistic Results. Language and Cognitive Processes. [Lee 1991] Lee, Young-Suk, 1991. Scrambling and the Adjoined Argument Hypothesis. Thesis Proposal, Uni- versity of Pennsylvania. [Santorini and Kr~h 19901 Santorini, Beatrice and Kroch, Anthony, 1990. Remnant Extraposition in German. Uno published Paper, University of Pennsylvania. [Weir 1988] Weir, David J., 1988. Characterizing Mildly Context-Sensitive Grammar Formalisms. Phi) thesis, Department of Computer and Information Science, Uni- versity of Pennsylvania. 299